safety precautions when doing experiments with a ball mill and a particle size analyzer

ball milling - an overview | sciencedirect topics

Ball milling is often used not only for grinding powders but also for oxide or nanocomposite synthesis and/or structure/phase composition optimization [14,41]. Mechanical activation by ball milling is known to increase material reactivity and the uniformity of the spatial distribution of elements [63]. Thus, post-synthesis processing of materials by ball milling can help with the problem of minor admixtures that form, due to phase instability, during cooling in air after high-temperature sintering.

The ball milling technique, in its mechanical alloying and mechanical milling variants, was proposed worldwide in the 1970s for preparing a wide spectrum of powder materials and their alloys. In fact, the ball milling process is not new and dates back more than 150 years. It has been used in the size comminution of ore, in mineral dressing, in preparing talc powders, and in many other applications. It might be interesting for us to have a look at the history and development of ball milling and the corresponding products. The photo shows the STEM-BF image of a Cu-based alloy nanoparticle prepared by mechanical alloying (After El-Eskandarany, unpublished work, 2014).

Ball milling, a shear-force-dominant process in which the particle size is progressively reduced by impact and attrition, mainly consists of metallic balls (generally zirconia (ZrO2) or steel), acting as grinding media, and a rotating shell that creates centrifugal force. In this process, graphite (the precursor) is broken down by random strikes from the grinding media in the rotating shell; the resulting shear and compression forces help to overcome the weak van der Waals interaction between the graphite layers and cause their splintering. Fig. 4A schematically illustrates the ball milling process for graphene preparation. Initially, because of the large size of the graphite, compressive force dominates; as the graphite becomes fragmented, shear force cleaves it to produce graphene. However, excessive compressive force may damage the crystalline properties of graphene and hence needs to be minimized by controlling the milling parameters, e.g., milling duration, milling revolutions per minute (rpm), ball-to-graphite/powder ratio (B/P), initial graphite weight, and ball diameter. High-quality graphene can be achieved at low milling speed, though this increases the processing time, which is highly undesirable for large-scale production.
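To make the ball-to-powder ratio concrete, here is a minimal Python sketch of the parameter bookkeeping; the ball mass, ball count, and graphite charge are invented for illustration and are not taken from the source.

```python
# Minimal sketch: ball-to-powder (B/P) ratio bookkeeping for a milling run.
# All values are assumed for illustration, not taken from the source text.

def ball_to_powder_ratio(ball_mass_g: float, n_balls: int, powder_mass_g: float) -> float:
    """Return the B/P mass ratio of grinding media to powder charge."""
    return (ball_mass_g * n_balls) / powder_mass_g

# Example: forty 10 mm ZrO2 balls (~3.2 g each, assumed) against 10 g of graphite.
bp = ball_to_powder_ratio(ball_mass_g=3.2, n_balls=40, powder_mass_g=10.0)
print(f"B/P ratio = {bp:.1f}:1")  # -> 12.8:1
```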

Fig. 4. (A) Schematic illustration of graphene preparation via ball milling. SEM images of bulk graphite (B), GSs/E-H (C), and GSs/K (D); (E) and (F) are the respective TEM images; (G) Raman spectra of bulk graphite versus GSs exfoliated via wet milling in E-H and K.

Milling of graphite layers can be carried out in two modes: (i) dry ball milling (DBM) and (ii) wet ball milling (WBM). The WBM process requires a surfactant/solvent such as N,N-dimethylformamide (DMF) [22], N-methylpyrrolidone (NMP) [26], deionized (DI) water [27], potassium acetate [28], 2-ethylhexanol (E-H) [29], or kerosene (K) [29], and is comparatively simpler than DBM. Fig. 4B-D show the scanning electron microscopy (SEM) images of bulk graphite and of graphene sheets (GSs) prepared in E-H (GSs/E-H) and K (GSs/K), respectively; the corresponding transmission electron microscopy (TEM) images and the Raman spectra are shown in Fig. 4E-G, respectively [29].

Compared to this, DBM requires several milling agents, e.g., sodium chloride (NaCl) [30] and melamine [31,32], along with the metal balls to reduce the stress induced in the graphite microstructure, and hence requires additional purification to remove the exfoliant. Na2SO4 can be easily washed away with hot water [19], while ammonia borane (NH3BH3), another exfoliant used to weaken the van der Waals bonding between graphite layers, can be removed using ethanol [33]. Table 1 lists a few ball milling processes carried out using various milling agents (in the case of DBM) and solvents (WBM) under different milling conditions.

Ball milling of graphite with appropriate stabilizers is another mode of exfoliation in the liquid phase.21 Graphite is ground under high shear rates with millimeter-sized metal balls, causing exfoliation to graphene (Fig. 2.5) under wet or dry conditions. For instance, this method can be employed to produce nearly 50 g of graphene in the absence of any oxidant.22 In this method, graphite (50 g) was ground in the ball mill with oxalic acid (20 g) for 20 hours, but the separation of the unexfoliated fraction was not discussed.22 Similarly, solvent-free graphite exfoliations were carried out under dry milling conditions using KOH,23 ammonia borane,24 and so on. The list of graphite exfoliations performed by ball milling is given in Table 2.2. However, metallic impurities from the milling machinery are a major disadvantage of this method for certain applications.25

The reactive ball-milling (RBM) technique has been considered a powerful tool for the fabrication of metallic nitrides and hydrides via room-temperature ball milling. The flowchart shows the mechanism of the gas-solid reaction through RBM proposed by El-Eskandarany. In this model, the starting metallic powders are subjected to dramatic shear and impact forces generated by the ball-milling media. The powders are thereby disintegrated into smaller particles, and very clean, fresh, oxygen-free active surfaces are created. The reactive milling atmosphere (nitrogen or hydrogen gas) is gettered and absorbed completely by these atomically clean surfaces of the ball-milled metallic powders, reacting in the same manner as a gas-solid reaction owing to the mechanically induced reactive milling.

Ball milling is a grinding method that grinds nanotubes into extremely fine powders. During the ball milling process, the collisions between the tiny rigid balls in a concealed container generate localized high pressure. Usually, ceramic, flint pebble, or stainless steel balls are used.25 In order to further improve the quality of dispersion and introduce functional groups onto the nanotube surface, selected chemicals can be included in the container during the process. The factors that affect the quality of dispersion include the milling time, rotational speed, size of balls, and ball-to-nanotube ratio. Under certain processing conditions, the particles can be ground to as small as 100 nm. This process has been employed to transform carbon nanotubes into smaller nanoparticles, to generate highly curved or closed-shell carbon nanostructures from graphite, to enhance the saturation of lithium composition in SWCNTs, to modify the morphologies of cup-stacked carbon nanotubes, and to generate different carbon nanoparticles from graphitic carbon for hydrogen storage applications.25 Even though ball milling is easy to operate and suitable for powder polymers or monomers, process-induced damage to the nanotubes can occur.

Ball milling is a way to exfoliate graphite using lateral force, as opposed to the Scotch tape or sonication methods, which mainly use normal force. Ball mills, like the three-roll machine, are a common occurrence in industry for the production of fine particles. During the ball milling process, two factors contribute to the exfoliation. The main factor is the shear force applied by the balls; using only shear force, one can produce large graphene flakes. The secondary factor is the collisions that occur during milling. Harsh collisions can break these large flakes and can potentially disrupt the crystal structure, resulting in a more amorphous mass. So in order to produce good-quality, high-area graphene, the collisions have to be minimized.

The ball-milling process is common in grinding machines as well as in reactors where various functional materials can be created by mechanochemical synthesis. A simple milling process reduces both CO2 generation and energy consumption during materials production. Herein a novel mechanochemical approach 1-3) to produce sophisticated carbon nanomaterials is reported. It is demonstrated that unique carbon nanostructures, including carbon nanotubes and carbon onions, are synthesized by high-speed ball-milling of steel balls. It is considered that the gas-phase reaction takes place around the surface of the steel balls under the local high temperatures induced by the collision-friction energy of the ball-milling process, which results in phase-separated unique carbon nanomaterials.

Conventional ball milling is a traditional powder-processing technique mainly used for reducing particle sizes and for mixing different materials. The technique is widely used in the mineral, pharmaceutical, and ceramic industries, as well as in scientific laboratories. The HEBM technique discussed in this chapter is a newer technique, developed initially for producing metastable materials that cannot be produced by thermal-equilibrium processes, and is thus very different from the conventional ball milling technique. HEBM was first reported by Benjamin [38] in the 1960s. So far, a large range of new materials has been synthesized using HEBM. For example, oxide-dispersion-strengthened alloys are synthesized using a powerful high-energy ball mill (attritor) because conventional ball mills could not provide sufficient grinding energy [38]. Intensive research into the synthesis of new metastable materials by HEBM was stimulated by the pioneering work on the amorphization of Ni-Nb alloys conducted by Koch et al. in 1983 [39]. Since then, a wide spectrum of metastable materials has been produced, including nanocrystalline [40], nanocomposite [41], and nanoporous phases [42], supersaturated solid solutions [43], and amorphous alloys [44]. These new phase transformations induced by HEBM are generally referred to as mechanical alloying (MA). At the same time, it was found that at room temperature HEBM can activate chemical reactions that are normally only possible at high temperatures [45]. This is called reactive milling or mechanochemistry. Reactive ball milling has produced a large range of nanosized oxide [46], nitride [47], hydride [48], and carbide [49] particles.

The major differences between conventional ball milling and HEBM are listed in Table 1. The impact energy of HEBM is typically 1000 times higher than the conventional ball milling energy. The dominant events in conventional ball milling are particle fracturing and size reduction, which actually correspond only to the first stage of HEBM. A longer milling time is therefore generally required for HEBM. In addition to milling energy, control of the milling atmosphere and temperature is crucial in order to create the desired structural changes or chemical reactions. The table shows that HEBM can cover most of the work normally performed by conventional ball milling; conventional ball milling equipment, however, cannot be used to conduct any HEBM work.

Different types of high-energy ball mills have been developed, including the SPEX vibrating mill, the planetary ball mill, the high-energy rotating mill, and attritors [50]. For nanotube synthesis, two types of HEBM mills have been used: a vibrating ball mill and a rotating ball mill. The vibrating-frame grinder (Pulverisette 0, Fritsch) is shown in Fig. 1a. This mill uses only one large ball (50 mm in diameter), and the material of the ball and vial can be stainless steel or tungsten carbide (WC) ceramic. The milling chamber, as illustrated in Fig. 1b, is sealed with an O-ring so that the atmosphere can be changed via a valve. The pressure is monitored with an attached gauge during milling.

The milling intensity is defined as

I = Mb · Vmax · f / Mp

where Mb is the mass of the milling ball, Vmax the maximum velocity of the vial, f the impact frequency, and Mp the mass of the powder. The milling intensity is a very important parameter for MA and reactive ball milling. For example, full amorphization of a crystalline NiZr alloy can only be achieved with a milling intensity above an intensity threshold of 510 m/s² [52]. The amorphization process during ball milling can be seen in the transmission electron microscopy (TEM) images in Fig. 2a, which were taken from samples milled for different lengths of time. The TEM images show that the size and number of NiZr crystals decrease with increasing milling time, and full amorphization is achieved after milling for 165 h. The corresponding diffraction patterns in Fig. 2b confirm this gradual amorphization process. However, when milling below the intensity threshold, a mixture of nanocrystalline and amorphous phases is produced. This intensity threshold depends on milling temperature and alloy composition [52].

Figure 2. (a) Dark-field TEM images of Ni10Zr7 alloy milled for 0.5, 23, 73, and 165 h in the vibrating ball mill with a milling intensity of 940 m/s². (b) Corresponding electron diffraction patterns [52].
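As a quick illustration of this definition, a minimal Python sketch evaluates the milling intensity and compares it with the amorphization threshold quoted above; the ball mass, vial velocity, impact frequency, and powder mass are invented values, not data from the source.

```python
# Milling intensity I = Mb * Vmax * f / Mp, following the definition in the text.
# All numeric inputs are assumed for illustration.

AMORPHIZATION_THRESHOLD = 510.0  # m/s^2, threshold quoted for NiZr [52]

def milling_intensity(ball_mass_kg: float, v_max_m_per_s: float,
                      impact_freq_hz: float, powder_mass_kg: float) -> float:
    """Return the milling intensity in m/s^2."""
    return ball_mass_kg * v_max_m_per_s * impact_freq_hz / powder_mass_kg

I = milling_intensity(ball_mass_kg=0.5, v_max_m_per_s=1.0,
                      impact_freq_hz=20.0, powder_mass_kg=0.01)
print(f"I = {I:.0f} m/s^2 -> {'above' if I > AMORPHIZATION_THRESHOLD else 'below'} threshold")
```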

Fig. 3 shows a rotating steel mill and a schematic representation of milling action inside the milling chamber. The mill has a rotating horizontal cell loaded with several hardened steel balls. As the cell rotates, the balls drop onto the powder that is being ground. An external magnet is placed close to the cell to increase milling energy [53]. Different milling actions and intensities can be realized by adjusting the cell rotation rate and magnet position.

The atmosphere inside the chamber can be controlled, and an appropriate gas has to be selected for each milling experiment. For example, during the ball milling of pure Zr powder in an atmosphere of ammonia (NH3), a series of chemical reactions occurs between Zr and NH3 [54,55]. The X-ray diffraction (XRD) patterns in Fig. 4 show the following reaction sequence as a function of milling time:

The mechanism of an HEBM process is quite complicated. During HEBM, material particles are repeatedly flattened, fractured, and welded. Every time two steel balls collide or one ball hits the chamber wall, they trap some particles between their surfaces. Such high-energy impacts severely deform the particles and create atomically fresh new surfaces, as well as a high density of dislocations and other structural defects [44]. The high defect density induced by HEBM can accelerate diffusion [56]. In addition, the deformation and fracturing of particles causes continuous size reduction and can lead to a reduction in diffusion distances. This can at least significantly reduce reaction temperatures, even if the reactions do not occur at room temperature [57,58]. Since newly created surfaces are most often very reactive and readily oxidize in air, HEBM has to be conducted in an inert atmosphere. It is now recognized that HEBM, along with other non-equilibrium techniques such as rapid quenching, irradiation/ion implantation, plasma processing, and gas deposition, can produce a series of metastable and nanostructured materials that are usually difficult to prepare by melting or conventional powder metallurgy methods [59,60]. In the next section, detailed structural and morphological changes of graphite during HEBM will be presented.

Ball milling and ultrasonication were used to reduce the particle size and narrow the distribution. During ball milling, the weight ratio (in grams) of balls to clay particles was 100:2.5, and the milling operation was run for 24 hours. The effect of different types of balls on particle size reduction and on narrowing the particle size distribution was studied. The milled particles were dispersed in xylene to disaggregate the clumps, and ultrasonication was then applied to the milled samples in xylene. The amplitude (80% and 90%), pulsation rate (5 s on / 5 s off, and 8 s on / 4 s off), and duration (15 min, 1 h, and 4 h) of the ultrasonication process were investigated with respect to the particle size distribution, and the optimum conditions for our laboratory were determined. A particle size analyzer based on the principles of laser diffraction was used, together with morphological studies, to characterize the nanoparticles.

particle size analysis - an overview | sciencedirect topics

Particle size analysis is a complex procedure involving sampling, dispersion, and accurate use of instruments. Because any one of these operations can be performed inadequately, reference materials have been developed in recent times. These are distinguished by having their size distribution measured very carefully by a number of expert laboratories. The reference materials have a certified size distribution with error bands and can be used to test the methods used to obtain a size distribution. Such materials are obtainable from the Community Bureau of Reference (BCR), from the Society of Japanese Industry, and from the National Institute of Standards and Technology (NIST).

The particle size analysis of JSC Mars-1 simulant dust measured using the Microtrac analyzer is shown in Figure 5.28. The d10, d50, and d90 values were found to be 1.22 µm, 9.06 µm, and 38.45 µm, respectively. The count median diameter (CMD) of the simulant dust measured using an ESPART analyzer was 3.66 µm (standard deviation 0.19, n = 15). Charge distribution measurements were also performed using the ESPART analyzer and are shown in Figure 5.29.
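For readers reproducing such measurements, a minimal Python sketch shows how d10/d50/d90 values can be read off a cumulative undersize curve by interpolation; the bin sizes and percentages below are invented, and a laser diffraction analyzer such as the Microtrac reports the measured curve directly.

```python
import numpy as np

# Minimal sketch: estimating d10/d50/d90 from a cumulative undersize curve by
# linear interpolation. The sizes and percentages are invented for illustration.
sizes_um = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
cum_pct  = np.array([2.0, 8.0, 18.0, 40.0, 55.0, 75.0, 95.0, 100.0])

d10, d50, d90 = np.interp([10.0, 50.0, 90.0], cum_pct, sizes_um)
print(f"d10 = {d10:.2f} µm, d50 = {d50:.2f} µm, d90 = {d90:.2f} µm")
```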

The net charge-to-mass ratio (Q/M) of the simulant dust measured with the ESPART analyzer was 2.7 µC/g, as shown in Figure 5.30. This charge distribution was the result of (1) inter-particle charging; (2) tribocharging during grinding; and (3) handling processes.

FIGURE 5.30. The net charge-to-mass ratio (in µC/g) of Mars dust simulant particles measured using an ESPART analyzer varied depending on the process conditions. The particles became charged primarily during the dispersion process.

In the dispersion process the dust particles became charged. Some of the experiments were performed by deliberately charging the particles with either a net positive or a net negative charge. When the dust was tribocharged against a Teflon surface, the particles were mostly charged positively; against a stainless steel surface, they were mostly charged negatively. When the particles were neutralized, most of them showed a charge close to zero. The performance of the screen was analyzed for dust with a net positive charge, with a net negative charge, and for mostly neutral particles (Fig. 5.31). It was found that the efficiency of the screen did not deteriorate for neutral particles [27-29].

A dust removal efficiency (DRE) of 85% was obtained for charged Mars dust simulant when excited by single-phase AC. The effect of particle size on DRE was also studied for a three-phase EDS system. A DRE of over 90% was achieved for an EDS with 1.27 mm electrode spacing (Fig. 5.31). Figure 5.32 shows the DRE with and without neutralization of the Mars dust simulant. The EDS system was found to be equally effective for charged and neutral particles.

FIGURE 5.32. Dust removal efficiency of a three-phase EDS with and without a charge neutralizer (electrode spacing = 1.27 mm, electrode width = 0.127 mm, peak-to-peak voltage = 1250 V, f = 4 Hz, run time = 60 s, count median (aerodynamic) diameter of the dust particles = 3.66 µm, d10 = 1.22 µm, d50 = 9.06 µm, d90 = 38.45 µm).

The method of particle size analysis based on the measurement of ultrasound attenuation is referred to as acoustic spectroscopy. The acoustic spectrometer generates ultrasound pulses that undergo attenuation in disperse systems due to interaction with the dispersed particles and the dispersion medium. The experimentally obtained attenuation can then be fitted to a particular theoretical model for the different attenuation mechanisms, and the particle size can be evaluated from the fit, for instance by using eqs. (V.46)-(V.49). Depending on the properties of a particular disperse system, different types of acoustic energy loss may prevail. For example, in suspensions of mineral oxides the attenuation of the acoustic signal occurs predominantly through viscous losses, while in emulsions thermal and (at high frequencies) scattering losses prevail. In such cases it is possible to simplify the theoretical treatment by considering only the factors that make a significant contribution to the attenuation, while neglecting the others.

An important advantage of the ultrasonic method of particle size analysis over other methods is its applicability to systems that are concentrated, electrically non-conductive, and optically opaque. Equations (V.46)-(V.49) indicate that the attenuations due to the different mechanisms of acoustic loss are proportional to the volume fraction of the dispersed phase. This dependence becomes critical for the evaluation of particle size in concentrated dispersions. A number of studies (see [26,27] and references therein) have shown that such proportionality does not hold over the entire range of volume fractions, which indicates that eqs. (V.46)-(V.49) are not suitable for the characterization of concentrated disperse systems.

Figure V-38 shows the measured attenuation (normalized by the frequency, f = 15 MHz) as a function of the weight and volume fractions for suspensions of rutile (TiO2) (a) and neoprene latex (b) particles. As one can see, the linear relationship between α/f and the volume fraction, φ, holds only up to φ ~ 10% for the rutile dispersion, and up to φ ~ 32% for the latex particles. According to Dukhin [27], such a remarkable difference in the validity of eqs. (V.46)-(V.49), representing the ECAH theory (see Chapter V, 7), for latex and rutile dispersions can be explained by the fact that in the case of rutile viscous losses of acoustic energy play the predominant role, while in the case of latex thermal losses prevail. The decay of the amplitude of the ultrasonic signal is determined by the thermal depth, δth, in the case of thermal losses, and by the viscous depth, δvisc, in the case of viscous losses (see Chapter V, 7). It was shown in [33] that for aqueous systems δvisc/δth = 2.6, which means that the shear waves propagate into the dispersion medium to a greater extent than the thermal waves. At higher volume fractions of the dispersed phase, the deviation of the α/f versus φ dependence from linearity occurs due to the appearance of particle-particle interactions. This means that at certain volume fractions the decaying shear and thermal waves from a given particle start to interact with the boundary layers of neighboring particles. Since shear waves propagate deeper into the dispersion medium than thermal waves, the shear waves start to sense the presence of other particles at lower volume fractions than the thermal waves do.

Fig. V-38. The experimentally measured attenuation-to-frequency ratio, α/f (f = 15 MHz), as a function of the weight and volume fractions of (a) TiO2 (rutile) and (b) neoprene latex suspensions in water. The frequency was kept constant at 15 MHz. Volume fractions, φ, are shown by the numbers over the data points.
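A minimal Python sketch of the linearity check discussed above: fit α/f against φ on the dilute points and flag where the data depart from the ECAH proportionality. All data values below are invented for illustration; a real acoustic spectrometer supplies the measured curve.

```python
import numpy as np

# Minimal sketch: where does alpha/f stop being proportional to the volume
# fraction phi (cf. Fig. V-38)? Data values are invented for illustration.
phi     = np.array([0.01, 0.03, 0.05, 0.08, 0.10, 0.15, 0.20])
alpha_f = np.array([0.10, 0.30, 0.50, 0.80, 1.00, 1.35, 1.60])  # arbitrary units

dilute = phi <= 0.05                          # assume the dilute limit holds here
coeffs = np.polyfit(phi[dilute], alpha_f[dilute], 1)
pred   = np.polyval(coeffs, phi)              # linear (ECAH-like) extrapolation

for p, meas, lin in zip(phi, alpha_f, pred):
    dev = (meas - lin) / lin
    label = "non-linear: particle-particle interactions" if abs(dev) > 0.05 else "linear"
    print(f"phi = {p:.2f}  alpha/f = {meas:.2f}  deviation = {dev:+.1%}  {label}")
```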

The illustrated example clearly shows the limitations of the ECAH theory for describing the attenuation in concentrated systems. Considerable effort has been made in recent years to alleviate the deficiencies of this theory and to develop theoretical relationships linking the attenuation to the particle size that take particle-particle interactions into account and can therefore be used for the analysis of concentrated disperse systems. In a number of studies [26,27,31,32], expressions for the attenuation coefficients, more complex than eqs. (V.46)-(V.49) and suitable for the analysis of concentrated systems, were derived. As in the case of electroacoustics, particle-particle interactions were modeled by employing appropriate cell models. The polydispersity of colloidal systems was also taken into account in these studies by solving the equations governing the propagation of ultrasonic waves in disperse systems for each particle size fraction individually. Good agreement between the particle size evaluated from attenuation measurements and that measured by other methods was observed [27].

It is also worth mentioning that particle size can be estimated from electroacoustic measurements of the phase angle of the colloid vibration potential (CVP). In this case, measurements at a single frequency yield a mean particle size, while measurements at multiple frequencies can yield the particle size distribution. Acoustic spectroscopy is, however, more practical for particle size analysis than electroacoustic CVP measurement.

Among other things, particle size analysis permits the detection of the agent of deposition (e.g., wind, river, sea) and the environment of deposition, such as beach, flood plain, or dune (Shackley 1975, p. 87). As mentioned above, well-sorted sand is often wind-deposited, and certain agents of deposition have particular sediment signatures. Mineral identification of individual grains can suggest a likely source of the sediments, especially if the regional geology is well known. Grain shape (roundness and flatness) can indicate the transport agent, depositional environment, and history. Grains can be, for example, platy or flat, angular, or granular. Roundness refers to the general grain surface curvature and is more a function of mineral composition (certain minerals are harder or softer than others and respond to friction differently), depositional history, and final depositional environment (Shackley 1975, p. 46). Grain surface textures, examined under high-power magnification, can be unworn and angular, suggesting a fresh or minimally transported history; rounded and glossy, the result of transport by running water; or matt-surfaced, the result of wind transport. A good review of the analytical methods of sediment analysis can be found in Shackley (1975).

In aerosol particle size analysis, in order to obtain a true sample of airborne particles, one needs to use isokinetic sampling. In isokinetic sampling, the velocity of the air carrying the particles in the free stream is matched by the velocity of the air entering the sampler inlet. The errors introduced by improper sampling inlet velocities are illustrated in Fig. 1.11.

Isokinetic sampling: In isokinetic sampling, the inlet of the sampling probe is aimed coaxially to the free-stream airflow, and the air velocity entering the inlet is the same as the free-stream velocity. In isokinetic air sampling, the particles follow the streamlines of the free air stream into the sampling inlet, and no particle sampling error occurs.

Super-isokinetic sampling: In super-isokinetic sampling, the inlet for the sampling probe is aimed coaxial to the free stream airflow but the air velocity entering the inlet is greater than the free stream velocity. In this case, small particles, which are able to follow the change in direction of the altered stream lines, are able to make the turn and are sampled by the inlet. Conversely, large particles, due to their greater inertia, cross the stream lines and are not sampled. As a result, the relative number of small particles is increased over that in the free stream, and the concentration of small particles, relative to large particles, is overestimated.

Sub-isokinetic sampling: In sub-isokinetic sampling, the inlet for the sampling probe is aimed coaxial to the free stream airflow but the air velocity entering the inlet is less than the free stream velocity. In this case, small particles, which are able to follow the change in direction of the altered stream lines, are able to make the turn and are not sampled by the inlet. Conversely, large particles, due to their greater inertia, cross the stream lines and are sampled. As a result, the relative number of large particles is increased over that in the free stream, and the concentration of large particles, relative to small particles, is overestimated.
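The three regimes above reduce to a comparison of the inlet and free-stream velocities. A minimal Python sketch of such a classifier follows; the 5% tolerance for calling a match isokinetic is our assumption, not a value from the source.

```python
# Minimal sketch: classify a sampling condition from the free-stream and inlet
# air velocities, following the definitions in the text. The 5% tolerance for
# an acceptable isokinetic match is an assumed value.

def sampling_regime(u_freestream_m_s: float, u_inlet_m_s: float,
                    tol: float = 0.05) -> str:
    """Classify sampling as isokinetic, super-isokinetic, or sub-isokinetic."""
    ratio = u_inlet_m_s / u_freestream_m_s
    if abs(ratio - 1.0) <= tol:
        return "isokinetic: representative sample"
    if ratio > 1.0:
        return "super-isokinetic: small particles over-represented"
    return "sub-isokinetic: large particles over-represented"

print(sampling_regime(2.0, 2.0))  # isokinetic
print(sampling_regime(2.0, 3.0))  # super-isokinetic
print(sampling_regime(2.0, 1.2))  # sub-isokinetic
```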

The limited volume of water used during particle size analysis contributes to the uncertainty due to undersampling. As we will discuss later in this chapter, the concentration of marine particles greater than size D typically decreases as D^-3. A representative concentration of such particles at a diameter of 10 µm is on the order of 10 particles/cm³; thus, a sample of a few cubic centimeters is likely to yield several tens of particles. However, at a diameter of 100 µm, the particle concentration is about 10 × (100/10)^-3 = 0.01 particles/cm³, and a single sample of several cubic centimeters is quite likely to yield 0 particles. This is directly supported by experimental evidence: it is difficult to catch a small fish with a 10-l Niskin-type water sampler, although one of us has done so once. That event's probability may have actually been strongly biased by the fish's curiosity, because the mean concentration of particles of that size (ESD ~ 2 × 10^4 µm) is extremely low: 10^-8 particles per cm³. The rarity of large particles in seawater prompted the sampling of very large water volumes with in situ filtration devices (e.g., Bishop et al. (1977)) in experiments aimed at determining the vertical particle flux in the ocean. The sampling time is sufficiently long in these cases to potentially cause substantial grazing by zooplankton and other feeders. Therefore, very-large-volume samplers, as well as sediment traps, which are deployed for long periods, frequently utilize means to prevent grazing and other disturbances to the samples collected on the filters.

Sampling insufficient volumes at low particle concentrations may introduce artifacts into the size distribution if the sampled volume cannot be adjusted on demand, for example in the image analysis of particles deposited by filtration on a membrane filter, or in an image of a fixed sample volume taken with in situ imaging. Let ΔD = (D + ΔD) - D be a size range. According to the definition of the size distribution, we have

n(D) = ΔN/ΔD

where ΔN is the average number concentration of particles with sizes in the range ΔD. In practice, n(D) is sometimes taken to be the value of ΔN/ΔD itself. Assume that the number concentration of particles is so small that when we repeatedly sample a volume VS we obtain either 0 or 1 particle. By sampling r volumes of water, we would count M particles:

M = m1 + m2 + ... + mr,    with each mj = 0 or 1

The minimum acceptable value of M is of course unity. Thus, we must examine a series of volumes VS until we find at least one particle. The number concentration of particles, ΔN, in a size range ΔD is then expressed as follows:

ΔN = M/(r·VS)
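A minimal Python simulation of this sample-until-you-count-one procedure, using the relation ΔN = M/(r·VS) reconstructed above; the true concentration and the sample volume are illustrative values.

```python
import random

# Minimal sketch: estimating a low number concentration by examining fixed
# sample volumes until at least one particle is counted. Values are invented.
random.seed(1)

true_conc = 0.01   # particles per cm^3 (order of ~100 µm marine particles)
V_s       = 5.0    # cm^3 examined per sample

M, r = 0, 0
while M < 1:                      # keep sampling until one particle is found
    r += 1
    # true_conc * V_s << 1, so a Bernoulli draw approximates the Poisson count
    M += 1 if random.random() < true_conc * V_s else 0

estimate = M / (r * V_s)          # delta_N = M / (r * V_s)
print(f"examined r = {r} samples; estimated concentration = {estimate:.4f} /cm^3")
```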

Now, let a particle size grid D0, D1, ..., Di, Di+1, ... be defined by the condition expressed in equation (5.13) (Di+1 = a·Di) and some arbitrary starting value D0. As already stated, such a grid is frequently used in natural water research because it provides roughly the same order of magnitude of particle volume within each size interval. Thus, ΔDi = (Di+1 - Di) = Di(a - 1), and

n(Di) = 1/(ri·VS·ΔDi) = 1/[ri·VS·Di(a - 1)]    (5.60)

where a is a constant, and the unity in the numerator stands for the smallest acceptable value of the total particle count M. The number of samples ri presumably increases with i because, for a typical PSD in natural waters, the number of particles decreases with increasing particle size, and we need to examine more volumes VS of water in order to keep the counting precision at a given level. Let us check whether this increase of the sampling volume on demand preserves the shape of the size distribution for a power-law size distribution n(D) = k·D^-m. With ri chosen so that ri·VS·ΔNi = 1, we have

n(Di) = 1/(ri·VS·ΔDi) = ΔNi/ΔDi = k·Di^-m · (a^(-m+1) - 1)/[(-m + 1)(a - 1)] ∝ Di^-m

i.e., the slope of the size distribution is correct. The value of the scale factor is correct only in the limit a → 1, i.e., when the width of the size interval ΔDi → 0 [use lim(a→1) (a^(-m+1) - 1)/(a - 1) = -m + 1]. Thus, in the variable-sampling-volume approach, applicable to particle sizing methods that permit real-time adjustment of the total sampled volume, there is no minimum-concentration limit.

However, the minimum-concentration limit applies to situations where a fixed sample volume is analyzed, such as in the image analysis of particles on membrane filters, or in in situ microphotography. In that case, we must replace ri·VS in Eq. (5.60) by a constant. Thus, the minimum-concentration size distribution estimate is expressed as

n_min(Di) = 1/(VF·ΔDi) = 1/[VF·(a - 1)] · Di^-1

where VF is the fixed volume of water analyzed. Such a size distribution is a power-law distribution with a slope of -1. Evidence for approaching such minimum-concentration size distributions was presented by Jackson et al. (1997).

Incidentally, equation (5.62) enables us to assess the quality of the approximation of n(D) by the histogram h(Di, ΔDi) = ΔNi/ΔDi [Eq. (5.4)]. We already know that the slope of that approximation is correct. However, the magnitude factor, k, of that estimate is not correct, as can be seen from the following equation, derived from (5.62) by setting ΔNi/ΔDi = k'·Di^-m:

k' = k · (a^(-m+1) - 1)/[(-m + 1)(a - 1)]

Consider typical values: m = 4 and a = 2^(1/3). Then k' ≈ 0.64k. The scale factor k is modified because we assign the ΔNi/ΔDi value to the particle size Di, which is the lower bound of the interval ΔDi, while that value really corresponds to some particle size within the interval. If we knew the slope of the size distribution, we could calculate the correct particle size, Di,correct. Indeed, by comparing k'·Di^-m = k·Di,correct^-m, the correct particle size can be calculated as follows:

Di,correct = Di · (k/k')^(1/m)
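A minimal Python sketch reproducing the arithmetic above: the histogram scale factor k'/k for a power-law distribution on a geometric grid, and the corrected particle size for one bin. The example bin edge of 10 µm is our choice for illustration.

```python
# Minimal sketch: scale factor k'/k and corrected bin size for a power-law
# size distribution n(D) = k * D**(-m) on a geometric grid D_{i+1} = a * D_i.

def scale_factor(m: float, a: float) -> float:
    """k'/k when the histogram value N_i/dD_i is assigned to the lower bin edge."""
    return (a ** (-m + 1) - 1.0) / ((-m + 1.0) * (a - 1.0))

m, a = 4.0, 2.0 ** (1.0 / 3.0)
kk = scale_factor(m, a)
print(f"k'/k = {kk:.3f}")                 # ~0.641, matching the text's 0.64

D_i = 10.0                                 # lower edge of one bin, µm (assumed)
D_correct = D_i * (1.0 / kk) ** (1.0 / m)  # from k' * D_i**-m == k * Dc**-m
print(f"corrected size for D_i = {D_i} µm: {D_correct:.2f} µm")
```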

One final critical factor in drug substance preformulation analysis, particle size, should be discussed briefly. The particle size of a drug substance can have a significant impact on a number of attributes of the drug product. Material with a larger particle size tends to have a slower intrinsic dissolution rate, which can significantly slow the dissolution of the API from the final dosage form. This slower dissolution rate can in turn affect the bioavailability of the drug product. Particle size and morphology can also affect critical drug product processing parameters.23

It is clear that the particle size should at least be controlled for consistency, if not optimized for both processing efficiency and rapid dissolution of the drug substance. These controls can take two general forms: the particle size can be controlled by the synthetic process, typically through the parameters of the final recrystallization step, or by milling the material to a certain particle size distribution.

This particle size control methodology should be monitored with an appropriate analytical technique, typically either a laser-scattering-based approach, sieve analysis of the milled material, or microscopic examination. It should be noted that each technique can produce a very different measured particle size distribution, because of the different fundamental properties observed by each technique. The differences between the techniques are outlined elsewhere.

Numerous products within the food and beverage industry require particle size analysis in order to ensure the consistency of the final product. Particle size is a critical characteristic, as it influences stability (in the case of emulsions such as high-concentration flavor additives), mouthfeel, and flavor. A number of different techniques are used for the analysis of food colloids, ranging from rheology to ultrasonic spectroscopy and dynamic light scattering (DLS). All of these techniques provide valuable information about the system they measure; however, they all require some kind of sample manipulation prior to measurement. In the late 1980s, a new light scattering method called diffusing wave spectroscopy (DWS) was proposed. This technique is similar to DLS in that it measures the time-dependent fluctuations in the intensity of scattered light and inversely relates these to the diffusion coefficient of the dispersed particles. Where the method differs from DLS is in its capability to size submicrometer particles at very high concentrations, where the sample is in the multiple scattering regime. DWS exploits the fact that, in the multiple scattering regime, the transport of light through the sample can be treated as diffusive. When treating the scattered light in this way, it is important to be aware that the functions describing the correlation of the time-dependent fluctuations in the scattered intensities depend on the geometry of the experimental setup.
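Like DLS, DWS ultimately converts a measured diffusion coefficient into a particle size through the Stokes-Einstein relation. A minimal Python sketch of that conversion step follows; the diffusion coefficient, temperature, and water viscosity are assumed illustrative values.

```python
import math

# Minimal sketch: Stokes-Einstein conversion of a diffusion coefficient into a
# hydrodynamic diameter, the sizing step behind DLS/DWS. Values are illustrative.
K_B = 1.380649e-23   # Boltzmann constant, J/K

def hydrodynamic_diameter(D_m2_s: float, T_K: float = 298.15,
                          eta_Pa_s: float = 8.9e-4) -> float:
    """Return the hydrodynamic diameter (m): d = kB*T / (3*pi*eta*D)."""
    return K_B * T_K / (3.0 * math.pi * eta_Pa_s * D_m2_s)

D = 2.0e-12          # m^2/s, assumed diffusion coefficient in water at 25 °C
d = hydrodynamic_diameter(D)
print(f"hydrodynamic diameter ≈ {d * 1e9:.0f} nm")  # ~245 nm
```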

The technique is still not fully established in the food industry; however, numerous studies of undiluted milk have used DWS to measure the size of casein micelles and to investigate the process of gelation. DWS has also been used to study the effect of various polysaccharides on the properties of oil-in-water emulsions and to detect structural differences at the micrometer level during destabilization experiments.

specific heat test experiment - the proper method | thermtest inc

When performing an experiment and carefully following the proper procedure, the results obtained should be relatively accurate. Many experiments require multiple trials, and some never result in a complete conclusion.

The specific heat test described in this article required many modifications and repetitions to produce conclusive results. The results obtained were relatively accurate, even taking into consideration the modification of using a calorimeter made of Styrofoam cups. Results from experiments using modifications can often be inaccurate or inconclusive, especially when there is only a single recorded data point. In this article we will go through proper data acquisition and analysis.

Even when using the proper procedure for this in-home experiment, it is unlikely that 100% accuracy will be achieved, owing to unavoidable heat exchange with the surroundings. Following this method can give an accuracy of approximately 80%, which is a respectable degree of accuracy for a simple DIY application.

A mass of water was measured and then poured into the calorimeter, where it remained until it reached room temperature. For the most accurate and readable results, there should be only enough water in the calorimeter to completely cover the sample.

A beaker filled with approximately 300 mL of water was placed on a hot plate. The stainless-steel sample used for this experiment was then placed in a test tube and set in a stand so that the majority of the tube was submerged in the beaker. The sample tube was positioned vertically to ensure that it did not come into contact with the bottom or sides of the beaker. Once the sample tube was placed properly, the hot plate was turned on.

During the initial trial of this experiment, a solid 25.22-gram cylindrical stainless-steel 316 sample was used. Using a test tube to hold the sample in the water (as shown above) did not lead to favorable results. This poor experimental outcome is likely due to the sample's radius being smaller than the radius of the test tube: the additional space around the sample added a layer of insulating air inside the test tube. This error could be averted if the sample were a powder or composed of smaller pieces.

However, if changes were made to the sample at this point in the experiment, the sample would be completely destroyed. For the remainder of the experiment, the sample was therefore held in place with a pair of insulated tongs instead of the test tube. Altering this element of the procedure for the remaining trials dramatically improved the results.

Once the water in the beaker was boiling, it was recommended to wait approximately 10 minutes to ensure the sample was evenly heated; the boiling water bath brings the sample to 100 degrees Celsius.

After 10 minutes, the test tube was detached from the stand and the sample was transferred into the calorimeter. The sample must be transferred safely but quickly, so that a minimal amount of heat is lost to the surrounding air. It is also crucial not to transfer any water from the beaker into the calorimeter: water from the beaker would add heat unrelated to the sample and alter the effective mass of water.

A Styrofoam cup calorimeter was used for the first two trials of this experiment. For the third and fourth trials, a metal thermos replaced the Styrofoam cups to determine the effect a conductive material had on the results.

Once the sample was stabilized in the enclosed calorimeter, data was collected from the thermometer. For the purposes of this experiment, data was recorded at 30-second intervals. However, this experiment is not time sensitive, so depending on the conditions, data can be recorded at other intervals. Data was recorded continuously until the temperature of the substance started to drop, indicating that the highest temperature had been reached.

Recording temperatures at fixed intervals is recommended over simply waiting for the temperature peak. A consistent recording interval ensures the most accurate results and minimizes the chance of error.

The data obtained from the temperature measurements was entered into Microsoft Excel (any spreadsheet program would suffice) and plotted for easy comparison between trials. The graphs produced from the data will reveal any errors that might have occurred during the experimental trials. If errors are observed, the experiment has to be repeated before continuing with the analysis.

Acceptable data will display a noticeable curve in the graph, indicating the relationship between temperature and time. Additional modifications that could improve the results include breaking the sample into smaller pieces, using a larger sample mass, or using less water in the calorimeter. All of these increase the temperature difference, producing more pronounced changes in temperature; the increased variation makes the results more noticeable and the calculations more accurate.

The time at which the peak temperature occurred is displayed clearly in the graphs of the experimental data. Depending on the compatibility of the data with the analysis software, a line of best fit can be produced through the peak temperature. Error bars are optional, depending on the purpose of the experiment and the results obtained. The majority of the error in the results is due to the unpreventable loss of heat to the air and to the calorimeter itself.

Figure 8: Graph displaying the relationship between temperature (°C) and time (minutes) during the fourth trial, using a thermos at room temperature as the calorimeter; tongs held the sample while it was heated.

The first law of thermodynamics states that all energy in the universe is conserved, and this law can be applied to the results of the experiment. When the sample cools down, energy is lost in the form of heat. As the law states, this energy does not simply disappear; in this experiment it is absorbed by the water. This increase in energy causes the water's temperature to rise. Measuring this change determines the amount of heat absorbed by the water, which in turn determines the heat lost by the stainless-steel sample.
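This energy balance is the method of mixtures: the heat lost by the sample equals the heat gained by the water, m_s·c_s·(T_s - T_f) = m_w·c_w·(T_f - T_w). A minimal Python sketch solves it for the sample's specific heat; the masses and temperatures below are illustrative stand-ins, not the article's recorded data.

```python
# Minimal sketch: method-of-mixtures energy balance. Heat lost by the sample
# equals heat gained by the water. All inputs are assumed illustrative values.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def specific_heat_sample(m_sample_kg: float, T_sample_C: float,
                         m_water_kg: float, T_water_C: float,
                         T_final_C: float) -> float:
    """Solve m_s*c_s*(T_s - T_f) = m_w*c_w*(T_f - T_w) for c_s, in J/(kg*K)."""
    q_water = m_water_kg * C_WATER * (T_final_C - T_water_C)
    return q_water / (m_sample_kg * (T_sample_C - T_final_C))

c_s = specific_heat_sample(m_sample_kg=0.02522,  # a 25.22 g steel sample
                           T_sample_C=100.0,     # from the boiling-water bath
                           m_water_kg=0.100, T_water_C=22.0, T_final_C=24.1)
print(f"c_sample ≈ {c_s:.0f} J/(kg*K)  (grade 316 reference: ~470)")
```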

For the first trial of the experiment, Styrofoam cups were used as the calorimeter, along with a test tube for holding the sample. The results from this trial were inconclusive and had a large amount of error, so modifications were made for the second trial in an attempt to produce viable results. In the second trial, a pair of insulated tongs replaced the test tube for holding the sample, and the results displayed substantially less error.

In the second trial, the water in the calorimeter also rested longer at room temperature before the data was collected. This trial produced a result of 305.4 joules per kilogram kelvin, which was 67.9% accurate relative to the theoretical value for grade 316 stainless steel, 470 J/(kg·K).

Another modification was used for the third and fourth trials to determine the change in accuracy when using a conductive calorimeter: a standard beverage thermos. The thermometer used to measure temperature was also upgraded to a digital one with long wire probes, which allowed the thermos to keep a tight seal. The first trial using this modification produced a result of 553.29 joules per kilogram kelvin, which is 84.9% accurate but above the theoretical value. Accuracy above the theoretical value indicated that an error had been made.

Using a thermos as the calorimeter means that the water takes longer to reach room temperature. It is debatable whether the water inside the thermos not being at room temperature would affect the results of the experiment. However, after further analysis, the size of the thermos was determined to be the cause of the error in the accuracy values. Due to the large size of the thermos, additional space remained inside the container where room-temperature air was trapped when the lid was placed on. This large volume of air was in direct contact with the water inside the thermos and affected the water's temperature.

A further modification was made for the fourth trial of this experiment to limit the error caused by the room-temperature air: only enough water to cover the sample was added, and care was taken to ensure that this water was at room temperature. This modification produced a result of 367.56 joules per kilogram kelvin, which is 78.2% accurate.

An accurate result was not the goal of these experiments. Science is a slow process, full of opportunities for mistakes, trial and error, and improvement. Scientific theories take years to develop and even longer for the scientific community to accept. Science is an art and should be played with; most earth-shattering discoveries were made by accident or with a different end goal in mind. For experiments such as the one described above, it is good practice to aim for 80% accuracy as an at-home value. When experimenting, it is key to have an open mind, roll with the punches, and, most of all, be safe.

Maxwell, James Clerk. Theory of Heat, pp. 57-67. Westport, Conn.: Greenwood Press, 1970. https://archive.org/details/theoryheat04maxwgoog/page/n77 . Discusses the conservation of heat; the form, function, and a bit of the history of calorimeters; and the Method of Mixtures. It is also a good book for understanding heat in general, and it is free.

shimadzu corporation

This website introduces our company's environmental contribution activities and initiatives. Shimadzu supplies analytical/measuring instruments and industrial machinery to solve environmental problems and to support renewable energy development.

With over 100 years of history in X-ray, we have pursued our passion for technology to develop solutions that lower dose, simplify workflow, and improve the patient experience through meaningful innovation.

This article highlights a unique neuroscience research project at University College London that uses fNIRS to measure the brain activity patterns of Shakespeare actors performing the same scenes multiple times. The goal is to obtain a new understanding about human social cognition and how social interactions might be different for individuals with autism.

Shimadzu is working to contribute to society through science and technology. From food safety to personal health, from improving the environment to developing industry, we are devising answers to the diverse challenges in society.

This website highlights Shimadzu's focus on advancing global collaborations in research & development for unique healthcare applications. Shimadzu Advanced Healthcare describes the synergistic combination of Shimadzu's core technologies in analytical science and medical diagnostic imaging for healthcare applications, including disease prevention, diagnosis, treatment, and prognosis, as well as drug discovery.

electronic balance use precautions - knowledge - zhengzhou nanbei instrument equipment co.,ltd

Electronic balances are used to measure the mass of objects and are widely used in companies and laboratories. They have the advantages of simple structure, convenience, practicality, and fast weighing. At present, many kinds of electronic balances are in use in China, domestic and imported, large and small; whether of high or low precision, the basic construction principle is the same. Properly installing, using, and maintaining an electronic balance so as to obtain correct weighing results is one of the effective ways to ensure product quality. In on-site verification work, it was found that many companies' balances were not installed, used, or maintained as required, resulting in large deviations in measurement data that exceeded the maximum allowable error required by the verification procedures. To enable staff who work with balances to obtain accurate weighing results and to extend the service life of the balance, the points that need attention when properly installing, using, and maintaining an electronic balance are as follows:

(2) Dust removal in the weighing chamber or near the magnetic steel should be done with a damp silk cloth or similar; do not let dust and dirt fall into the magnetic steel, as this can cause the balance to malfunction.


the working principle of hammer mills (step-by-step guide)

SaintyCo hammer mills are high-precision machines for grinding solid and hard granules. Our hammer mills guarantee uniform grinding, noiseless operation, and less heat buildup in all pharmaceutical processes.

Whether you need standard or customized hammer mills, SaintyCo offers many series for specialized shredding applications. cGMP compliance and innovative design make SaintyCo hammer mills the most sought-after in this industry.

Every part/component you see in the image above plays an integral role in the overall working principle of hammer mills. However, the milling process mainly takes place in the crushing chamber (part 3).

A hammer mill's crushing tools may be coupled directly to a motor or driven by a belt. As opposed to direct connection, belts can cushion the motor from shock and allow accurate speed adjustment.

In case you're new to hammer mills in the pharmaceutical and food processing industries, here are three crucial steps that will help you understand how this equipment works. Before that, you can watch this video to see how hammer mills work:

Basically, within this chamber, the material is hit by a repeated combination of knife/hammer impacts and collisions with the wall of the milling chamber. Moreover, particle-to-particle collisions play an instrumental role in this size reduction process.

In most cases, the mechanical process of reducing large particles into small ones may result in either a fine or a coarse finish. How, then, is this possible when you use the same pharmaceutical hammer mill equipment?
