Thermodynamic temperature
Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics.
Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can become no colder.[1][2] In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: one, proposed by Kelvin, that it does not depend on the properties of a particular material; two, that it refers to an absolute zero according to the properties of the ideal gas.
The International System of Units specifies a particular scale for thermodynamic temperature. It uses the kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy.
Roughly, the temperature of a body at rest is a measure of the mean of the energy of the translational, vibrational and rotational motions of matter's particle constituents, such as molecules, atoms, and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, make up the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a "degree of freedom". At equilibrium, each degree of freedom will have on average the same energy: 1⁄2kBT, where kB is the Boltzmann constant, unless that degree of freedom is in the quantum regime. The internal degrees of freedom (rotation, vibration, etc.) may be in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime except at extremely low temperatures (fractions of kelvins) and it may be said that, for most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles.
Overview
Temperature is a measure of the random submicroscopic motions and vibrations of the particle constituents of matter. These motions comprise the internal energy of a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical (i.e., non-quantum) degree of freedom of its constituent particles. "Translational motions" are almost always in the classical regime. Translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions. Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Thermodynamic temperature's null point, absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest; that is, they have minimal motion, retaining only quantum mechanical motion.[3] Zero kinetic energy remains in a substance at absolute zero (see Internal energy at absolute zero, below).
Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins (symbol: K). Many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale.
By international agreement, the unit kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water (water with a specified blend of hydrogen and oxygen isotopes). Absolute zero, the lowest possible temperature, is defined as being precisely 0 K and −273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things:
- It fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water;
- It establishes that one kelvin has precisely the same magnitude as a one-degree increment on the Celsius scale; and
- It establishes the difference between the two scales' null points as being precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C).
Temperatures expressed in kelvins (TK) are converted to degrees Rankine (T°R) simply by multiplying by 1.8 (T°R = 1.8 × TK). Temperatures expressed in degrees Rankine are converted to kelvins by dividing by 1.8 (TK = T°R ÷ 1.8).
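The conversions above can be captured in a few lines of code; the following Python sketch (illustrative only, using exactly the factors stated in this section) converts the triple point of water between the three scales.

```python
# Conversions between the kelvin, Rankine, and Celsius scales, using the
# exact relationships stated above (illustrative sketch).

def kelvin_to_rankine(t_k):
    """T in degrees Rankine = 1.8 x T in kelvins."""
    return 1.8 * t_k

def rankine_to_kelvin(t_r):
    """T in kelvins = T in degrees Rankine / 1.8."""
    return t_r / 1.8

def kelvin_to_celsius(t_k):
    """The two scales' null points differ by exactly 273.15."""
    return t_k - 273.15

print(kelvin_to_rankine(273.16))   # ~491.688 degrees Rankine (triple point of water)
print(kelvin_to_celsius(273.16))   # ~0.01 degrees Celsius
print(rankine_to_kelvin(491.688))  # ~273.16 K
```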
Practical realization
Although the kelvin and Celsius scales are defined using absolute zero (0 K) and the triple point of water (273.16 K and 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. ITS-90 is therefore designed to represent the thermodynamic temperature as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers.
For some types of thermometer the relationship between the property observed (e.g., length of a mercury column) and temperature, is close to linear, so for most purposes a linear scale is sufficient, without point-by-point calibration. For others a calibration curve or equation is required. The mercury thermometer, invented before the thermodynamic temperature was understood, originally defined the temperature scale; its linearity made readings correlate well with true temperature, i.e. the "mercury" temperature scale was a close fit to the true scale.
The relationship of temperature, motions, conduction, and thermal energy
The nature of kinetic energy, translational motion, and temperature
The thermodynamic temperature is a measure of the average energy of the translational, vibrational and rotational motions of matter's particle constituents (molecules, atoms, and subatomic particles). The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, contribute to the total internal energy (loosely, the thermal energy) of a substance. Thus, internal energy may be stored in a number of ways (degrees of freedom) within a substance. When the degrees of freedom are in the classical regime ("unfrozen") the temperature is very simply related to the average energy of those degrees of freedom at equilibrium. The three translational degrees of freedom are unfrozen except at the very lowest temperatures, and their kinetic energy is simply related to the thermodynamic temperature over the widest range. The heat capacity, which relates heat input and temperature change, is discussed below.
The relationship of kinetic energy, mass, and velocity is given by the formula Ek = 1⁄2mv2.[4] Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.
Except in the quantum regime at extremely low temperatures, the thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z–axis dimensions of space mean the particles move in the three spatial degrees of freedom. The temperature derived from this translational kinetic energy is sometimes referred to as kinetic temperature and is equal to the thermodynamic temperature over a very wide range of temperatures. Since there are three translational degrees of freedom (e.g., motion along the x, y, and z axes), the translational kinetic energy is related to the kinetic temperature by:

Ē = 3⁄2kBTk

where:
- Ē is the mean kinetic energy in joules (J) and is pronounced “E bar”
- kB = 1.3806504(24)×10−23 J/K is the Boltzmann constant and is pronounced “Kay sub bee”
- Tk is the kinetic temperature in kelvins (K) and is pronounced “Tee sub kay”
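As a numerical illustration of the relation just given (a sketch for illustration only, using the value of kB quoted above), the mean translational kinetic energy of a particle at a room temperature of 296.15 K works out to roughly 6 × 10−21 J:

```python
# Mean translational kinetic energy per particle, E = (3/2) * kB * T,
# evaluated at a room temperature of 296.15 K (illustrative sketch).

K_B = 1.3806504e-23  # Boltzmann constant, J/K (value quoted above)

def mean_translational_kinetic_energy(t_kelvin):
    """Mean kinetic energy (J) carried by the three translational degrees of freedom."""
    return 1.5 * K_B * t_kelvin

print(mean_translational_kinetic_energy(296.15))  # ~6.13e-21 J
```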
While the Boltzmann constant is useful for finding the mean kinetic energy of a particle, it's important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Figure 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s. However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x–axis to the right). This graph uses inverse speed for its x–axis so the shape of the curve can easily be compared to the curves in Figure 5 below. In both graphs, zero on the x–axis represents infinite temperature. Additionally, the x and y–axis on both graphs are scaled proportionally.
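The most probable speed quoted above follows from the Maxwell–Boltzmann distribution, for which the most probable speed is √(2kBT/m); the sketch below assumes that standard expression together with the atomic mass of helium-4 (an assumption not stated in the text).

```python
import math

# Most probable speed of the Maxwell-Boltzmann distribution, v_p = sqrt(2*kB*T/m),
# evaluated for helium atoms at 5500 K (illustrative sketch; assumes helium-4).

K_B = 1.3806504e-23   # Boltzmann constant, J/K
AMU = 1.6605388e-27   # atomic mass unit, kg

def most_probable_speed(t_kelvin, mass_kg):
    return math.sqrt(2.0 * K_B * t_kelvin / mass_kg)

m_helium = 4.002602 * AMU                      # mass of one helium-4 atom, kg
print(most_probable_speed(5500.0, m_helium))   # ~4780 m/s, i.e. about 4.78 km/s
```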
The high speeds of translational motion
Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast[5] and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool caesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.[6] Formulas for calculating the velocity and speed of translational motion are given in the following footnote.[7]
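The roughly 7 mm per second figure can be checked against the formulas in footnote [7]; the sketch below assumes those relations (vector-isolated mean velocity √(kBT/m) and mean speed √(8kBT/πm)) together with the mass of a caesium-133 atom, which is an assumption not stated in the text.

```python
import math

# Translational motion of caesium-133 atoms at 700 nK (illustrative sketch).
# Assumes the relations given in footnote [7]: vector-isolated mean velocity
# v = sqrt(kB*T/m) and mean speed s = sqrt(8*kB*T/(pi*m)).

K_B  = 1.3806504e-23    # Boltzmann constant, J/K
AMU  = 1.6605388e-27    # atomic mass unit, kg
M_CS = 132.90545 * AMU  # mass of one caesium-133 atom, kg

def mean_velocity(t_kelvin, mass_kg):
    return math.sqrt(K_B * t_kelvin / mass_kg)

def mean_speed(t_kelvin, mass_kg):
    return math.sqrt(8.0 * K_B * t_kelvin / (math.pi * mass_kg))

print(mean_velocity(700e-9, M_CS))  # ~0.0066 m/s, i.e. roughly 7 mm per second
print(mean_speed(700e-9, M_CS))     # ~0.011 m/s
```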
Because of their internal structure and flexibility, molecules can store kinetic energy in internal degrees of freedom which contribute to the heat capacity.
There are other forms of internal energy besides the kinetic energy of translational motion. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements. These are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom. Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called internal, the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as energy is removed from molecules, both their kinetic temperature (the temperature derived from the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active (i.e. unfrozen) degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.
The kinetic energy stored internally in molecules causes substances to contain more internal energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules' translational motions.[8] This extra thermal energy simply increases the amount of energy a substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity.
Different molecules absorb different amounts of thermal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, nitrogen, which is a diatomic molecule, has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Since the two internal degrees of freedom are essentially unfrozen, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases.[9] Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of thermal energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
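The five-thirds ratio follows from the equipartition theorem, under which each unfrozen degree of freedom contributes R/2 to the constant-volume molar heat capacity; the sketch below is an idealization (real gases deviate somewhat, as the measured values in footnote [9] show).

```python
# Constant-volume molar heat capacity from equipartition: Cv = (f/2) * R,
# where f is the number of unfrozen degrees of freedom (idealized sketch).

R = 8.314462  # molar gas constant, J/(mol*K)

def molar_heat_capacity_cv(degrees_of_freedom):
    return degrees_of_freedom / 2.0 * R

cv_monatomic = molar_heat_capacity_cv(3)  # helium, argon, ...     -> ~12.5 J/(mol*K)
cv_nitrogen  = molar_heat_capacity_cv(5)  # N2 at room temperature -> ~20.8 J/(mol*K)
print(cv_monatomic, cv_nitrogen, cv_nitrogen / cv_monatomic)  # ratio is 5/3
```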
The diffusion of thermal energy: Entropy, phonons, and mobile conduction electrons
Heat conduction is the diffusion of thermal energy from hot parts of a system to cold. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).
One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more.[10]
Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at a given substance's speed of sound. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient[11] and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.
Metals however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity.[12] Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1⁄1836th that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton wrote with his third law of motion,
Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction.
However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because conduction electrons are much less massive than atoms, thermal energy is readily borne by them. Additionally, because conduction electrons are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals that have an abundance of them.
The diffusion of thermal energy: Black-body radiation
Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see Table of common temperatures).
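The location of the peak follows Wien's displacement law, λpeak = b/T; the short sketch below uses the Wien displacement constant cited in the table's footnote [13] and reproduces several of the wavelengths listed in the table further down.

```python
# Peak emittance wavelength of a black body from Wien's displacement law,
# lambda_peak = b / T (illustrative sketch; b as cited in footnote [13]).

B_WIEN = 2.8977685e-3  # Wien displacement law constant, m*K

def peak_wavelength_nm(t_kelvin):
    return B_WIEN / t_kelvin * 1e9  # wavelength in nanometers

print(peak_wavelength_nm(273.16))  # ~10,608 nm (long-wavelength infrared)
print(peak_wavelength_nm(5778.0))  # ~501.5 nm (green light)
print(peak_wavelength_nm(2500.0))  # ~1159 nm (near infrared)
```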
Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times the radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system.
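The factor of 60 is simply the fourth power of the temperature ratio; a quick sketch:

```python
# Ratio of total radiant power emitted by a black body at two temperatures,
# per the Stefan-Boltzmann law: power scales as T^4 (illustrative sketch).

def radiant_power_ratio(t_hot_k, t_cold_k):
    return (t_hot_k / t_cold_k) ** 4

print(radiant_power_ratio(824.0, 296.0))  # ~60
```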
Table of thermodynamic temperatures
The full range of the thermodynamic temperature scale, from absolute zero to absolute hot, and some notable points between them are shown in the table below.
Temperature | kelvin | Peak emittance wavelength[13] of black-body photons
Absolute zero (precisely by definition) | 0 K | ∞ [3]
Coldest measured temperature[14] | 450 pK | 6,400 kilometers
One millikelvin (precisely by definition) | 0.001 K | 2.897 77 meters (Radio, FM band)[15]
Cosmic Microwave Background Radiation | 2.725 48(57) K | 1.063 mm (peak wavelength)
Water's triple point (precisely by definition) | 273.16 K | 10,608.3 nm (Long wavelength I.R.)
Incandescent lampA | 2500 K | 1160 nm (Near infrared)B
Sun’s visible surfaceC[16] | 5778 K | 501.5 nm (Green light)
Lightning bolt’s channel | 28,000 K | 100 nm (Far Ultraviolet light)
Sun’s core | 16 MK | 0.18 nm (X-rays)
Thermonuclear explosion (peak temperature)[17] | 350 MK | 8.3 × 10−3 nm (Gamma rays)
Sandia National Labs’ Z machineD[18] | 2 GK | 1.4 × 10−3 nm (Gamma rays)
Core of a high–mass star on its last day[19] | 3 GK | 1 × 10−3 nm (Gamma rays)
Merging binary neutron star system[20] | 350 GK | 8 × 10−6 nm (Gamma rays)
Gamma-ray burst progenitors[21] | 1 TK | 3 × 10−6 nm (Gamma rays)
Relativistic Heavy Ion Collider[22] | 1 TK | 3 × 10−6 nm (Gamma rays)
CERN’s proton vs. nucleus collisions[23] | 10 TK | 3 × 10−7 nm (Gamma rays)
Universe 5.391 × 10−44 s after the Big Bang | 1.417 × 1032 K | 1.616 × 10−26 nm (Planck frequency)[24]
A The 2500 K value is approximate.
B For a true blackbody (which tungsten filaments are not). Tungsten filaments' emissivity is greater at shorter wavelengths, which makes them appear whiter.
C Effective photosphere temperature.
D For a true blackbody (which the plasma was not). The Z machine's dominant emission originated from 40 MK electrons (soft x–ray emissions) within the plasma.
The heat of phase changes
The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is phase transitions, which are the potential energy of molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.
Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green.
At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,[25] converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy can't make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.
As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it's called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.[26] If the substance is one of the monatomic gases, (which have little tendency to form molecular bonds) the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.[27] Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.[28] And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.[29]
Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius (15.1 °F).
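The factors of roughly 80 and 540, and the 8.4 °C pool example, can be reproduced from commonly tabulated values for water's heat of fusion, heat of vaporization, and specific heat capacity; the numbers in the sketch below are those textbook values (assumptions, not figures taken from this article's references).

```python
# Water's latent heats compared with the energy for a 1 degree C temperature rise,
# plus the pool-evaporation example (sketch; assumes common textbook values).

C_WATER  = 4.181e3   # specific heat capacity of liquid water, J/(kg*K)
L_FUSION = 333.55e3  # heat of fusion of ice, J/kg
L_VAPOR  = 2.257e6   # heat of vaporization at 100 degrees C, J/kg

print(L_FUSION / C_WATER)  # ~80: melting ice takes ~80x the energy of a 1 degree rise
print(L_VAPOR / C_WATER)   # ~540: boiling takes ~540x the energy of a 1 degree rise

# Evaporating 20 mm of water from a 1.29 m deep pool (densities cancel,
# since the evaporated water and the remaining water are the same substance):
evaporated_depth_m = 0.020
pool_depth_m       = 1.29
delta_t = (evaporated_depth_m * L_VAPOR) / (pool_depth_m * C_WATER)
print(delta_t)             # ~8.4 degrees Celsius of cooling
```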
Internal energy
The total energy of all particle motion, translational and internal, including that of conduction electrons, plus the potential energy of phase changes, plus zero-point energy[3] comprises the internal energy of a substance.
Internal energy at absolute zero
As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;[30] and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When the particles of a substance are as close as possible to complete rest and retain only ZPE-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T=0).
Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero thermal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance, will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T=0 helium remains liquid at room pressure and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. Only if under at least 25 bar (2.5 MPa) of pressure will this latent thermal energy be liberated as helium freezes while approaching absolute zero. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid-solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.
The above complexities make for rather cumbersome blanket statements regarding the internal energy in T=0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice such as those with a closest-packed arrangement (see Fig. 8, above left) contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy.[3] [31] One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration).[32] Lastly, it is always true to say that all T=0 substances contain zero kinetic thermal energy.[3] [7]
Practical applications for thermodynamic temperature
Thermodynamic temperature is useful not only for scientists; it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay–Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold pressure of 200 kPa-gage, then in absolute terms (relative to a vacuum), its pressure is 300 kPa-absolute.[33][34][35] Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K⁄296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, a pressure of 320 kPa-absolute, which is 220 kPa-gage.
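Worked in code, the tire example looks as follows (an illustrative sketch; the 100 kPa figure used for the gauge-to-absolute conversion is an assumption consistent with the 200/300 kPa numbers above).

```python
# Gay-Lussac's law applied to the tire example above (illustrative sketch).
# Assumes 100 kPa of atmospheric pressure for the gauge <-> absolute conversion.

P_ATMOSPHERE_KPA = 100.0

def hot_tire_pressure_gauge(p_cold_gauge_kpa, t_cold_k, t_hot_k):
    p_cold_abs = p_cold_gauge_kpa + P_ATMOSPHERE_KPA  # gauge -> absolute
    p_hot_abs = p_cold_abs * (t_hot_k / t_cold_k)     # P/T is constant at fixed volume
    return p_hot_abs - P_ATMOSPHERE_KPA               # absolute -> gauge

print(hot_tire_pressure_gauge(200.0, 296.0, 316.0))   # ~220 kPa-gage (~320 kPa-absolute)
```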
Definition of thermodynamic temperature
The thermodynamic temperature is defined by the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales.

Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the heat flows between its individual particles cancel out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.
Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher temperature heat source, TH, and a lower temperature heat sink, TC, through a gas-filled piston. The work done per cycle is equal to the difference between the heat supplied to the engine by TH, qH, and the heat supplied to TC by the engine, qC. The efficiency of the engine is the work divided by the heat put into the system, or

Efficiency = wcy/qH = (qH − qC)/qH = 1 − qC/qH     (1)

where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH.
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency; that is to say, the efficiency is a function of the temperatures only:

qC/qH = q2/q1 = f(T1, T2)     (2)

In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. If this were not the case, then energy (in the form of Q) would be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles.

With this understanding of q1, q2 and q3, we note also that mathematically,

f(T1, T3) = q3/q1 = (q2/q1)(q3/q2) = f(T1, T2) f(T2, T3)
But the first function is NOT a function of T2, therefore the product of the final two functions MUST result in the removal of T2 as a variable. The only way is therefore to define the function f as follows:

f(T1, T2) = g(T2)/g(T1)

and

f(T2, T3) = g(T3)/g(T2)

so that

f(T1, T3) = g(T3)/g(T1) = q3/q1.

i.e. the ratio of heat exchanged is a function of the respective temperatures at which they occur. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale.
It is to be noted that such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic:

qC/qH = f(TH, TC) = TC/TH     (3)

Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:

Efficiency = 1 − qC/qH = 1 − TC/TH     (4)

Notice that for TC = 0 the efficiency is 100% and that the efficiency becomes greater than 100% for TC < 0, cases which are unrealistic. Subtracting the right hand side of Equation 4 from the middle portion and rearranging gives

qH/TH − qC/TC = 0
where the negative sign indicates heat ejected from the system. The generalization of this equation is Clausius theorem, which suggests the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by

dS = dqrev/T     (5)

where the subscript indicates heat transfer in a reversible process. The function S corresponds to the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics):

T = dqrev/dS

For a system in which the entropy S is a function S(E) of its energy E, the thermodynamic temperature T is therefore given by

1/T = dS/dE,
so that the reciprocal of the thermodynamic temperature is the rate of increase of entropy with energy.
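As a numerical illustration of Equation 4 above, the following sketch evaluates the ideal Carnot efficiency for two pairs of reservoir temperatures (the temperatures chosen are arbitrary examples).

```python
# Carnot efficiency from Equation 4: efficiency = 1 - T_C / T_H,
# with both temperatures expressed in kelvins (illustrative sketch).

def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(373.15, 273.15))  # ~0.27 between boiling and freezing water
print(carnot_efficiency(773.15, 293.15))  # ~0.62 for a 500 degree C source, 20 degree C sink
```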
History
- Ca. 485 BC: Parmenides in his treatise "On Nature" postulated the existence of primum frigidum, a hypothetical elementary substance source of all cooling or cold in the world.[36]
- 1702–1703: Guillaume Amontons (1663–1705) published two papers that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume / temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C—only 33.15 degrees short of the true value of −273.15 °C.
- 1742: Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level.
- 1744: Coincident with the death of Anders Celsius, the famous botanist Carl Linnaeus (1707–1778) effectively reversed[37] Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (Bureau international des poids et mesures, BIPM). The 9th CGPM (General Conference on Weights and Measures, Conférence générale des poids et mesures) and the CIPM (International Committee for Weights and Measures, Comité international des poids et mesures) formally adopted[38] degree Celsius (symbol: °C) in 1948.[39]
- 1777: In his book Pyrometrie (Berlin: Haude & Spener, 1779) completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.
- Circa 1787: Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering, but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2.
- 1802: Joseph Louis Gay-Lussac (1778–1850) published work (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's Law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).
- 1848: William Thomson, (1824–1907) also known as Lord Kelvin, wrote in his paper, On an Absolute Thermometric Scale, of the need for a scale whereby infinite cold (absolute zero) was the scale's null point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the kelvin thermodynamic temperature scale. It's noteworthy that Thomson's value of −273 was actually derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of −0.00366 expressed to five significant digits is −273.22 °C which is remarkably close to the true value of −273.15 °C.
- 1859: William John Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale.
- 1877–1884: Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics.
- Circa 1930s: Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed that absolute zero was equivalent to −273.15 °C.
- 1948: Resolution 3 of the 9th CGPM (Conférence Générale des Poids et Mesures, also known as the General Conference on Weights and Measures) fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale. [39]
- 1954: Resolution 3 of the 10th CGPM gave the kelvin scale its modern definition by choosing the triple point of water as its second defining point and assigned it a temperature of precisely 273.16 kelvins (what was actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C.
- 1967/1968: Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".
- 2005: The CIPM (Comité International des Poids et Mesures, also known as the International Committee for Weights and Measures) affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.
See also
- Absolute hot
- Absolute zero
- Planck temperature
- Hagedorn temperature
- Adiabatic process
- Black-body
- Boiling
- Boltzmann constant
- Brownian motion
- Carnot heat engine
- Chemical bond
- Condensation
- Convection
- Degrees of freedom
- Delocalized electron
- Diffusion
- Elastic collision
- Electron
- Energy
- Energy conversion efficiency
- Enthalpy
- Entropy
- Equipartition theorem
- Evaporation
- Fahrenheit
- First law of thermodynamics
- Freezing
- Gas laws
- Heat
- Heat conduction
- Heat engine
- Heat death of the universe
- Internal energy
- International System of Quantities
- ITS-90
- Ideal gas law
- Joule
- Kelvin
- Kinetic energy
- Latent heat
- Laws of thermodynamics
- Maxwell–Boltzmann distribution
- Melting
- Mole
- Molecule
- Orders of magnitude (temperature)
- Phase transition
- Phonon
- Planck's law of black-body radiation
- Potential energy
- Quantum mechanics
- Rankine scale
- Specific heat capacity
- Standard enthalpy change of fusion
- Standard enthalpy change of vaporization
- Stefan–Boltzmann law
- Sublimation
- Temperature
- Temperature conversion formulas
- Thermal conductivity
- Thermal radiation
- Thermodynamic beta
- Thermodynamic equations
- Thermodynamic equilibrium
- Thermodynamics
- Thermodynamics Category (list of articles)
- Timeline of heat engine technology
- Timeline of temperature and pressure measurement technology
- Triple point
- Universal gas constant
- Vienna Standard Mean Ocean Water (VSMOW)
- Wien's displacement law
- Work (Mechanical)
- Work (thermodynamics)
- Zero-point energy
Notes
- In the following notes, wherever numeric equalities are shown in concise form, such as 1.85487(14)×1043, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand.
- ^ Rankine, W. J. M., "A manual of the steam engine and other prime movers", Richard Griffin and Co., London (1859), p. 306–307.
- ^ William Thomson, 1st Baron Kelvin, "Heat", Adam and Charles Black, Edinburgh (1880), p. 39.
- ^ a b c d e While scientists are achieving temperatures ever closer to absolute zero, they can not fully achieve a state of zero temperature. However, even if scientists could remove all kinetic thermal energy from matter, quantum mechanical zero-point energy (ZPE) causes particle motion that can never be eliminated. Encyclopædia Britannica Online defines zero-point energy as the "vibrational energy that molecules retain even at the absolute zero of temperature". ZPE is the result of all-pervasive energy fields in the vacuum between the fundamental particles of nature; it is responsible for the Casimir effect and other phenomena. See Zero Point Energy and Zero Point Field. See also Solid Helium Archived 2008-02-12 at the Wayback Machine by the University of Alberta's Department of Physics to learn more about ZPE's effect on Bose–Einstein condensates of helium.
Although absolute zero (T=0) is not a state of zero molecular motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following thought experiment: two T=0 helium atoms in zero gravity are carefully positioned and observed to have an average separation of 620 pm between them (a gap of ten atomic diameters). It's an "average" separation because ZPE causes them to jostle about their fixed positions. Then one atom is given a kinetic kick of precisely 83 yoctokelvins (1 yK = 1×10−24 K). This is done in a way that directs this atom's velocity vector at the other atom. With 83 yK of kinetic energy between them, the 620 pm gap through their common barycenter would close at a rate of 719 pm/s and they would collide after 0.862 second. This is the same speed as shown in the Fig. 1 animation above. Before being given the kinetic kick, both T=0 atoms had zero kinetic energy and zero kinetic velocity because they could persist indefinitely in that state and relative orientation even though both were being jostled by ZPE. At T=0, no kinetic energy is available for transfer to other systems. The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors whereas ZPE is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of T>0 K gases. However, in T=0 condensed matter; e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 25 bar or 2.5 MPa), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy.
Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of thermal energy between "systems" (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero but there can never be a net outflow of thermal energy from such a system. Also, the peak emittance wavelength of black-body radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and black-body photons can no longer escape. Because of ZPE, however, virtual photons are still emitted at T=0. Such photons are called "virtual" because they can't be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum. However, even though a T=0 system emits zero-point radiation, no net heat flow Q out of such a system can occur because if the surrounding environment is at a temperature greater than T=0, heat will flow inward, and if the surrounding environment is at T=0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T=0 with the ZPE-induced spontaneous emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps in the understanding of how zero-point energy got its name: it is the vibrational energy matter retains at the zero-kelvin point. Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces, Daniel C. Cole, Physical Review A, 42 (1990) 1847.
- ^ At non-relativistic temperatures of less than about 30 GK, classical mechanics are sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a 1.0042 γ (gamma or Lorentz factor). Thus, the classic Newtonian formula for kinetic energy is in error less than half a percent for temperatures less than 30 GK.
- ^ Even room–temperature air has an average molecular translational speed (not vector-isolated velocity) of 1822 km/hour. This is relatively fast for something the size of a molecule considering there are roughly 2.42×1016 of them crowded into a single cubic millimeter. Assumptions: Average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumption's primary variables: An altitude of 194 meters above mean sea level (the world–wide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg (101.325 kPa) sea level–corrected barometric pressure.
- ^ Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice, A. Kastberg et al., Physical Review Letters 74 (1995) 1542 doi:10.1103/PhysRevLett.74.1542. It's noteworthy that a record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al.. of MIT) equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
- ^ a b The rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows: ṽ = √(kBT/m) (the vector-isolated mean velocity) and s̄ = √(8kBT/πm) (the mean speed), where:
- ṽ is the vector-isolated mean velocity of translational particle motion in m/s
- kB is the Boltzmann constant = 1.3806504(24)×10−23 J/K
- T is the thermodynamic temperature in kelvins
- m is the molecular mass of substance in kilograms
- s̄ is the mean speed of translational particle motion in m/s
- ^ The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (what can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a containers' wall, some of the kinetic energy borne in the molecule's internal degrees of freedom can constructively add to its translational motion during the instant of the collision and extra kinetic energy will be transferred into the container's wall. This would induce an extra, localized, impulse-like contribution to the average pressure on the container. However, since the internal motions of molecules are random, they have an equal probability of destructively interfering with translational motion during a collision with a container's walls or another molecule. Averaged across any bulk quantity of a gas, the internal thermal motions of molecules have zero net effect upon the temperature, pressure, or volume of a gas. Molecules' internal degrees of freedom simply provide additional locations where internal energy is stored. This is precisely why molecular-based gases have greater specific heat capacity than monatomic gases (where additional thermal energy must be added to achieve a given temperature rise).
- ^ When measured at constant-volume since different amounts of work must be performed if measured at constant-pressure. Nitrogen's CvH (100 kPa, 20 °C) equals 20.8 J mol−1 K−1 vs. the monatomic gases, which equal 12.4717 J mol−1 K−1. Citations: W.H. Freeman's Physical Chemistry, Part 3: Change (422 kB PDF, here Archived 2007-09-27 at Archive-It), Exercise 21.20b, p. 787. Also Georgia State University's Molar Specific Heats of Gases.
- ^ The speed at which thermal energy equalizes throughout the volume of a gas is very rapid. However, since gases have extremely low density relative to solids, the heat flux (the thermal power passing per area) through gases is comparatively low. This is why the dead-air spaces in multi-pane windows have insulating qualities.
- ^ Diamond is a notable exception. Highly quantized modes of phonon vibration occur in its rigid crystal lattice. Therefore, not only does diamond have exceptionally poor specific heat capacity, it also has exceptionally high thermal conductivity.
- ^ Correlation is 752 (W m⁻¹ K⁻¹)/(MS·cm), σ = 81, through a 7:1 range in conductivity. Value and standard deviation based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn, Co, Ni, Os, Fe, Pa, Pt, and Sn. Citation: Data from CRC Handbook of Chemistry and Physics, 1st Student Edition and this link to Web Elements' home page.
- ^ The cited emission wavelengths are for true black bodies in equilibrium. In this table, only the sun so qualifies. CODATA 2006 recommended value of 2.897 7685(51) × 10⁻³ m K used for Wien displacement law constant b.
- ^ A record cold temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium atoms was achieved in 2003 by researchers at MIT. Citation: Cooling Bose–Einstein Condensates Below 500 Picokelvin, A. E. Leanhardt et al., Science 301, 12 Sept. 2003, p. 1515. It’s noteworthy that this record’s peak emittance black-body wavelength of 6,400 kilometers is roughly the radius of Earth.
- ^ The peak emittance wavelength of 2.897 77 m corresponds to a frequency of 103.456 MHz.
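Both black-body figures above can be checked directly from the Wien displacement law, λ_max = b/T, and the wavelength–frequency relation f = c/λ. A minimal sketch, not part of the original notes:

```python
# Check of the peak-wavelength and frequency figures quoted above.
b = 2.8977685e-3          # Wien displacement law constant, m*K
c = 299792458.0           # speed of light, m/s

print(b / 450e-12)        # peak wavelength at 450 pK: ~6.44e6 m, roughly 6,400 km
print(c / 2.89777 / 1e6)  # frequency of a 2.89777 m peak wavelength: ~103.456 MHz
```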
- ^ Measurement was made in 2002 and has an uncertainty of ±3 kelvins. A 1989 measurement produced a value of 5777 ±2.5 K. Citation: Overview of the Sun (Chapter 1 lecture notes on Solar Physics by Division of Theoretical Physics, Dept. of Physical Sciences, University of Helsinki). Download paper (252 kB PDF Archived 2014-08-23 at the Wayback Machine)
- ^ The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the Teller–Ulam configuration (commonly known as a “hydrogen bomb”). Peak temperatures in Gadget-style fission bomb cores (commonly known as an “atomic bomb”) are in the range of 50 to 100 MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant Web page. All referenced data was compiled from publicly available sources.
- ^ Peak temperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term “bulk quantity” draws a distinction from collisions in particle accelerators wherein high “temperature” applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature was achieved over a period of about ten nanoseconds during “shot Z1137.” In fact, the iron and manganese ions in the plasma averaged 3.58 ±0.41 GK (309 ±35 keV) for 3 ns (ns 112 through 115). Citation: Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2 × 10⁹ Kelvin, M. G. Haines et al., Physical Review Letters 96, Issue 7, id. 075003. Link to Sandia’s news release. Archived 2006-07-02 at the Wayback Machine
- ^ Core temperature of a high–mass (>8–11 solar masses) star after it leaves the main sequence on the Hertzsprung–Russell diagram and begins the alpha process (which lasts one day) of fusing silicon–28 into heavier elements in the following steps: sulfur–32 → argon–36 → calcium–40 → titanium–44 → chromium–48 → iron–52 → nickel–56. Within minutes of finishing the sequence, the star explodes as a Type II supernova. Citation: Stellar Evolution: The Life and Death of Our Luminous Neighbors (by Arthur Holland and Mark Williams of the University of Michigan). Link to Web site. More informative links can be found here, and here Archived 2011-08-14 at the Wayback Machine, and a concise treatise on stars by NASA is here. Archived July 20, 2015, at the Wayback Machine
- ^ Based on a computer model that predicted a peak internal temperature of 30 MeV (350 GK) during the merger of a binary neutron star system (which produces a gamma-ray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20 km in diameter, and were orbiting around their barycenter (common center of mass) at about 390 Hz during the last several milliseconds before they completely merged. The 350 GK portion was a small volume located at the pair’s developing common core and varied from roughly 1 to 7 km across over a time span of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). It’s also noteworthy that at 350 GK, the average neutron has a vibrational speed of 30% of the speed of light and a relativistic mass (m) 5% greater than its rest mass (m₀). Citation: Torus Formation in Neutron Star Mergers and Well-Localized Short Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics, arXiv:astro-ph/0507099 v2, 22 Feb. 2006. Download paper (725 kB PDF) (from Cornell University Library’s arXiv.org server). To view a browser-based summary of the research, click here.
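The "30% of the speed of light" and "5% relativistic mass increase" figures can be roughly reproduced with the classical mean-speed estimate √(3 k_B T/m) for neutrons at 350 GK. This is only an order-of-magnitude consistency check (at these energies a fully relativistic treatment would differ somewhat) and is not part of the cited paper:

```python
from math import sqrt

k_B = 1.380649e-23    # Boltzmann constant, J/K
m_n = 1.674927e-27    # neutron rest mass, kg
c   = 299792458.0     # speed of light, m/s

v = sqrt(3 * k_B * 350e9 / m_n)       # classical mean-speed estimate at 350 GK
gamma = 1 / sqrt(1 - (v / c) ** 2)    # Lorentz factor at that speed
print(v / c)                          # ~0.31, i.e. about 30% of c
print(gamma)                          # ~1.05, i.e. about 5% above rest mass
```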
- ^ NewScientist: Eight extremes: The hottest thing in the universe, 07 March 2011, which stated “While the details of this process are currently unknown, it must involve a fireball of relativistic particles heated to something in the region of a trillion kelvin[s]”
- ^ Results of research by Stefan Bathe using the PHENIX detector on the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in Upton, New York, U.S.A. Bathe has studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the theory of the strong force that holds atomic nuclei together. Link to news release.
- ^ Citation: How do physicists study particles? Archived 2007-10-11 at the Wayback Machine by CERN.
- ^ The Planck frequency equals 1.854 87(14) × 10⁴³ Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.416 79(11) × 10³² K equates to a calculated b/T = λmax wavelength of 2.045 31(16) × 10⁻²⁶ nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.616 24(12) × 10⁻²⁶ nm.
- ^ Water's enthalpy of fusion (0 °C, 101.325 kPa) equates to 0.062284 eV per molecule, so adding one joule of thermal energy to 0 °C water ice causes 1.0021×10²⁰ water molecules to break away from the crystal lattice and become liquid.
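Both numbers follow from dividing the molar enthalpy of fusion (6.0095 kJ/mol, cited in the next note) by the Avogadro constant. A minimal arithmetic check, not part of the original note:

```python
# Per-molecule enthalpy of fusion of water and molecules melted per joule.
N_A   = 6.02214076e23     # Avogadro constant, 1/mol
eV    = 1.602176634e-19   # joules per electronvolt
H_fus = 6009.5            # enthalpy of fusion of water, J/mol

per_molecule = H_fus / N_A
print(per_molecule / eV)   # ~0.062284 eV per molecule
print(1.0 / per_molecule)  # ~1.0021e20 molecules melted per joule
```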
- ^ Water's enthalpy of fusion is 6.0095 kJ mol⁻¹ (0 °C, 101.325 kPa). Citation: Water Structure and Science, Water Properties, Enthalpy of fusion, (0 °C, 101.325 kPa) (by London South Bank University). Link to Web site. The only metals with enthalpies of fusion not in the range of 6–30 kJ mol⁻¹ are (on the high side): Ta, W, and Re; and (on the low side) most of the group 1 (alkali) metals plus Ga, In, Hg, Tl, Pb, and Np. Citation: This link to Web Elements' home page.
- ^ Xenon value citation: This link to WebElements' xenon data (available values range from 2.3 to 3.1 kJ/mol). It is also noteworthy that helium's heat of fusion of only 0.021 kJ/mol reflects such a weak bonding force that zero-point energy prevents helium from freezing unless it is under a pressure of at least 25 atmospheres.
- ^ CRC Handbook of Chemistry and Physics, 1st Student Edition and Web Elements.
- ^ H₂O specific heat capacity, Cp = 0.075327 kJ mol⁻¹ K⁻¹ (25 °C); enthalpy of fusion = 6.0095 kJ/mol (0 °C, 101.325 kPa); enthalpy of vaporization (liquid) = 40.657 kJ/mol (100 °C). Citation: Water Structure and Science, Water Properties (by London South Bank University). Link to Web site.
- ^ Mobile conduction electrons are delocalized, i.e. not tied to a specific atom, and behave rather like a quantum gas due to the effects of zero-point energy. Consequently, even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6×10⁶ m/s. Kinetic thermal energy adds to this speed and also causes delocalized electrons to travel farther away from the nuclei.
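The quoted ~1.6×10⁶ m/s figure is consistent with v_F = √(2E_F/m_e) for a typical metal. A rough sketch, not part of the original note; the Fermi energy of about 7.0 eV (approximately that of copper) is an illustrative assumption, since the note does not name a specific metal:

```python
from math import sqrt

E_F = 7.0 * 1.602176634e-19   # assumed Fermi energy (~copper), J
m_e = 9.1093837e-31           # electron rest mass, kg

print(sqrt(2 * E_F / m_e))    # Fermi velocity: ~1.6e6 m/s
```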
- ^ No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density are hexagonal close-packed (HCP) and face-centered cubic (FCC). These regular lattices are at the lowest possible energy state. Diamond has an FCC-based (diamond cubic) crystal lattice, although its tetrahedral bonding gives it a packing density well below that of a closest-packed arrangement. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be regarded as closest-packed structures when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl₂O₄). It has a face-centered cubic crystal lattice, and no change in pressure can produce a lattice with a lower energy state.
- ^ Nearly half of the 92 naturally occurring chemical elements that can freeze under a vacuum also have a closest-packed crystal lattice. This set includes beryllium, osmium, neon, and iridium (but excludes helium); such elements therefore have zero latent heat of phase transitions to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula: H = U + pV), internal energy may exclude different sources of thermal energy (particularly ZPE) depending on the nature of the analysis. Accordingly, all T = 0 closest-packed matter under a perfect vacuum has either minimal or zero enthalpy, depending on the nature of the analysis. Citation: Use Of Legendre Transforms In Chemical Thermodynamics, Robert A. Alberty, Pure Appl. Chem., 73 (2001) 1349.
- ^ Pressure also must be in absolute terms. The air still in a tire at 0 kPa gage expands too as it gets hotter. It is not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure that used gage pressure in the formula. However, the high gage pressures involved (180 psi; 12.4 bar; 1.24 MPa) mean the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar (200 kPa), failing to adjust to absolute pressure results in a significant error. Referenced document: Aircraft Tire Ratings (155 kB PDF, here).
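A minimal sketch of the two calculations, not drawn from the referenced document: it compares correct compensation (convert gage to absolute, scale by the kelvin temperature ratio, convert back) with the incorrect shortcut of scaling the gage reading directly. The 20 °C to 40 °C warming and the 101.325 kPa ambient pressure are illustrative assumptions.

```python
# Correct vs. naive temperature compensation of tire pressure.
P_ATM = 101.325   # assumed ambient pressure, kPa

def compensated_gage(p_gage_kpa, t1_c, t2_c):
    """Correct: work in absolute pressure and kelvins, then convert back to gage."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return (p_gage_kpa + P_ATM) * t2 / t1 - P_ATM

def naive_gage(p_gage_kpa, t1_c, t2_c):
    """Incorrect: scales the gage reading directly by the temperature ratio."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return p_gage_kpa * t2 / t1

for p in (200.0, 1241.0):   # car tire (~2 bar gage) and aircraft tire (180 psi gage)
    good, bad = compensated_gage(p, 20, 40), naive_gage(p, 20, 40)
    print(p, round(good, 1), round(bad, 1), round(good - bad, 1))
# The ~7 kPa shortfall is significant for the car tire but is only a small
# fraction of the aircraft tire's working pressure.
```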
- ^ Regarding the spelling "gage" vs. "gauge" in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the spelling "gauge pressure" to distinguish it from the pressure-measuring instrument, which, in the U.K., is spelled pressure gage. For the same reason, many of the largest American manufacturers of pressure transducers and instrumentation use the spelling gage pressure (the convention used here) in their formal documentation to distinguish it from the instrument, which is spelled pressure gauge. (See Honeywell-Sensotec's FAQ page and Fluke Corporation's product search page.)
- ^ A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, the standard pressure (atmospheric pressure) should be defined as precisely 100 kPa (≈750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters, which is closer to the 194-meter worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. Citation: IUPAC.org, Gold Book, Standard Pressure
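The "about 112 meters" figure can be checked with the International Standard Atmosphere troposphere model, an approximation that is not part of the IUPAC citation itself:

```python
# Standard-atmosphere pressure as a function of altitude.
def isa_pressure_kpa(h_m):
    """ISA troposphere pressure at altitude h_m meters, in kPa."""
    return 101.325 * (1 - 2.25577e-5 * h_m) ** 5.25588

print(isa_pressure_kpa(112))   # ~100.0 kPa
print(isa_pressure_kpa(194))   # ~99.0 kPa at the median habitation altitude
```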
- ^ Absolute Zero and the Conquest of Cold, Shachtman, Tom, Mariner Books, 1999.
- ^ A Brief History of Temperature Measurement; and Uppsala University (Sweden), Linnaeus' thermometer
- ^ bipm.org
- ^ a b According to The Oxford English Dictionary (OED), the term "Celsius's thermometer" had been used at least as early as 1797. Further, the term "The Celsius or Centigrade thermometer" was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: "My altitude was about 5,800 metres, the temperature was 28° Celsius". However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Dr. Terry Quinn CBE FRS, Director of the BIPM (1988–2004), including Temperature Scales from the early days of thermometry to the 21st century (148 kB PDF, here) as well as Temperature (2nd Edition / 1990 / Academic Press / 0125696817), the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM wasn't even aware that degree Celsius was in sporadic, non-scientific use before that time. It's also noteworthy that the twelve-volume, 1933 edition of OED did not even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives:
- All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.
- Notwithstanding the important contribution of Linnaeus who gave the Celsius scale its modern form, Celsius's name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name.
- The new name eliminated the ambiguity of the term centigrade, freeing it to refer exclusively to the French-language name for the unit of angular measurement.
External links
- Kinetic Molecular Theory of Gases. An explanation (with interactive animations) of the kinetic motion of molecules and how it affects matter. By David N. Blauch, Department of Chemistry, Davidson College.
- Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute.