Thermodynamics

 Lavoisier (1743-1794)  |  Pierre Simon Laplace (1749-1827)  |  Sadi Carnot (1796-1832)
"Heat is the vis viva [living force] which results from the
imperceptible motions of the molecules of a body."
"Mémoire sur la chaleur" (1780)  Lavoisier & Laplace
On this site,  see also:

Related Links  (Outside this Site)

Heat and Work (historical)  Physics Hypertextbook, by Glenn Elert.
History of Thermodynamics (Wikipedia).
About Temperature  at Project Skymath.
 
Heat and Thermodynamics  by Pr. Jeremy B. Tatum (retired from UVic).
Notes:  1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18
 
Thermodynamic Asymmetry in Time  by Craig Callender (Pr. of Philosophy).
Joule Expansion, Joule-Thomson Expansion  by W. Ron Salzman (2004).
Jonathan Oppenheim (Cambridge)  |  Black Hole Thermodynamics
Black Hole Information Loss  by Warren G. Anderson (1996)
 
On the Weight of Heat and Thermal Equilibrium in General Relativity  (1930)
Possibilities in relativistic thermodynamics for irreversible processes without exhaustion of free energy  by Richard C. Tolman (1932)
Thermodynamics and Relativity  by Richard C. Tolman (AMS, 1932-12-29).
 
Relativistic Thermodynamics for the Introductory Physics Course  by B. Rothenstein & I. Zaharie (2003).
Relativistic thermodynamics with angular momentum and its application to blackbody radiation  by Nakamura, T.K. and Okamoto, R. (August 2004)
Statistical mechanics of generally covariant quantum theories  by Merced Montesinos and Carlo Rovelli
On Special and General Relativistic Thermodynamics  by Horst-Heino von Borzeszkowski and Thoralf Chrobok
Thermodynamics as a Finsler Space with Torsion  by Robert M. Kiehn.
A Proposed Relativistic Thermodynamic 4-vector  LTP (before 2004).
 A.A. Ruzmaikin (2011).
Spin in relativistic quantum theory  by Polyzou, Glöckle, Witala (2012).
 
Relativistic Thermodynamics  at IHP
(Institut Henri Poincaré, founded in 1928) :
Louis de Broglie (1892-1987; Nobel 1929),
Olivier Costa de Beauregard (1911-2007),
Abdelmalek Guessous, Jean Fronteau, etc.
 
The Mechanical Universe (28:46 each episode)  by David L. Goodstein (1985-86)
Entropy (#47)  |  Low Temperatures (#48)

Ludwig Boltzmann (1844-1906).  Dangerous Knowledge:  4 | 5 | 6
Information Paradox (Hawking vs. Susskind & Maldacena):  1 | 2 | 3 | 4 | 5
Thermodynamic Temperature
How hot can it get?  by Michael Stevens (Vsauce, 2012-09-29).
Negative temperatures (1956)  by Philip Moriarty (filmed by Brady Haran).
The Race toward Absolute Zero  PBS Nova Documentary (2016).
"Nonequilibrium Statistical Mechanics"  by Chris Jarzynski :  1 | 2 | 3
Past & Future (21:31 | 22:47)  by Richard P. Feynman (Cornell, 1964).
Absolute Cold (10:40)  by Matt O'Dowd (2017-10-11).
The Misunderstood Nature of Entropy (12:19)  by Matt O'Dowd (2018-07-18).
Impossibility of Perpetual Motion Machines (16:30)  by Matt O'Dowd (2019-03-06).
 
Does Information Create the Cosmos? (26:46)  with Seth Lloyd, Sean Carroll, Raphael Bousso, Alan H. Guth and Christof Koch (Closer to Truth, 2017-10-09).
 
Biggest Idea #20:  Entropy & Information (1:13:31)  by Sean Carroll (2020-08-06).
Biggest Idea #21:  Emergence (1:33:40)  by Sean M. Carroll (2020-08-11).

 

Thermodynamics,  Heat,  Temperature

For surely the atoms did not hold council, assigning order to each,
flexing their keen minds with questions of places, motions and who goes where.
But shuffled and jumbled in many ways.  In the course of endless time,
they are buffeted,  driven along, chancing upon all motions and combinations.
At last they fall into such an arrangement as would create this  Universe.

Lucretius (99-55 BC)  De rerum natura 

 Figure:  A vibrating piston and a bouncing particle.
(2003-11-11)    
Zeroth law:  Stochastic transfer of energy.

Consider the situation pictured above:  A vibrating piston is retained by a spring at one end of a narrow cavity containing a single bouncing particle. We'll assume that the particle and the piston only have horizontal motions. Everything is frictionless and undamped, as would be the case at the molecular level...

If the center of mass of two bodies has a (vectorial) velocity v, an exchange of momentum Δp between them translates into the following transfer of energy ΔE, in an elastic collision (this expression remains valid relativistically).

ΔE   =   v . Δp

We would like to characterize an ultimate "equilibrium", in which the average energy exchange is zero whenever the piston and the particle interact.

Let's call V the velocity of the piston, M its mass and x the elongation of the spring that holds it.  The force holding back the piston is −Mω²x for some constant ω, so that, in the absence of shocks with the particle, the motion is a sinusoidal function of the time t and the total energy E given below remains constant:

x   =   A sin (ωt + φ)          V   =   Aω cos (ωt + φ)
E   =   ½ M ( V² + ω²x² )

When the particle, of mass m and velocity v, collides with the piston, we have:

V > v     and     X + v t   =   A sin (ωt + φ)

The first condition expresses the fact that the particle moves towards the piston just before the shock. The second relation involves a random initial position X, uniformly distributed in the cavity. Only the value of X modulo 2A (i.e., the remainder when X is divided by 2A) is relevant, and if we consider that the length of the cavity is many times greater than 2A, we may assume that X is uniformly distributed in some interval of length 2A.

 Come back later, we're still working on this one...

For an ideal gas, the thermodynamical temperature T can be defined by the following formula, which remains true in the relativistic case (provided that the averaging denoted by angle brackets is understood to be at constant time in the observer's coordinates).  This appears as equation 14-4 in the 1967 doctoral dissertation of Abdelmalek Guessous (supervised by Louis de Broglie).

3 k T   =   < (v − <v>) . (p − <p>) >

In this, v and p are, respectively, the velocity and momentum of each individual (monoatomic) molecule.
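As a sanity check, that defining formula can be applied to simulated molecular velocities. The sketch below is a minimal illustration (not part of the original article); the helium mass and the 300 K target are assumed values. It draws nonrelativistic velocities from a Maxwell distribution and recovers the temperature from 3kT = <(v − <v>).(p − <p>)>, with p = mv:

```python
import random

k = 1.380649e-23          # Boltzmann constant (J/K)

def statistical_temperature(vs, m):
    """Estimate T from 3kT = <(v-<v>).(p-<p>)> for a monoatomic gas.
    vs: list of 3D velocity vectors; m: molecular mass (nonrelativistic, p = m v)."""
    n = len(vs)
    mean = [sum(v[i] for v in vs) / n for i in range(3)]
    s = 0.0
    for v in vs:
        s += sum((v[i] - mean[i]) * m * (v[i] - mean[i]) for i in range(3))
    return s / (3 * k * n)

# Draw velocities from a Maxwell distribution at 300 K (helium, m = 6.65e-27 kg)
random.seed(1)
m = 6.65e-27
sigma = (k * 300.0 / m) ** 0.5     # per-axis velocity spread at 300 K
vs = [[random.gauss(0, sigma) for _ in range(3)] for _ in range(20000)]
print(round(statistical_temperature(vs, m)))    # close to 300
```

With 20000 molecules the statistical scatter is under one percent, so the estimate lands within a few kelvins of the target.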

 Come back later, we're still working on this one...


(2018-06-02)    
A simple case of thermal exchange for didactic purposes.

 Come back later, we're still working on this one...


(2003-11-09)    

Arguably, thermodynamics became a science in 1850, when Rudolf Clausius (1822-1888) published a modern form of the first law of thermodynamics:

In any process, energy can be changed from one form to another,
including heat and work, but it is never created or destroyed.

This summarized the observations of several pioneers who helped put an end to the antiquated caloric theory, which was once prevalent:

  • Benjamin Thompson, Count Rumford (1753-1814):  Cannon boring, 1798.
  • Sir Humphry Davy (1778-1829):  Ice-rubbing experiment, 1799.
  • Sadi Carnot (1796-1832):  Puissance motrice du feu, 1824.
  • James Prescott Joule (1818-1899):  Paddle-wheel experiment, 1840.
  • Julius Robert Mayer (1814-1878):  Stirring paper pulp, 1842.
  • Hermann von Helmholtz (1821-1894):  On "animal heat", 1847.


(2003-11-09)  

Heat travels only from hot to cold.  Carnot's principle.

In 1824, Sadi Carnot gave a fundamental limitation of steam engines by analyzing the ideal engine now named after him, which turns out to be the most efficient of all possible heat engines. This result is probably best expressed with the fundamental thermodynamical concepts which were fully developed after Carnot's pioneering work, namely internal energy (U) and entropy (S).

Entropy (S) is an extensive  quantity because the entropy of the wholeis the sum of the entropies of the parts. However, entropy is not  conserved as time goes on: It increases in any non-reversible transformation.

 Figure:  Temperature as a function of entropy in a Carnot cycle.

Carnot's Ideal Engine:  The Carnot Cycle.

  • Hot (A to B):  slow isothermal expansion. The hot gas performs work and receives a quantity of heat T1 ΔS.
  • Cooling (B to C):  adiabatic expansion. The gas keeps working without any exchange of heat.
  • Cold (C to D):  slow isothermal compression. The gas receives work and gives off (wasted) heat T0 ΔS.
  • Heating (D to A):  adiabatic compression from outside work (flywheel) returns the gas to its initial state A.

As the internal energy (U) depends only on the state of the system, its total change is zero in any true cycle. So, the total work done by the engine in Carnot's cycle is equal to the net quantity of heat it receives:  (T1 − T0) ΔS.

For any simple hydrostatic system, that quantity would be the area enclosed by the loop which describes the system's evolution in the above S-T diagram (which gives the system's supposedly uniform temperature as a function of its entropy). This same mechanical work is also the area of the corresponding loop in the V-p diagram (pressure as a function of volume). This latter viewpoint may look more practical, but it's far more obscure when it comes to discussing efficiency limits...

Efficiency of a Heat Engine

Carnot was primarily concerned with steam engines and the mechanical power which could be obtained from the fire heating up the hot reservoir (the cold reservoir being provided from the surroundings at "no cost", from a water stream or from atmospheric air). He thus defined the efficiency of an engine as the ratio of the work done to the quantity of heat transferred from the hot source. For Carnot's ideal engine, the above shows that this ratio boils down to the following quantity, known as Carnot's limit :

1  −  T0 / T1

The unavoidable "waste" is the ratio T0 / T1 of the extreme temperatures involved.

Refrigeration Efficiency

The primary purpose of a refrigerator (or an air-conditioning unit) is to extract heat from the cold source (to make it cooler).  Its efficiency is thus usefully defined as the ratio of that heat to the mechanical power used to produce the transfer. So defined, the efficiency of a Carnot engine driven (backwards) as a refrigerator is:

( T1 / T0  −  1 ) ⁻¹

This is [much] more than 100%, except for extreme refrigeration, which would divide by two or more the ambient temperature above absolute zero.

The rated efficiency of commercial cooling units (the "coefficient of performance", COP) is somewhat lower, because it's defined in terms of the electrical power which drives the motor (taking into account any wasted electrical energy).

Efficiency of a Heat Pump

A heat pump is driven like a refrigeration unit, but its useful output is the heat transferred to the hot side (to make it warmer). A little heat comes from the electrical power not converted into mechanical work; the rest is "pumped" at an efficiency which always exceeds (by far) 100% of the mechanical work.  For a Carnot engine, this latter efficiency is:

( 1  −  T0 / T1 ) ⁻¹
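For quick numerical estimates, the three Carnot limits above can be wrapped in a few lines of code (a simple sketch; the 0°C outdoor / 20°C indoor figures are just an example):

```python
def carnot_efficiency(T_hot, T_cold):
    """Work out / heat in, for an ideal engine between two reservoirs (kelvins)."""
    return 1 - T_cold / T_hot

def fridge_cop(T_hot, T_cold):
    """Heat extracted from the cold side / work in (Carnot limit)."""
    return 1 / (T_hot / T_cold - 1)

def heat_pump_cop(T_hot, T_cold):
    """Heat delivered to the hot side / work in (Carnot limit)."""
    return 1 / (1 - T_cold / T_hot)

# A heat pump warming a 20°C room with 0°C outside air:
print(round(carnot_efficiency(293.15, 273.15), 4))   # 0.0682
print(round(fridge_cop(293.15, 273.15), 2))          # 13.66
print(round(heat_pump_cop(293.15, 273.15), 2))       # 14.66
```

Note that the heat-pump COP always exceeds the refrigerator COP by exactly 1, since the pumped heat includes the work itself.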


(2006-09-12)  
The two flavors of state variables:  extensive and intensive.

Thermodynamics is based on the statement (or belief) that almost all details about large physical systems are irrelevant or impossible to describe. There would be no point in tracking individual molecules in a bottle of gas, even if this was practical. Only a small number of statistical features are relevant.

A substantial part of thermodynamics need not even be based on statistical physics.  Once the interesting quantities are identified, their mutual relations may not be obvious and they repay study... This global approach to thermodynamics starts with the notion of internal energy (U) and/or with other thermodynamical potentials (H, F, G ...) measured in energy units.

As the name implies, the variation of the internal energy (U) of a system is an accurate account of all forms of energy it exchanges with the rest of the Universe. In the simplest cases, the variation (dU) in internal energy boils down to the mechanical work (δW) done to the system and the quantity of heat (δQ) which it receives.  The first law of thermodynamics then reads:

dU   =   δQ + δW

U is a function of the system's state variables, so its variation is a differential form of these (as denoted by a straight "d") whereas the same need not be true of δQ and δW, which may depend separately on other external conditions (that's what the Greek δ is a reminder of).

A few exchanges of energy can be traced to obvious changes in quantities which are called extensive (loosely speaking, a physical quantity is called extensive when the measure of the whole is the sum of the measures of the parts). One example of an extensive quantity is the volume (V) of a system. A small change in an extensive quantity entails a proportional change in energy. The coefficient of proportionality is the associated intensive quantity.  The intensive quantity associated to volume is pressure:

δW   =   − pe dV

 Figure:  Internal vs. external pressure.

That relation comes from the fact that the mechanical work done by a force is the (scalar) product of that force by its displacement (i.e., the infinitesimal motion of the point which yields to it).

In the illustrated case of a "system" consisting of the blue gas and the red piston, we must (at the very least) consider the kinetic energy of the piston, whose speed will change because of the net force which results from any difference between the external pressure (pe) and the internal pressure (p).

However, in the very special case of extremely slow changes (a quasistatic transformation) the kinetic energy of the piston is utterly negligible and the internal pressure (p) remains nearly equal to the slowly evolving external pressure:

δW   =   − p dV

Now, we can't give a general expression for dU valid for more general transformations unless some new extensive variable is involved (besides volume). Our initial explanation involving the piston's momentum is certainly valid (momentum is an extensive variable) but it can't be the "final" one, since common experience shows that the piston will eventually stop. The piston's energy and/or momentum must have been "dissipated" into something commonly called heat.

Could heat itself be the extensive quantity involved in the energy balance of an irreversible infinitesimal transformation? The answer is a resounding no. This misguided explanation would essentially be equivalent to considering heat as some kind of conserved fluid (formerly dubbed "caloric"). The naive caloric theory was first shown to be untenable by Rumford in 1798.

The pioneering work of Carnot (1824) was only reconciled with the first law by Rudolf Clausius in 1854, as he recognized the importance of the ratio Q/T of the quantity of heat Q transferred at a certain temperature T. In 1865, this ratio was equated to a change dS in the relevant fundamental extensive quantity, for which Clausius himself coined the word entropy.

Just as volume (V) is associated with pressure (p), entropy (S) is associated with the intensive quantity called thermodynamical temperature (T) or "temperature above absolute zero", which can be defined as the reciprocal of the integrating factor of heat...  This is a linear function of (modern) customary measurements of temperature.  The SI unit of thermodynamical temperature is the kelvin (not capitalized). It's abbreviated K, and should not be used with the word "degree" or the "°" symbol (unlike the related "degree Celsius" which refers to a scale originating at the ice point, at 0°C or 273.15 K, instead of the absolute zero at 0 K or -273.15°C).

A system at temperature T that receives a quantity of heat δQ (from an external source at temperature Te) undergoes a variation dS in its entropy:

dS   =   δQ / T

Conversely, the source "receives" a quantity of heat −δQ and its own entropy varies by −δQ / Te. The total change in entropy thus entailed is:

δQ  ( 1/T  −  1/Te )

The second law of thermodynamics states that this is always a nonnegative quantity (total entropy never decreases)... The system can receive a positive quantity of heat (δQ > 0) only from a warmer source (Te > T).

A statistical definition of entropy (Boltzmann's relation) was first given by Ludwig Boltzmann (1844-1906) in 1877, using the constant k now named after him. In 1948, Boltzmann's definition of entropy was properly generalized by Claude Shannon (1916-2001) in the newer context of information theory.

The general expression for infinitesimal transformations (reversible or not) in the case of an homogeneous gas (i.e., an hydrostatic system) is simply:

dU   =   T dS  −  p dV

In a quasistatic transformation (p = pe) the two components of dU can be identified with heat transferred and work done (to the system) :

δQ  =  T dS             δW  =  − p dV

The above applies to an hydrostatic system involving only two extensive quantities (entropy and volume) but it generalizes nicely according to the following pattern, which gives the variation (dU) in internal energy as a sum of the variations of several extensive quantities, each weighted by an associated intensive quantity:

dU   =   T dS  −  p dV  +  σ dA  +  f dL  +  φ dq  +  ...

Extensive:  Entropy (S), Volume (V), Area (A), Length (L), Electric charge (q), etc.
Intensive:  Temperature (T), Pressure (−p), Surface tension (σ), Tensile force (f), Voltage (φ), ...

For a system whose state at equilibrium is described by N extensive variables (including entropy) the right-hand side of the above includes N terms (N=2 for an hydrostatic system).  This equation is the differential version of the relation which gives internal energy in terms of N variables, including entropy. That same relation may also be viewed as giving entropy in terms of N variables, including internal energy.  A state of equilibrium is characterized by a maximal value of entropy.

Relativistic Thermodynamics: Total Hamiltonian Energy (E)

If the system is in (relativistic) motion at velocity v and momentum p, it may receive some mechanical work v.dp which translates into a change of its overall kinetic energy and, thus, of its total Hamiltonian energy (E) :

dE   =   v . dp

Albert Einstein used that relation to establish the basic variance of E :

Hamiltonian Energy
E   =   E0 / √( 1 − v²/c² )

On the other hand, the internal energy U is best defined as follows.

U   =   E  −  v . p

This gives U and other thermodynamical potentials (H, F, G, etc.) the same variance as a quantity of heat, a temperature or a Lagrangian:

Internal Energy
U   =   U0 √( 1 − v²/c² )
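The two boxed transformation laws are consistent with U = E − v.p, since the momentum of a moving system is p = E v/c². The short numeric check below is an illustrative sketch (the rest energy E0 = 1 and the speed 0.6 c are arbitrary choices):

```python
from math import sqrt

c = 299792458.0            # speed of light, m/s

def moving_energies(E0, v):
    """Total (Hamiltonian) and internal energy of a system of rest energy E0
    moving at speed v, per the transformation laws quoted above."""
    gamma = 1 / sqrt(1 - (v / c) ** 2)
    E = gamma * E0                 # E = E0 / sqrt(1 - v^2/c^2)
    p = E * v / c ** 2             # momentum of the moving system
    U = E - v * p                  # internal energy: U = E - v.p
    return E, U

E, U = moving_energies(1.0, 0.6 * c)   # gamma = 1.25
print(round(E, 12))                    # 1.25
print(round(U, 12))                    # 0.8  (= E0 * sqrt(1 - v^2/c^2))
```

The internal energy indeed comes out as E0 √(1 − v²/c²), the reciprocal variance of the Hamiltonian energy.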

For more details about other equations of relativistic thermodynamics and the historical controversies about them, see below.


(2005-06-14)    
From here to certainty, entropy measures the lack of information.  ;-)

Consider an uncertain situation described by Ω distinct mutually exclusive elementary events whose probabilities add up to 1. If the n-th such event has probability pn , then Shannon defined the statistical entropy S as:

S ( p1, p2, ... , pΩ )   =   − k  Σ  pn Log (pn)        (sum over n = 1 to Ω)

In this, k is an arbitrary positive constant, which effectively defines the unit of entropy. In physics, entropy is normally expressed in joules per kelvin (J/K), logarithms are natural logarithms, and k is Boltzmann's constant. Besides other energy-to-temperature ratios of units, entropy may also be expressed in units of information, as discussed below.

Apparently, only two units of entropy have ever been given a specific name but neither became popular:  The boltzmann is the unit which makes k equal to unity in the above defining relation (one boltzmann is about 1.38 10⁻²³ J/K). The clausius is a practical unit best defined as 1000 thermochemical calories per degree Celsius, namely 4184 J/K (exactly).

So defined, the statistical entropy S is nonnegative. It's minimal (S = 0) when one of the elementary events is certain. For a given Ω, the entropy S is maximal when every elementary event has the same probability, in which case  S = k Log (Ω).

That formula  (S = k Log Ω)  is known as Boltzmann's Relation. It's named after Ludwig Boltzmann (1844-1906) who introduced it in a context where Ω was the number of possible states within a small interval of energy of fixed width, near equilibrium (where all states are, indeed, nearly equiprobable).  As the width of such an interval is arbitrary, an arbitrary additive quantity was involved in Boltzmann's original definition (before quantum theory removed that ambiguity). Boltzmann's constant k = R/N is the ratio of the ideal gas constant (R) to Avogadro's number (N). Planck named k after Boltzmann.
 
According to Misha Gromov (2014 video) the above formula was first conceived by Planck, who derived it from Boltzmann's equiprobable definition (the two men corresponded on the subject).

Claude Elwood Shannon (1916-2001) made the following key formal remark:  Up to a positive factor, the above definition yields the only nonnegative continuous function which can be consistently computed by splitting events into two sets and equating S to the sum of three terms, namely the two-event entropy for the split and the two conditional entropies weighted by the respective probabilities of the splitting sets:

p   =   p1 + p2 + ... + pn
q   =   pn+1 + pn+2 + ... + pΩ   =   1 − p
S ( p1 ... pΩ )   =   S (p,q)  +  p S (p1/p ... pn/p)  +  q S (pn+1/q ... pΩ/q)
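Shannon's splitting property is easy to verify numerically. The sketch below is an illustration with an arbitrary 6-event distribution and k = 1 (natural logarithms, so entropies are in nats); the entropy of the whole equals the entropy of the split plus the weighted conditional entropies:

```python
from math import log

def entropy(ps, k=1.0):
    """Statistical entropy S = -k sum p log p (natural log; k = 1 gives nats)."""
    return -k * sum(p * log(p) for p in ps if p > 0)

# Split the events into a first group of n and the rest.
ps = [0.5, 0.2, 0.1, 0.1, 0.06, 0.04]
n = 2
p = sum(ps[:n])
q = 1 - p
lhs = entropy(ps)
rhs = (entropy([p, q])
       + p * entropy([x / p for x in ps[:n]])
       + q * entropy([x / q for x in ps[n:]]))
print(abs(lhs - rhs) < 1e-12)          # True: the grouping identity is exact
```

The identity holds exactly for any distribution, any split point n, and any positive k.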


Unitsused in Computer Science and/or Information Theory

In information theory, the unit of entropy and/or information is the bit, namely the information given by a single binary digit (0 or 1). In the above, this means k = 1/Log(2).  [ k = 1 if logarithms are understood to be in base 2, but "lg(x)" is a better notation for the binary logarithm of x, following D.E. Knuth. ] When it's necessary to avoid the confusion between a binary digit (bit) and the information it conveys, this unit of information is best called a shannon (symbol Sh).

Other odd units of information are virtually unused. For the record, this includes the hartley (symbol Hart) which is to a decimal digit what the shannon (Sh) is to a binary digit; the ratio of the former to the latter is:

Log(10) / Log(2)   =   lg(10)   =   3.32192809488736234787...

Therefore, 1 Hart is about 3.322 Sh. This unit, based on decimal logarithms, was named after Ralph V.L. Hartley (1888-1970, of radio oscillator fame) who introduced information as a physical quantity in 1927-1928 (more than 20 years before Claude Shannon's Information Theory).

The nat or nit is another unused unit of information (preferably called a boltzmann as a physical unit of entropy) which is obtained by letting k = 1 in the above defining equation, while retaining natural logarithms: One nat is about 1.44 Sh = 1 boltzmann (and a nit is about 1.44 bits).

If you must know, a bit (or, rather, a shannon) is about 9.57 10⁻²⁴ J/K...  A picojoule per kelvin (pJ/K) is about 12 gigabytes (12.1646 GB).

The hoaxy brontobyte would be about  1.7668 J/K.
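The unit conversions quoted above follow from Boltzmann's constant and a few logarithms. A quick check (a sketch using the exact 2019 SI value of k, and the binary gigabyte of 2³⁰ bytes that the 12.1646 figure implies):

```python
from math import log

k = 1.380649e-23                   # Boltzmann constant, J/K (exact since 2019)

shannon = k * log(2)               # one bit of information, as a physical entropy
hartley = log(10) / log(2)         # one decimal digit, in shannons
nat = 1 / log(2)                   # one nat (or "boltzmann"), in shannons

print("1 Sh   = %.3g J/K" % shannon)      # about 9.57e-24 J/K
print("1 Hart = %.4f Sh" % hartley)       # 3.3219
print("1 nat  = %.4f Sh" % nat)           # 1.4427
print("1 pJ/K = %.2f GB" % (1e-12 / shannon / 8 / 2 ** 30))
```

The last line recovers the "12 gigabytes per picojoule per kelvin" conversion quoted in the text.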


(2005-06-19)    
On the inaccessible state where both entropy and temperature are zero.

The definition of statistical entropy in absolute terms makes it clear that S would be zero only in some perfectly well-defined pure quantum state. Other physical definitions of entropy fail to be so specific and leave open the possibility that an arbitrary constant can be added to S.

The principle of Nernst (sometimes called the "third law" of thermodynamics) reconciles the two perspectives by stating that entropy must be zero at zero temperature.  Various forms of this law were stated by Walther Hermann Nernst (1864-1941; Nobel 1920) between 1906 and 1912.

A consequence of this statement is the fact that nothing can be cooled down to the absolute zero of temperature (or else, there would be a prior cooling apparatus with negative temperature and/or entropy). In the limited context of classical thermodynamics, the principle of Nernst thus justifies the very existence of the absolute zero, as a lower limit for thermodynamic temperatures.

Violations of Nernst's Principle :

From a quantum viewpoint, the principle of Nernst would be rigorously true only if the ground state of every system was nondegenerate (i.e., if there was always only one quantum state of lowest energy). Although this is not the case, there are normally very few quantum states of lowest energy, among many other states whose energy is almost as low. Therefore, the statistical entropy at zero temperature is always extremely small, even when it's not strictly equal to zero.

In practice, metastable conditions present a much more annoying problem: For example, although crystals have a lower entropy than glasses, some glasses transform extremely slowly into crystals and may appear absolutely stable... In such a case, a substance may be observed to retain a significant positive entropy at a temperature very near the absolute zero of temperature. This is only an apparent violation of the principle of Nernst.


(2006-09-18)    
Ad hoc substitutes for internal energy or entropy.

Thermodynamic potentials are functions of the state of the system obtained by subtracting from the internal energy (U) some products of conjugate quantities (pairs of intensive and extensive quantities, like −p and V).

They have interesting physical interpretations in common circumstances. For example, under constant (atmospheric) pressure, enthalpy (H = U + pV) describes all energy exchanges except mechanical work. That's why chemists focus on changes in enthalpy for chemical reactions, in order to rule out whatever irrelevant mechanical work is exchanged with the atmosphere in a chemical explosion (involving a substantial change in V).

As illustrated below, free enthalpy (Gibbs' function, denoted G) is a convenient way to deal with a phase transition, since such a transformation leaves G unchanged, because it takes place at constant temperature and constant pressure. More generally, the difference in free enthalpy between two states of equilibrium is the least amount of useful energy (excluding both heat and pressure work) which the system must exchange with the outside to go from one state to the other.

One non-hydrostatic example is a battery of electromotive force e (i.e., e is the voltage at the electrodes when no current flows) and internal resistance R, as it delivers a charge q in a time t.  The longer the time t, the closer we are to the quasistatic conditions which make the transfer of energy approach the lower limit imposed by the change in G, according to the following inequality:

ΔU   =   (VA − VB) i t   =   (Ri − e) q   >   − e q   =   ΔG

Thermodynamic Potentials for an Hydrostatic System

Name             | Expression  | Differential Form      | Maxwell's Relation
Internal Energy  | U           | dU =   T dS  −  p dV   | (∂T/∂V)S  =  − (∂p/∂S)V   (1)
Enthalpy         | H = U + pV  | dH =   T dS  +  V dp   | (∂T/∂p)S  =    (∂V/∂S)p   (2)
Free Energy      | F = U − TS  | dF = − S dT  −  p dV   | (∂S/∂V)T  =    (∂p/∂T)V   (3)
Free Enthalpy    | G = H − TS  | dG = − S dT  +  V dp   | (∂S/∂p)T  =  − (∂V/∂T)p   (4)

The tabulated differential relations are of the following mathematical form:

dz   =   (∂z/∂x)y dx  +  (∂z/∂y)x dy

The matching Maxwell relations in the last column simply state that

∂²z / ∂y∂x   =   ∂²z / ∂x∂y

Such statements may be mathematically trivial (Clairaut, 1740) but they are quite interesting physically... For example, from the equation of state of a gas (i.e., the relation between its volume, temperature and pressure) the last two relations give the isothermal derivatives of entropy with respect to pressure or volume. This can be integrated to give an expression of entropy involving parameters which are functions of temperature alone. (See example below in the special case of a Van der Waals fluid.)


(2006-09-19)
Heat capacities, compressibilities, thermal expansibilities, etc.

Thermal capacity is defined as the ratio of the heat received to the associated increase in temperature. For an hydrostatic system, that quantity comes in two flavors: isobaric (constant pressure) and isochoric (constant volume) :

Cp   =   T (∂S/∂T)p   =   (∂H/∂T)p
CV   =   T (∂S/∂T)V   =   (∂U/∂T)V

The difference between those two happens to be a quantity which can easily be derived from the equation of state (the relation linking p, V and T):

Mayer's relation  ( for molar heat capacities )
Cp − CV   =   T (∂p/∂T)V (∂V/∂T)p   =   R    [ for one mole of an ideal gas ]

Proof :
dS   =   (∂S/∂T)V dT  +  (∂S/∂V)T dV
(∂S/∂T)p   =   (∂S/∂T)V  +  (∂S/∂V)T (∂V/∂T)p
Cp   =   CV  +  T (∂p/∂T)V (∂V/∂T)p

That last step uses the third relation of Maxwell.  QED
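Mayer's relation can be confirmed numerically from the equation of state alone. A minimal sketch for one mole of ideal gas (pV = RT assumed), evaluating both partial derivatives by central differences:

```python
R = 8.314462618              # molar gas constant, J/(mol K)

def p(T, V):
    """Pressure of one mole of ideal gas."""
    return R * T / V

def vol(T, P):
    """Volume of one mole of ideal gas."""
    return R * T / P

T0, P0, h = 300.0, 101325.0, 1e-4
V0 = vol(T0, P0)
dp_dT = (p(T0 + h, V0) - p(T0 - h, V0)) / (2 * h)      # isochoric, (dp/dT)_V
dV_dT = (vol(T0 + h, P0) - vol(T0 - h, P0)) / (2 * h)  # isobaric, (dV/dT)_p
print(round(T0 * dp_dT * dV_dT, 6))                    # Cp - Cv, recovers R
```

The product T (∂p/∂T)V (∂V/∂T)p = T R²/(pV) collapses to R for any state of the ideal gas.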

By definition, the adiabatic coefficient is the ratio  γ  =  Cp / CV  (equal to 1 + 2/j, where j is 3, 5 or 6 for a classical perfect gas obeying Joule's law).

The adiabatic coefficient may also be expressed as other ratios of noteworthy quantities...

 Come back later, we're still working on this one...


(2013-01-15)    
Isothermal vs. adiabatic coefficients:  The latter must be used to compute the speed of sound.

Sound is a typical isentropic phenomenon (i.e., for reasonably small intensities, sound is reversible and adiabatic). When quick vibrations are used to probe something, what we're feeling are isentropic derivatives.

On the other hand, slow measurements at room temperature allow thermal equilibria with the room at the beginning and the end of the observed transformation. In that case, we are measuring isothermal coefficients.

Consider, for example, the stiffness K of a fluid or a solid (more precisely called bulk modulus of elasticity). Its inverse is the relative reduction in volume caused by a unit increase in pressure. It comes in two flavors:

Isothermal :    1 / KT   =   − (1/V) (∂V/∂p)T
 
Adiabatic :     1 / KS   =   − (1/V) (∂V/∂p)S

Compressibility is the usual name for the reciprocal of stiffness. The other two well-established elasticity coefficients expressed in the same units have unused reciprocals.

We'll need the volumetric thermal expansion coefficient, defined by :

α   =   (1/V) (∂V/∂T)p

Here goes nothing (Maxwell's fourth relation is used in the third line and the fourth line is obtained by expanding the leading factor of the last term).

dV   =   (∂V/∂p)S dp  +  (∂V/∂S)p dS
(∂V/∂p)T   =   (∂V/∂p)S  +  (∂V/∂S)p (∂S/∂p)T
− V / KT   =   − V / KS  −  (∂V/∂S)p (∂V/∂T)p
1 / KT   =   1 / KS  +  (1/V) (∂T/∂S)p (∂V/∂T)p (∂V/∂T)p

1 / KT   =   1 / KS  +  T α² / ( Cp / V )

The quantity Cp / V (which appears as the denominator of the last term) is the molar heat capacity per molar volume; it's also equal to the mass density ρ multiplied into the specific heat capacity cp (note lowercase).

All told, that term is the inverse of a quantity W homogeneous to a pressure, an elasticity coefficient, or an energy density (more than 10 years ago, I proposed the term thermal wring as a pretext for using the symbol W, which isn't overloaded in this context):

W   =   Cp / (V T α²)   =   ρ cp / (T α²)   =   KS KT / ( KS − KT )
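The relation 1/KT = 1/KS + Tα²/(ρ cp) derived above is what makes the measured speed of sound come out right. The sketch below uses approximate handbook values for water near 20°C (the numbers are assumptions for illustration, not taken from the original article) to recover KS from isothermal data, then the sound speed √(KS/ρ):

```python
from math import sqrt

# Approximate handbook values for water near 20°C (assumed, for illustration):
T     = 293.15        # K
rho   = 998.0         # kg/m^3     mass density
cp    = 4182.0        # J/(kg K)   specific heat capacity
alpha = 2.07e-4       # 1/K        volumetric thermal expansion
K_T   = 2.18e9        # Pa         isothermal bulk modulus

# 1/K_T = 1/K_S + T alpha^2 / (rho cp)   [the relation derived above]
K_S = 1 / (1 / K_T - T * alpha ** 2 / (rho * cp))

print(round(K_S / 1e9, 3), "GPa")        # slightly stiffer than K_T
print(round(sqrt(K_S / rho)), "m/s")     # near 1480: the speed of sound in water
```

The adiabatic modulus exceeds the isothermal one by well under one percent for water, yet it's the adiabatic value that sets the sound speed.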
 


(2013-01-17)    
Adiabatic derivative of Log T with respect to Log 1/V (or Log ρ).
Some authors use γ (or γth) to denote this Grüneisen parameter. I beg to differ (to prevent confusion with the adiabatic coefficient).

Γ   =   − (V/T) (∂T/∂V)S   =   (ρ/T) (∂T/∂ρ)S

Γ is the adiabatic ratio of the logarithmic differentials of two quantities: temperature and either density or volume (those two differ by sign only).

The relation to the adiabatic coefficient  γ  =  Cp / CV  =  KS / KT  is simply:

γ   =   1  +  Γ α T

For condensed states of matter (liquids or solids) the volumetric coefficient of thermal expansion (α) is quite small and the above adiabatic coefficient remains very close to unity; the Grüneisen parameter is more meaningful (the adiabatic coefficient is traditionally reserved to the study of gases).


(2006-09-18)
A closed formula for the entropy of a Van der Waals  fluid.

For one mole of a Van der Waals gas, we have:

( p + a/V² ) ( V − b )   =   R T

( p − a/V² + 2ab/V³ ) dV  +  ( V − b ) dp   =   R dT

Let's combine this with the third relation of Maxwell :

(∂S/∂V)T   =   (∂p/∂T)V   =   R / (V − b)

Therefore,  S   =  f (T)  +  R Log (V-b)    for some function f

To be more definite, we resort to calorimetric considerations, namely:

(∂S/∂T)V   =   CV / T   =   f ' (T)

This shows that CV is a function of temperature alone. So, we may as well evaluate it for large molar volumes (very low pressure)  and find that:

CV   =  j/2 R

That relation comes from the fact that, at very low pressure, the energy of interaction between molecules is negligible.  Therefore, by the theorem of equipartition of energy, the entire energy of a gas is the energy which gets equally distributed among the  j  active degrees of freedom of each molecule, including the 3 translational degrees of freedom which are used to define temperature and  0, 2 or 3  rotational degrees of freedom (we assume the temperature is low enough for vibration modes of the molecules to have negligible effects; see below). All told:  j = 3  for a monoatomic gas,  j = 5  for a diatomic gas,  j = 6  otherwise.

S   =   S0  +  j/2 R Log (T)  +  R Log (V-b)

There's no way to reconcile this expression with Nernst's third law to make the entropy vanish at zero temperature.  That's because the domain of validity of the Van der Waals equation of state does not extend all the way down to zero temperature (there would presumably be a transition to a solid phase at low temperature, which is not accounted for by the model).  So, we may as well accept the classical view, which defines entropy only up to an additive constant, and choose the following expression (the statistical definition of entropy, ultimately based on quantum considerations, leaves no such leeway).

Entropy of a Van der Waals fluid   ( j = 3, 5 or 6 )
S   =   R  Log [  T j /2  (V-b)  ]

Thus, the isentropic  equation of such a fluid generalizes one of the formulations valid for an ideal gas, when  b = 0  and  j/2 = 1/(γ−1) :

T j /2  (V-b)   =   constant

Unlike  CV,  the difference  Cp − CV  is not constant for a Van der Waals fluid, since:

Cp − CV   =   T (∂p/∂T)V (∂V/∂T)p
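The relation above is easy to evaluate numerically. The following sketch (my addition, assuming rough Van der Waals constants for nitrogen) compares T(∂p/∂T)V(∂V/∂T)p with the known closed form R / [1 − 2a(V−b)²/(RTV³)]:

```python
from math import isclose

# Van der Waals gas: Cp - Cv = T (dp/dT)_V (dV/dT)_p, evaluated with
# (dV/dT)_p = -(dp/dT)_V / (dp/dV)_T from the equation of state.
# The constants a, b below are rough literature values assumed for N2.
R = 8.314462618
a, b = 0.1370, 3.87e-5        # Pa m^6/mol^2 and m^3/mol
T, V = 300.0, 0.024           # arbitrary state (K, m^3/mol)

dpdT = R / (V - b)
dpdV = -R * T / (V - b)**2 + 2 * a / V**3
dVdT = -dpdT / dpdV
excess = T * dpdT * dVdT      # Cp - Cv

# Known closed form for a Van der Waals fluid:
closed = R / (1 - 2 * a * (V - b)**2 / (R * T * V**3))
assert isclose(excess, closed, rel_tol=1e-12)
assert excess > R             # exceeds the ideal-gas value R
```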


(2012-07-03)  
The heat capacity per mole  is nearly the same (3R)  for all crystals.

In 1819, Dulong and Petit  (respectively the third and the second holder of the chair of physics at Polytechnique in Paris, France)  jointly observed that the heat capacity of metallic crystals is essentially proportional to their number of atoms. They found it to be nearly  25 J/K/mol  for every solid metal they investigated  (this would have failed at very low temperatures).

In 1907, Albert Einstein gave a simplified quantum model, which explained qualitatively why the Dulong-Petit law fails at low temperature. His model also linked the empirical values of the molar  heat capacity (at high temperatures)  to the ideal gas constant  R :

3 R   =  24.943386(23)  J/K/mol

In 1912, Peter Debye devised an even better model (equating the solid's vibrational modes with propagating phonons) which is also good at low temperatures. Its limited accuracy at intermediate temperatures is entirely due to the simplifying assumption that all phonons travel at the same speed. When applied to a gas of photons, that statement is true and the model then describes blackbody radiation perfectly, explaining Planck's law !
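Einstein's model gives, per mole, C(T) = 3R x² eˣ/(eˣ−1)² with x = θ/T, where θ is the Einstein temperature of the crystal. A small sketch of mine (the value θ = 300 K is hypothetical) showing the Dulong-Petit limit at high temperature and the collapse at low temperature:

```python
from math import exp

R = 8.314462618

def einstein_heat_capacity(T, theta):
    """Molar heat capacity in Einstein's 1907 model (theta = h nu / k)."""
    x = theta / T
    return 3 * R * x**2 * exp(x) / (exp(x) - 1)**2

theta = 300.0   # Einstein temperature in kelvins (hypothetical value)
# High temperature: the Dulong-Petit value 3R ~ 24.94 J/K/mol is recovered.
assert einstein_heat_capacity(3000.0, theta) > 0.995 * 3 * R
# Low temperature: the heat capacity collapses, as observed.
assert einstein_heat_capacity(30.0, theta) < 0.2
```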


wangshu(2010-12-19)  
What do the following energy levels contribute to the heat capacity of carbon dioxide at 400 K?    En  =  (n + ½) hν   where   ν = 20 THz

The ratio  hν / kT  =  2.4  indicates that the quanta involved are similar to the average energy of a thermal photon  (2.7 kT).
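For a single harmonic mode, the Einstein function  C = R x² eˣ/(eˣ−1)²  with  x = hν/kT  gives the molar heat-capacity contribution. Evaluating it for the stated 20 THz quantum at 400 K (a sketch of the computation, not the site's final answer):

```python
from math import exp

# Contribution of one harmonic mode (quantum h nu) to the molar heat
# capacity, from the Einstein function: C = R x^2 e^x / (e^x - 1)^2.
h = 6.62607015e-34      # Planck's constant (J s)
k = 1.380649e-23        # Boltzmann's constant (J/K)
R = 8.314462618
nu, T = 20e12, 400.0    # 20 THz mode, 400 K

x = h * nu / (k * T)    # about 2.4, as stated above
C = R * x**2 * exp(x) / (exp(x) - 1)**2

assert abs(x - 2.4) < 0.01
assert abs(C / R - 0.632) < 0.01   # each such mode contributes ~0.63 R
```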

 Come back later, we're still working on this one...


(2005-06-25)    
As entropy varies in a change of phase, some heat must be transferred.

The British chemist Joseph Black (1728-1799) is credited with the 1754 discovery of fixed air  (carbon dioxide)  which helped disprove the erroneous phlogiston  theory of combustion. James Watt (1736-1819) was once his pupil and his assistant. Around 1761, Black observed that a phase transition (e.g., from solid to liquid)  must be accompanied by a transfer of heat,  which is now called latent heat. In 1764, he first measured the latent heat of steam.

The latent heat  L  is best described as the difference  ΔH  in the enthalpy (H = U+pV) of the two phases, which accurately represents heat transferred under constant pressure (as this voids the second term in  dH = TdS + Vdp).

Under constant pressure, phase transition occurs at constant temperature. So, the free enthalpy  (G = H−TS)  remains constant (as  dG = −SdT + Vdp).

Consider now how this free enthalpy  G  varies along the curve which gives the pressure  p  as a function of the temperature  T  when the two phases 1 and 2 coexist. Since  G  is the same on either side of this curve, we have:

dG   =   -S1 dT  +  V1 dp
dG   =   -S2 dT  +  V2 dp

Therefore,  dp/dT  is the ratio  ΔS/ΔV  of the change in entropy to the change in volume entailed by the phase transition. Since  TΔS = ΔH,  we obtain:

  Emile Clapeyron  1799-1864
The Clausius-Clapeyron Relation :
T dp/dT   =   ΔH / ΔV   =   L / ΔV

That relation is one of the nicest results of classical  thermodynamics.
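A quick order-of-magnitude application (my illustration, using rough textbook values for boiling water at one atmosphere) recovers the familiar slope of the vapor-pressure curve, about 3.6 kPa/K:

```python
# Clausius-Clapeyron estimate of dp/dT for boiling water at 1 atm.
# The input values (molar latent heat, liquid molar volume) are rough
# textbook figures, assumed here for illustration only.
R = 8.314462618
T = 373.15              # boiling point (K)
p = 101325.0            # Pa
L = 40660.0             # J/mol, latent heat of vaporization of water
V_liq = 1.88e-5         # m^3/mol
V_gas = R * T / p       # ideal-gas estimate of the vapor molar volume

dpdT = L / (T * (V_gas - V_liq))   # from  T dp/dT = L / dV
assert 3400 < dpdT < 3700          # ~3.6 kPa/K, matching steam tables
```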


(2020-01-17)  
Either diamond or graphite can be metastable in the domain of the other.

The boundary between the domains of graphite and diamond is well-known: In the plot of Log(p) as a function of T, it's a straight line between the point at 0 K of temperature and 1.7 GPa of pressure and the diamond-graphite-liquid triple point (5000 K, 12 GPa).

Meeting at that triple point on either side of that boundary are two other straight lines between which one carbon allotrope is metastable  in the domain where the other is stable.  Thus,  diamond is ordinarily metastable but would morph into graphite at high temperature and low pressure.

 Phase diagram of carbon


(2016-08-01)    
Entropy doesn't increase when identical substances are mixed.

 Come back later, we're still working on this one...


(2016-08-01)    
Partial pressure is proportional to molar fraction.

 Come back later, we're still working on this one...

 Jacobus van 't Hoff
(2016-07-28)    
How an equilibrium changes when temperature varies.

 Come back later, we're still working on this one...


(2006-09-23)    
A flow expansion of a real gas may cool it enough to liquefy it.

Joule Expansion  &  Inner Pressure

Expanding dS along dT and dV, the expression dU = T dS - p dV becomes:

dU   =   T (∂S/∂T)V dT  +  [ T (∂S/∂V)T − p ] dV
     =   CV dT  +  [ T (∂p/∂T)V − p ] dV

This gives the following expression (vanishing for a perfect gas) of the so-called Joule coefficient, which tells how the temperature of a fluid varies when it undergoes a Joule expansion, where the internal energy (U) remains constant. An example of a Joule expansion is the removal of a separation between the gas and an empty chamber.

(∂T/∂V)U   =   − (1/CV) [ T (∂p/∂T)V  −  p ]

 Johannes Diderik van der Waals  (1837-1923)  earned a Nobel prize in 1910. The above square bracket is often called the internal pressure  or inner pressure. It's normally a positive quantity which repays study. Let's see what it amounts to in the case of a Van der Waals fluid:

(∂U/∂V)T   =   T (∂p/∂T)V  −  p   =   RT/(V−b)  −  p   =   a / V²

By integration, this yields:  U = U0(T) − a/V.  The latent heat of liquefaction (L) is obtained in terms of the molar volumes of the gaseous and liquid phases (VG, VL) either as  ΔH = ΔU + pΔV  or as  TΔS  (using the above expression for S):

L   =   p (VG − VL)  +  a (1/VL − 1/VG)   =   RT Log [ (VG − b) / (VL − b) ]
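Since U = U0(T) − a/V, a Joule expansion at constant U from V1 to V2 cools the fluid by ΔT = (a/CV)(1/V2 − 1/V1). A sketch of mine, with rough values assumed for one mole of nitrogen:

```python
# Temperature drop in a Joule expansion of a Van der Waals gas:
# constant  U = U0(T) - a/V  gives  Cv dT = a (1/V2 - 1/V1).
# The constants a and Cv are rough values assumed for nitrogen.
a = 0.1370              # Pa m^6/mol^2
Cv = 20.8               # J/K/mol
V1, V2 = 1e-3, 2e-3     # expand one mole from 1 L to 2 L

dT = (a / Cv) * (1 / V2 - 1 / V1)
assert -3.5 < dT < -3.0          # the gas cools by about 3.3 K
```

The effect is small but measurable, which is why the larger Joule-Thomson flow process (next) is preferred for liquefaction.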

Joule-Thomson (Joule-Kelvin) Expansion Flow Process

 Joule-Kelvin Liquefier

The Joule-Kelvin coefficient  (μ)  pertains to an isenthalpic expansion. Its value is obtained as above (from an expression of dH instead of dU):

μ   =   (∂T/∂p)H   =   (1/Cp) [ T (∂V/∂T)p  −  V ]   =   (V/Cp) ( T α − 1 )

μ  vanishes for perfect gases but allows an expansion flow process which can cool many real gases enough to liquefy them, if the initial temperature is below the so-called inversion temperature, which makes  μ  positive.

More precisely, the inversion temperature is a function of pressure. In the (T,p) diagram, there is a domain where isenthalpic decompression causes cooling. The boundary of that domain is called the inversion curve. In the example of a Van der Waals fluid, the equation of the inversion curve is obtained as follows:

0   =   T (∂V/∂T)p  −  V   =   R T / ( p − a/V² + 2ab/V³ )  −  V

This gives a relation which we may write next to the equation of state:

R T           =   p V  −  a/V  +  2ab/V²
R T  +  p b   =   p V  +  a/V  −  ab/V²

By eliminating  V  between those two equations, we obtain a single relation which is best expressed in units of the critical point  (pc = 1, Tc = 1):

T±   =   15/4  −  p/12  ±  √(9 − p)

If  T  is above  T+  (or below  T− )  then decompression won't cool the gas.

At fairly low pressures, the inversion temperature  is approximately:

Ti   =   6.75 Tc

The ratio observed for most actual gases is lower than 6.75: Although it's  7.7  for helium,  it's only 6.1  for hydrogen, 5.2  for neon, 4.9  for oxygen or nitrogen,  and 4.8  for argon.
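The two branches of the reduced inversion curve,  T± = 15/4 − p/12 ± √(9 − p)  in critical units, are easy to probe numerically (a sketch of mine):

```python
from math import sqrt, isclose

# Inversion curve of a Van der Waals fluid, in units of the critical
# point (pc = 1, Tc = 1): two branches bounding the cooling domain.
def T_plus(p):
    return 15 / 4 - p / 12 + sqrt(9 - p)

def T_minus(p):
    return 15 / 4 - p / 12 - sqrt(9 - p)

# At zero pressure the branches give the two inversion temperatures:
assert isclose(T_plus(0), 6.75)    # upper: Ti = 6.75 Tc, as stated
assert isclose(T_minus(0), 0.75)   # lower: 3/4 Tc
# The branches merge at the maximum pressure p = 9 pc, where T = 3 Tc:
assert isclose(T_plus(9), 3.0) and isclose(T_minus(9), 3.0)
```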

A Joule-Thomson cryogenic apparatus has no moving parts at low temperature (note that the cold but unliquefied part of the gas is returned in thermal contact with the high-pressure intake gas, to pre-cool it before the expansion valve). William Thomson,  Baron Kelvin (1824-1907)

The effect was described in 1852 by William Thomson (before he became Lord Kelvin). So was the basic design, with the cooling countercurrent. Several such cryogenic devices can be "cascaded" so that one liquefied gas is used to lower the intake temperature of the next apparatus... Liquid oxygen was obtained this way in 1877, by Louis Paul Cailletet (France) and Raoul Pierre Pictet (Switzerland). Hydrogen was first liquefied in 1898, by Sir James Dewar (1842-1923). Finally, helium  was liquefied in 1908, by the Dutch physicist Heike Kamerlingh Onnes  (1853-1926; Nobel 1913).

In 1895, the German engineer Carl von Linde (1842-1934) designed an air liquefaction machine based on this throttling process, which is now named after him  (Linde's method).


(2019-07-30)    
Reversible  thermoelectric phenomena.

The direct conversion of heat into electricity at the junction of two different metals was discovered in 1794 by Alessandro Volta (1745-1827). This was rediscovered in 1821 by Thomas Johann Seebeck (1770-1830), who observed only the ensuing deflection of a nearby compass needle and termed the effect thermomagnetic.  Ørsted  (who had discovered only a few months earlier that an electric current has magnetic effects)  realized that an electric current was the proper cause and he wisely renamed the effect thermoelectric,  which now stands.

The effect happens to be thermodynamically reversible, so that if two junctions of unlike metals are connected in a circuit, then one of them is heated and the other is cooled (according to the direction of the current). The ensuing possibility of direct electric cooling was discovered in 1834 by the French physicist Jean-Charles Peltier and the phenomenon is now called the Peltier effect  in his honor.

 Come back later, we're still working on this one...


(2005-06-25)    
Heat is not the same "form of energy" as Hamiltonian energy.

We have introduced relativistic thermodynamics elsewhere in the case of a pointlike system whose rest mass  may vary. The fundamental relativistic distinction between the total Hamiltonian energy (E)  and the internal energy (U)  was also heralded above.

Now, it should be clear from its statistical definition that entropy  is a relativistic invariant, since the probability of a well-defined spacetime event cannot depend on the speed of whoever observes it. Mercifully, all  reputable authors agree on this one... They haven't always agreed on the following  (correct)  formula for the temperature  T  of a body moving at speed  v  whose temperature is  T0  in its rest frame :

Mosengeil's Formula  (1906)
T   =   T0 √(1 − v²/c²)

The invariance of the entropy  S  means that a quantity of heat  (Q = T dS)  transforms like the temperature  T. So do all the thermodynamic potentials, including internal energy  (U),  Helmholtz' free energy  (F = U−TS),  enthalpy  (H = U+pV)  and Gibbs' free enthalpy  (G = H−TS)...

Q   =   Q0 √(1 − v²/c²)

Note that the ideal gas law  (pV = RT)  is invariant because the pressure  p  is invariant, whereas the temperature  T  transforms like the volume  V. (The same remark would hold for any gas obeying Joule's second law.)

One of several ways to justify the above expression for the temperature of a moving body is to remark that the frequency of a typical photon from a moving blackbody is proportional to its temperature. Thus, if it can be defined at all, temperature must transform like a frequency. This viewpoint was first expounded in 1906 by Kurd von Mosengeil and adopted at once by Planck, Hasenöhrl and Einstein (who would feel a misguided urge to recant, 45 years later).

In 1911, Ferencz Jüttner  (1878-1958)  retrieved the same formula for a moving gas, using a relativistic variant of an earlier argument of Helmholtz.  He derived the relativistic speed distribution function  recently confirmed numerically (2008-04-23) by Constantin Rasinariu  in the case of a 2-dimensional gas. (In his 1964 paper entitled Wave-mechanical approach to relativistic thermodynamics, L. Gold gave a quantum version of Jüttner's argument.) Mosengeil's (correct) formula was also featured in the textbook published by Max von Laue  in 1924.

In 1967, under the supervision of Louis de Broglie, Abdelmalek Guessous completed a full-blown attack, using Boltzmann's statistical mechanics (reviewed below). This left no doubt that thermodynamical temperature must indeed transform as stated above (in modern physics, other flavors of temperature are not welcome).

Equating heat with a form of energy was once a major breakthrough,  but the fundamental relativistic distinction between heat and Hamiltonian energy noted by most pioneers  (including Einstein in his youth)  was butchered by others  (including Einstein in his old age)  before its ultimate vindication...

A few introductions to Relativistic Thermodynamics :

In 1968, Pierre-V. Grosjean called the waves of controversies about Mosengeil's formula the temperature quarrel...  In spite or because of its long history, that dispute is regularly revived by authors who keep discarding one fundamental subtlety of thermodynamics: Heat doesn't transform like a Hamiltonian  energy  (which is the time-component of an energy-momentum 4-vector)  but like a Lagrangian.  Many  essays are thus going astray,  including:

  •   (1923)  by Arthur S. Eddington(1888-1944). Page 34 of the 1954 edition  (Cambridge).
  •   by Danilo Blanusa(1903-1987). Glasnik mat.-fiz i astr.,2  (1947).
  • last paper of Heinrich Ott (1894-1962). Zeitschrift für Physik 175, 70-104 (1963).
  • by Henri Arzeliès(1913-2003) and Nuovo Cimento 35, 792-804(1965).
  •   by A. Gamba. Nuovo Cimento 37, 1792-1794 (1965).
  •   (1967)  bythe Danish physicist Christian Møller (1904-1980).
  •   by Sean A. Hayward  (arXiv, 1999-01-22).
  • by W.Z. Jiang  (arXiv, 2003-05-14).
  •   by Isak Avramov  (2003).

  • by Bernhard Rothenstein  and Ioan Zaharie (Theoretics, 2003)

  • by Bernhard Rothenstein  andGeorge J. Spix  (arXiv, 2005-07-27).

  • by Manfred Requardt  (arXiv, 2008-01-17).
  •   by Phil Schewe  (AIP, 2007-10-19).

  • by C.A. Farías,  P.S. Moya  &  V.A. Pinto (arXiv, 2008-03-20).

  • by Julio Güémez (Physics Research International, 2011-04-05)

  • by Victor A. Pinto  &  Pablo S. Moya (Nature, 2017-12-15).

Schewe's article prompted a Physics Forums  discussion on 2007-10-27. Related threads: 2007-06-29, 2008-01-11, 2008-04-14. Other threads include: 2007-10-24, 2007-12-01, 2013-05-02, etc.

I stand firmly by the statement that if temperature can be defined at all, it must obey Mosengeil's formula. The following articles argue against the premise of that conditional proposition, at least for one type of thermometer:


On April 14, 1967, Guessous defended his doctoral dissertation  (Recherches sur la thermodynamique relativiste). He established the relativistic thermodynamical temperature to be the reciprocal of the integrating factor of the quantity of heat, using the above statistical definition of entropy. This superior definition is shown, indeed, to be compatible with Mosengeil's formula. The actual text isn't easy to skim through, because the author keeps waving the formulas he is arguing against (mainly the Ott-Arzeliès formulation, inaugurated by Eddington and ruled out experimentally by P. Ammiraju in 1984).
 
In 1970, Guessous published an expanded version of the thesis as a book entitled "Thermodynamique relativiste", prefaced by his famous adviser Louis de Broglie (Nobel 1929). That work is still quoted by some scholars, like Simon Lévesque, from Cégep de Trois-Rivières (2007-01-13), although the subject is no longer fashionable.
 
The original dissertation of Abdelmalek Guessous appears verbatim as the first five chapters of the book  (93 out of 305 pages).  Unfortunately, Guessous avoids contradicting  (formally, at least)  what was established by his adviser for pointlike systems. This impairs some of the gems found in Chapter VI and beyond, because the author retains his early notations even as he shows them to be dubious. For example, the paramount definition of internal energy as a thermodynamical potential  (transforming like a Lagrangian)  doesn't appear until Chapter VI, where it's dubbed  U',  since  U  was used throughout the thesis for the Hamiltonian energy  E  (not clearly identified as such). More importantly, Guessous runs into the correct definition of the inertial mass  (namely, the momentum to velocity ratio)  but keeps calling it "renormalized mass"  (denoted M')  while awkwardly retaining the symbol  M  as a mere name for  E/c²  (denoted  U/c²  by him)  which downplays the importance of the aforementioned true inertial mass  m = M'.  So, Guessous missed  (albeit barely so)  the revolutionary expression of the inertia of energy for  N  interacting particles at a nonzero temperature  T,  presented next in the full glory of traditional notations consistent with the rest of this page.


(2008-10-07)    
The Hamiltonian energy E  is not  proportional to the inertial mass  M.

Here's the great formula which I obtained many years ago by pushing to their logical conclusion some of the arguments presented in the 1969 addenda to the 1967 doctoral dissertation of Abdelmalek Guessous. I've kept this result of my younger self in pectore  for far too long  (I first read Guessous' work in 1973).

E    =    M c²  −  N k T

We define the inertial mass  (M)  of a relativistic system of  N  point-masses as the ratio of its total momentum  p  to the velocity  v  of its center-of-mass:

p   =   Mv

It's not obvious that the dynamical momentum  p  is actually proportional  to the velocity  v,  so that  M  turns out to be simply a scalar quantity !

The description of a moving object of nonzero size always takes place at constant time  in the frame  K  of the observer. The events which are part of such a description are simultaneous in  K  but are usually not simultaneous  in the rest frame  (K0)  of the object. That viewpoint has been a basic tenet of Special Relativity ever since Einstein showed in excruciating detail how it explains the Lorentz-Fitzgerald contraction, which makes the volume  V  of a moving solid appear smaller than its volume at rest  V0 :

V   =   V0 √(1 − v²/c²)

 Come back later, we're still working on this one...


(2008-10-13)    
Local temperature is higher on the outer parts of a rotating body.

 Come back later, we're still working on this one...


(2005-06-20)    
Each unit of area at the surface of a black body radiates a total power proportional to the fourth power of its thermodynamic temperature.

This law was discovered experimentally in 1879, by the Slovene-Austrian physicist Joseph (Jozef) Stefan (1835-1893). It would be justified theoretically in 1884, by Stefan's most famous student: Ludwig Boltzmann (1844-1906).

The energy density (in J/m³ or Pa) of the thermal radiation inside an oven of thermodynamic temperature T (in K) is given by the following relation:

[ 4σ / c ]   T⁴    =    [ 7.566×10⁻¹⁶ Pa/K⁴ ]   T⁴

On the other hand, each unit of area at the surface of a black body radiates away a power proportional to the fourth power of its temperature T. The coefficient of proportionality is Stefan's constant  σ  (which is also known as the Stefan-Boltzmann constant).  Namely:

σ   =   2π⁵ k⁴ / ( 15 h³ c² )   =   5.6704×10⁻⁸ W/m²/K⁴

Those two statements are related. The latter can be derived from the former using the following argument, based on geometrical optics,  which merely assumes that radiation escapes at a speed equal to Einstein's constant  (c).
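As a check on the stated numbers, Stefan's constant and the radiation-density coefficient 4σ/c can be recomputed directly from the formula above (a sketch of mine, using CODATA values of h, k and c):

```python
from math import pi, isclose

# Stefan's constant from  sigma = 2 pi^5 k^4 / (15 h^3 c^2)
h = 6.62607015e-34      # Planck's constant (J s)
k = 1.380649e-23        # Boltzmann's constant (J/K)
c = 2.99792458e8        # Einstein's constant (m/s)

sigma = 2 * pi**5 * k**4 / (15 * h**3 * c**2)
psi = 4 * sigma / c     # radiation energy-density coefficient (Pa/K^4)

assert isclose(sigma, 5.6704e-8, rel_tol=1e-4)   # W/m^2/K^4
assert isclose(psi, 7.566e-16, rel_tol=1e-3)     # Pa/K^4
```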

One of the best physical approximations to an element of the surface of a "black" body is a small opening in the wall of a large cavity ("oven"). Indeed, any light entering such an opening will never be reflected directly. Whatever comes in is "absorbed", whatever comes out bears no relation whatsoever to any feature of what was recently absorbed... The thing is black  in this precise physical sense.

 Come back later, we're still working on this one...

Luminosity of a Spherical Blackbody
L   =   4π σ T⁴ R²

To a good enough approximation, this formula relates the surface temperature T  of a star of radiant  absolute luminosity  L to its radius  R.
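For instance (an illustration of mine, using rough standard values for the Sun's radius and effective surface temperature), the formula recovers the solar luminosity:

```python
from math import pi

# L = 4 pi sigma T^4 R^2 applied to the Sun.  The radius and surface
# temperature below are rough standard values, assumed for illustration.
sigma = 5.6704e-8       # Stefan's constant (W/m^2/K^4)
R = 6.957e8             # solar radius (m)
T = 5772.0              # effective surface temperature (K)

L = 4 * pi * sigma * T**4 * R**2
assert 3.7e26 < L < 3.95e26      # close to the solar luminosity ~3.8e26 W
```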


(2005-06-19)    
Several arguments would place anupper bound on temperature...

Several arguments have been proposed which would put a theoretical maximum to the thermodynamic temperature scale. This has been [abusively] touted as a "fourth law" of thermodynamics. Some arguments are obsolete, others are still debated within the latest context of the standard model of particle physics:

In 1973, D.C. Kelly argued that no temperature could ever exceed a limit of a trillion kelvins or so, because when particles are heated up, very high kinetic energies will be used to create new particle-antiparticle pairs rather than further contribute to an increase in the velocities of existing particles. Thus, adding energy will increase the total number of particles rather than the temperature.

This quantum argument is predated by a semi-classical guess, featuring a rough quantitative agreement: In 1952, French physicist Yves Rocard (father of Michel Rocard, who was France's prime minister from 1988 to 1991) had argued that the density of electromagnetic energy ought not to exceed by much its value at the surface of a "classical electron" (a uniformly charged sphere with a radius of about 2.81794 fm). Stefan's law would then imply an upper limit for temperature on the order of what has since been dubbed "Rocard's temperature", namely:

3.4423K

One process seems capable of generating temperatures well above Rocard's temperature:  the explosion of a black hole via Hawking radiation.

Rocard's temperature would be that of a black hole of about 8×10¹¹ kg, which is much too small to be created by the gravitational collapse of a star. Such a black hole could only be a "primordial" black hole, resulting from the hypothetical collapse of "original" irregularities (shortly after the big bang). Yet, the discussion below shows that a black hole whose temperature is Rocard's temperature would radiate away its energy for a very long time: about 64 million times the age of the present Universe... It gets hotter as it gets smaller and older.

As the derivation we gave for Stefan's Law was based on geometrical optics, it does not apply in the immediate vicinity of a black hole  (space curvature is important and wavelengths need not be much shorter than the sizes involved). A careful analysis would show that a Schwarzschild black hole  absorbs photons as if it were a massless black sphere (around which space is "flat") with a radius equal to  a = 3√3 MG/c²  (about 2.6 times the Schwarzschild radius). Thus, it emits like a black body of that size (obeying Stefan's law).  Its power output is:

P   =   σ ( 4π a² ) T⁴   =   108π σ M² G² T⁴ / c⁴

Using for  T  the temperature of a Schwarzschild black hole  of mass M:

P   =   ( 9 / 20480 )  ħ c⁶ / ( π M² G² )        [ ħ = h/2π ]

As this entails a mass loss inversely proportional to the square of the mass, the cube  of the black hole's mass decreases at a constant rate of  27 / 20480  (in natural units). The black hole will thus evaporate completely after a time proportional to the cube of its mass, the coefficient of proportionality being about  5.96×10⁻¹¹ s  per cubic kilogram. A black hole of 2 million tonnes (2×10⁹ kg) would therefore have a lifetime about equal to the age of the Universe  (15 billion years).

Hawking  thus showed that his first hunch was not quite right: a black hole's area may decrease steadily because of the radiation which does carry entropy away. The only absolute law is that, in this process like in any other, the total entropy of the Universe can only increase. There is little doubt that Hawking's computations are valid down to masses as small as a fraction of a gram. However, it must be invalid for masses of the order of the Planck mass (about 0.02 mg), as the computed temperature would otherwise be such that a "typical" photon would carry away an energy  kT  equivalent to the black hole's total energy.

In his wonderful 1977 book The First Three Minutes, 1979 Nobel laureate Steven Weinberg gives credit to R. Hagedorn, of the CERN laboratory in Geneva, for coming up with the idea of a maximum temperature in particle physics when an unlimited number of hadron species is allowed. Weinberg quotes work on the subject by a number of theorists including Kerson Huang (of MIT) and himself, and states the "surprisingly low" maximum temperature of 2 000 000 000 000 K for the limit based on this idea...

However, in an afterword to the 1993 edition of his book, Weinberg points out that the "asymptotically free" theory of strong interactions made the idea obsolete: a much hotter Universe would "simply" behave as a gas of quarks, leptons, and photons  (until unknown territory is found in the vicinity of Planck's temperature).

In spite of these and other difficulties, there may be a maximum temperature well below Planck's temperature which is not attainable by any means, including black hole explosions: One guess is that newly created particles could form a hot shell around the black hole which could radiate energy back into the black hole. The black hole would thus lose energy at a lesser rate, and would appear cooler and/or larger to a distant observer. The "fourth law" is not dead yet...


(2005-06-20)    
Like all perfect absorbers, black holes radiate with blackbody spectra.

The much-celebrated story of this fundamental discovery starts with the original remark by Stephen W. Hawking (in November 1970) that the surface area of a black hole can never decrease. This law suggested that surface area is to a black hole what entropy  is to any other physical object.

Jacob Bekenstein (1947-2015) was then a postgraduate student working at Princeton under John Wheeler. He was the first to take this physical analogy seriously, before all the mathematical evidence was in. Following Wheeler, Bekenstein remarked that black holes swallow the entropy of whatever falls into them. If the second law of thermodynamics is to hold in a Universe containing black holes, some entropy must be assigned to black holes. Bekenstein suggested that the entropy of a black hole was, in fact, proportional to its surface area...

At first, Hawking was upset by Bekenstein's "misuse" of his discovery, because it seemed obvious  that anything having an entropy would also have a temperature, and that anything having a temperature would radiate away some of its energy. Since black holes were thought to be unable to let anything escape  (including radiation)  they could not have a temperature or an entropy.  Or so it seemed for a while... However,  in 1973,  Hawking himself made acclaimed calculations confirming Bekenstein's hunch:

The entropy  (S)  of a black hole is proportional to the area  (A)  of its event horizon.

S   =   [ 2π k c³ / (h G) ]  ( ¼ A )

What Stephen Hawking discovered is that quantum effects allow black holes to radiate (and force them to do so). One of several explanatory pictures is based on the steady creations and annihilations of particle/antiparticle pairs in the vacuum, close to a black hole... Occasionally, a newly-born particle falls into the black hole before recombining with its sister, which then flies away as if  it had been directly emitted by the black hole. The work  spent in separating the particle from its antiparticle comes from the black hole itself,  which thus lost an energy equal to the mass-energy of the emitted  particle.

For a massive enough black hole, Stephen Hawking found the corresponding radiation spectrum to be that of a perfect blackbody  having a temperature proportional to the surface gravity  g  of the black hole (which is constant over the entire horizon of a stationary  black hole). In 1976, Bill Unruh generalized this proportionality between any gravitational field  g  (or acceleration) and the temperature  T  of an associated heat bath.

Temperature  T  is proportional to  (surface) gravity  g.
kT   =   g h / (4π² c)        [ Any coherent units ]
T    =   g / 2π               [ In natural units ]

In the case of the simplest black hole (established by Karl Schwarzschild  as early as 1915)  g  is  c²/2R,  where  R  is the Schwarzschild radius,  equal to  2MG/c²  for a black hole of mass  M. In SI units,  kT  is about  1.694 / M.
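Combining  kT = gh/(4π²c)  with  g = c²/2R  gives  kT = ħc³/(8πGM),  so the quoted constant 1.694 (in joules times kilograms) is just  ħc³/(8πG),  which is easily recomputed:

```python
from math import pi, isclose

# kT = (h/2pi) c^3 / (8 pi G M):  the product (kT) M is a constant.
hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 2.99792458e8         # m/s
G = 6.6743e-11           # gravitational constant (m^3/kg/s^2)

kT_times_M = hbar * c**3 / (8 * pi * G)
assert isclose(kT_times_M, 1.694, rel_tol=1e-3)   # J kg, in SI units
```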

Temperature of a Schwarzschild black hole :
kT   =   c³ (h/2π) / (8π G M)        [ Any coherent units ]
T    =   1 / M                       [ In natural units ]

Rationalized Natural System of Units
c    =   1          [ Einstein's constant ]
h    =   2π         [ Planck's constant ]
G    =   1/4π       [ gravitational constant ]
μ0   =   1          [ magnetic constant ]
k    =   1          [ Boltzmann's constant ]


(2005-06-13)    
Z is a sum over all possible quantum states:    Z(β)  =  Σ exp (−β E)
E is the energy of each state.   β = 1/kT  is related to temperature.

A simplified didactic model: N independent spins

Consider a large number (N) of paramagnetic ions  whose locations in a crystal are sufficiently distant for interactions between them to be neglected. The whole thing is subjected to a uniform magnetic induction  B. Using the simplifying assumption that each ion behaves like an electron, its magnetic moment can only be measured to be aligned with the external field or directly opposite to it... Thus, each eigenvalue  E  of the [magnetic] energy of the ions can be expressed in terms of  N  two-valued quantum numbers  σi = ±1 :

 
E   =   − μB  Σ σi        ( sum over  i = 1 to N )
 

μ  is a constant (it would be equal to Bohr's magneton for electrons). As the partition function introduced above involves exponentials of such sums, which are products of elementary factors, the entire sum boils down to:

Z(β)   =   Σ exp (−β E)   =   [ exp (−βμB)  +  exp (+βμB) ] ^N
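For a modest N, the closed form can be checked against a brute-force sum over all 2^N spin configurations (a sketch of mine, for an arbitrary value of the product βμB):

```python
from math import exp, cosh, isclose
from itertools import product

# Brute-force check of  Z(beta) = [exp(-b mu B) + exp(+b mu B)]^N
# for N independent spins with  E = -mu B sum(s_i),  s_i = +/-1.
N = 6
x = 0.3     # the dimensionless product (beta mu B), arbitrary value

# exp(-beta E) = exp(x * sum of spins), summed over all configurations:
Z_brute = sum(exp(x * sum(s)) for s in product((-1, 1), repeat=N))
Z_closed = (exp(-x) + exp(x)) ** N

assert isclose(Z_brute, Z_closed, rel_tol=1e-12)
assert isclose(Z_closed, (2 * cosh(x)) ** N, rel_tol=1e-12)
```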

 Come back later, we're still working on this one...

 (c) Copyright 2000-2023, Gerard P. Michon, Ph.D.
