Maxwell–Boltzmann statistics

From Wikipedia, the free encyclopedia
Statistical distribution used in many-particle mechanics
Not to be confused with Maxwell–Boltzmann distribution.

Maxwell–Boltzmann statistics can be used to derive the Maxwell–Boltzmann distribution of particle speeds in an ideal gas. Shown: distribution of speeds for 10^6 oxygen molecules at −100 °C, 20 °C, and 600 °C.

In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough, or the particle density low enough, to render quantum effects negligible.

The expected number of particles with energy $\varepsilon_i$ for Maxwell–Boltzmann statistics is

$$\langle N_i\rangle = \frac{g_i}{e^{(\varepsilon_i-\mu)/k_\text{B}T}} = \frac{N}{Z}\,g_i e^{-\varepsilon_i/k_\text{B}T},$$

where:

  • $\varepsilon_i$ is the energy of the $i$-th energy level,
  • $\langle N_i\rangle$ is the average number of particles in the set of states with energy $\varepsilon_i$,
  • $g_i$ is the degeneracy of energy level $i$, that is, the number of states with energy $\varepsilon_i$,
  • $\mu$ is the chemical potential,
  • $k_\text{B}$ is the Boltzmann constant,
  • $T$ is the absolute temperature,
  • $N$ is the total number of particles, $N=\sum_i \langle N_i\rangle$,
  • $Z$ is the partition function, $Z=\sum_i g_i e^{-\varepsilon_i/k_\text{B}T}$.

Equivalently, the number of particles is sometimes expressed as

$$\langle N_i\rangle = \frac{1}{e^{(\varepsilon_i-\mu)/k_\text{B}T}} = \frac{N}{Z}\,e^{-\varepsilon_i/k_\text{B}T},$$

where the index $i$ now specifies a particular state rather than the set of all states with energy $\varepsilon_i$, and $Z=\sum_i e^{-\varepsilon_i/k_\text{B}T}$.
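As a numerical sketch of these formulas (hypothetical energy levels and degeneracies, in units where $k_\text{B}T = 1$), the occupations can be computed directly from the partition function:

```python
import math

# Illustrative (hypothetical) levels, in units where kB*T = 1.
kBT = 1.0
energies = [0.0, 1.0, 2.0]      # eps_i
degeneracies = [1, 2, 1]        # g_i
N = 1000                        # total particle number

# Partition function Z = sum_i g_i exp(-eps_i / kBT)
Z = sum(g * math.exp(-e / kBT) for g, e in zip(degeneracies, energies))

# Expected occupations <N_i> = (N/Z) g_i exp(-eps_i / kBT); they sum to N.
occupations = [N / Z * g * math.exp(-e / kBT)
               for g, e in zip(degeneracies, energies)]
print(Z, occupations)
```

By construction the occupations sum back to $N$, and lower-energy levels are more heavily populated.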

History

Further information: Maxwell–Boltzmann distribution

Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. In the 1870s, Boltzmann carried out significant investigations into the physical origins of this distribution. The distribution can also be derived on the ground that it maximizes the entropy of the system.

Relation to the Maxwell–Boltzmann distribution


Maxwell–Boltzmann distribution and Maxwell–Boltzmann statistics are closely related. Maxwell–Boltzmann statistics is a more general principle in statistical mechanics that describes the probability of a classical particle being in a particular energy state:

$$P_i = \frac{e^{-E_i/k_\text{B}T}}{Z}$$

where:

  • $P_i$ is the probability of occupying the state with energy $E_i$,
  • $Z$ is the partition function, which normalizes the probabilities to 1.

Maxwell–Boltzmann distribution is a specific application of Maxwell–Boltzmann statistics to the kinetic energies of gas particles. The distribution of velocities (or speeds) of particles in an ideal gas follows from the statistical assumption that the energy levels of a gas molecule are given by its kinetic energy:

$$f(v) = \left(\frac{m}{2\pi k_\text{B}T}\right)^{3/2} 4\pi v^2\, e^{-\frac{mv^2}{2k_\text{B}T}}$$

where:

  • $f(v)$ is the probability density for the speed $v$,
  • $m$ is the mass of a particle,
  • $T$ is the absolute temperature.

Derivation


We can deduce the Maxwell–Boltzmann distribution from Maxwell–Boltzmann statistics, starting with the Maxwell–Boltzmann probability for energy states and substituting the kinetic energy $E=\tfrac{1}{2}mv^2$ to express the probability in terms of velocity:

$$P(E) = \frac{1}{Z}\exp\left(-\frac{E}{k_\text{B}T}\right) \quad\rightarrow\quad P(v) = \frac{1}{Z}\exp\left(-\frac{mv^2}{2k_\text{B}T}\right)$$

In three dimensions, the number of velocity states with speed between $v$ and $v+dv$ is proportional to the surface area of a sphere of radius $v$, namely $4\pi v^2$. Thus, the probability density function (PDF) for the speed $v$ becomes:

$$f(v) = C\cdot 4\pi v^2 \exp\left(-\frac{mv^2}{2k_\text{B}T}\right)$$

To find the normalization constant $C$, we require the integral of the probability density function over all possible speeds to be unity:

$$\int_0^\infty f(v)\,dv = 1 \quad\rightarrow\quad C\int_0^\infty 4\pi v^2 \exp\left(-\frac{mv^2}{2k_\text{B}T}\right)dv = 1$$

Evaluating the integral using the known result $\int_0^\infty v^2 e^{-av^2}\,dv = \frac{\sqrt{\pi}}{4a^{3/2}}$, with $a=\frac{m}{2k_\text{B}T}$, we obtain:

$$C\cdot 4\pi\cdot\frac{\sqrt{\pi}}{4\left(\frac{m}{2k_\text{B}T}\right)^{3/2}} = 1 \quad\rightarrow\quad C = \left(\frac{m}{2\pi k_\text{B}T}\right)^{3/2}$$

Therefore, the Maxwell–Boltzmann speed distribution is:

$$f(v) = \left(\frac{m}{2\pi k_\text{B}T}\right)^{3/2} 4\pi v^2 \exp\left(-\frac{mv^2}{2k_\text{B}T}\right)$$
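As a sanity check, the following sketch integrates $f(v)$ numerically (illustrative parameters: O2 molecules near room temperature) and verifies that it is normalized and reproduces the known mean speed $\sqrt{8k_\text{B}T/\pi m}$:

```python
import math

# Numeric sanity check of the speed distribution (illustrative parameters:
# O2 molecules near room temperature, SI units).
kB = 1.380649e-23               # Boltzmann constant, J/K
m = 32 * 1.66053906660e-27      # mass of an O2 molecule, kg
T = 293.15                      # temperature, K

def f(v):
    """Maxwell-Boltzmann speed PDF."""
    a = (m / (2 * math.pi * kB * T)) ** 1.5
    return a * 4 * math.pi * v * v * math.exp(-m * v * v / (2 * kB * T))

# Midpoint-rule integration over a range wide enough to capture the tail.
dv, vmax = 0.5, 5000.0
vs = [dv * (i + 0.5) for i in range(int(vmax / dv))]
norm = sum(f(v) for v in vs) * dv      # should be ~1
mean = sum(v * f(v) for v in vs) * dv  # should be ~sqrt(8 kB T / (pi m))
print(norm, mean)
```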

Applicability

Equilibrium thermal distributions for particles with integer spin (bosons), half-integer spin (fermions), and classical (spinless) particles. Average occupancy $\langle n\rangle$ is shown versus energy $\epsilon$ relative to the system chemical potential $\mu$, where $T$ is the system temperature and $k_\text{B}$ is the Boltzmann constant.

Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in the Maxwell–Jüttner distribution), and to spaces other than three-dimensional.

Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle $A$ in state 1 and particle $B$ in state 2 is different from the case in which particle $B$ is in state 1 and particle $A$ is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox.

At the same time, there are no real particles that have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, etc.) as principally indistinguishable. Once this assumption is made, the particle statistics change. The change in entropy in the entropy of mixing example may be viewed as an example of a non-extensive entropy resulting from the distinguishability of the two types of particles being mixed.

Quantum particles are either bosons (following Bose–Einstein statistics) or fermions (subject to the Pauli exclusion principle, following instead Fermi–Dirac statistics). Both of these quantum statistics approach the Maxwell–Boltzmann statistics in the limit of high temperature and low particle density.
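A small numerical sketch of this limit, writing $x=(\epsilon-\mu)/k_\text{B}T$ so that the three average occupancies are $1/(e^x-1)$, $e^{-x}$, and $1/(e^x+1)$:

```python
import math

# Average occupancy per state as a function of x = (eps - mu)/(kB T):
# Bose-Einstein, Maxwell-Boltzmann, and Fermi-Dirac.
def n_mb(x):
    return math.exp(-x)

def n_be(x):
    return 1.0 / (math.exp(x) - 1.0)

def n_fd(x):
    return 1.0 / (math.exp(x) + 1.0)

# For large x (high temperature / low density) the two quantum occupancies
# bracket the classical one ever more tightly.
for x in (0.5, 2.0, 10.0):
    print(x, n_be(x), n_mb(x), n_fd(x))
```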

Derivations


Maxwell–Boltzmann statistics can be derived in various statistical mechanical thermodynamic ensembles,[1] notably the microcanonical ensemble and the canonical ensemble.

In each case it is necessary to assume that the particles are non-interacting, and that multiple particles can occupy the same state and do so independently.

Derivation from microcanonical ensemble


Suppose we have a container with a huge number of very small particles, all with identical physical characteristics (such as mass, charge, etc.). Let's refer to this as the system. Assume that though the particles have identical properties, they are distinguishable. For example, we might identify each particle by continually observing their trajectories, or by placing a marking on each one, e.g., drawing a different number on each one as is done with lottery balls.

The particles are moving inside that container in all directions with great speed. Because the particles are speeding around, they possess some energy. The Maxwell–Boltzmann distribution is a mathematical function that describes how many particles in the container have a certain energy. More precisely, the Maxwell–Boltzmann distribution gives the non-normalized probability (this means that the probabilities do not add up to 1) that the state corresponding to a particular energy is occupied.

In general, there may be many particles with the same amount of energy $\varepsilon$. Let the number of particles with the same energy $\varepsilon_1$ be $N_1$, the number of particles possessing another energy $\varepsilon_2$ be $N_2$, and so forth for all the possible energies $\{\varepsilon_i \mid i=1,2,3,\ldots\}$. To describe this situation, we say that $N_i$ is the occupation number of the energy level $i$. If we know all the occupation numbers $\{N_i \mid i=1,2,3,\ldots\}$, then we know the total energy of the system. However, because we can distinguish between which particles are occupying each energy level, the set of occupation numbers does not completely describe the state of the system. To completely describe the state of the system, or the microstate, we must specify exactly which particles are in each energy level. Thus when we count the number of possible states of the system, we must count each and every microstate, and not just the possible sets of occupation numbers.

To begin with, assume that there is only one state at each energy level $i$ (there is no degeneracy). What follows next is a bit of combinatorial thinking, which has little to do with accurately describing the reservoir of particles. For instance, say there is a total of $k$ boxes labelled $a,b,\ldots,k$. With the concept of combination, we can calculate how many ways there are to arrange $N$ balls into the set of boxes, where the order of balls within each box is not tracked. First, we select $N_a$ balls from a total of $N$ balls to place into box $a$, and continue to select for each box from the remaining balls, ensuring that every ball is placed in one of the boxes. The total number of ways that the balls can be arranged is

$$W = \frac{N!}{N_a!\,(N-N_a)!}\times\frac{(N-N_a)!}{N_b!\,(N-N_a-N_b)!}\times\frac{(N-N_a-N_b)!}{N_c!\,(N-N_a-N_b-N_c)!}\times\cdots\times\frac{(N-\cdots-N_\ell)!}{N_k!\,(N-\cdots-N_\ell-N_k)!} = \frac{N!}{N_a!N_b!N_c!\cdots N_k!\,(N-N_a-\cdots-N_\ell-N_k)!}$$

As every ball has been placed into a box, $(N-N_a-N_b-\cdots-N_k)! = 0! = 1$, and the expression simplifies to

$$W = N!\prod_{\ell=a,b,\ldots}^{k}\frac{1}{N_\ell!}$$

This is just the multinomial coefficient, the number of ways of arranging $N$ items into $k$ boxes, the $\ell$-th box holding $N_\ell$ items, ignoring the permutation of items in each box.
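The counting argument can be checked by brute force for a tiny, hypothetical example, comparing the multinomial coefficient against direct enumeration of assignments of labelled balls to boxes:

```python
import math
from itertools import product

# Brute-force check: the multinomial coefficient counts assignments of
# N labelled balls to k boxes with fixed occupation numbers.
def multinomial(occupations):
    w = math.factorial(sum(occupations))
    for n in occupations:
        w //= math.factorial(n)
    return w

def brute_force(occupations):
    k, n = len(occupations), sum(occupations)
    count = 0
    for assign in product(range(k), repeat=n):  # each ball picks a box
        if all(assign.count(box) == occupations[box] for box in range(k)):
            count += 1
    return count

occ = [2, 1, 1]  # N = 4 balls into k = 3 boxes
print(multinomial(occ), brute_force(occ))
```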

Now, consider the case where there is more than one way to put $N_i$ particles in box $i$ (i.e., taking the degeneracy problem into consideration). If the $i$-th box has a "degeneracy" of $g_i$, that is, it has $g_i$ "sub-boxes" ($g_i$ boxes with the same energy $\varepsilon_i$; states with the same energy are called degenerate states), such that any way of filling the $i$-th box where the number in the sub-boxes is changed is a distinct way of filling the box, then the number of ways of filling the $i$-th box must be increased by the number of ways of distributing the $N_i$ objects in the $g_i$ "sub-boxes". The number of ways of placing $N_i$ distinguishable objects in $g_i$ "sub-boxes" is $g_i^{N_i}$ (the first object can go into any of the $g_i$ boxes, the second object can also go into any of the $g_i$ boxes, and so on). Thus the number of ways $W$ that a total of $N$ particles can be classified into energy levels according to their energies, with each level $i$ having $g_i$ distinct states and the $i$-th level accommodating $N_i$ particles, is:

$$W = N!\prod_i \frac{g_i^{N_i}}{N_i!}$$

This is the form for $W$ first derived by Boltzmann. Boltzmann's fundamental equation $S=k_\text{B}\ln W$ relates the thermodynamic entropy $S$ to the number of microstates $W$, where $k_\text{B}$ is the Boltzmann constant. It was pointed out by Gibbs, however, that the above expression for $W$ does not yield an extensive entropy, and is therefore faulty. This problem is known as the Gibbs paradox. The problem is that the particles considered by the above equation are not indistinguishable. In other words, for two particles ($A$ and $B$) in two energy sublevels the population represented by $[A,B]$ is considered distinct from the population $[B,A]$, while for indistinguishable particles they are not. If we carry out the argument for indistinguishable particles, we are led to the Bose–Einstein expression for $W$:

$$W = \prod_i \frac{(N_i+g_i-1)!}{N_i!\,(g_i-1)!}$$

The Maxwell–Boltzmann distribution follows from this Bose–Einstein distribution for temperatures well above absolute zero, implying that $g_i\gg 1$. The Maxwell–Boltzmann distribution also requires low density, implying that $g_i\gg N_i$. Under these conditions, we may use Stirling's approximation for the factorial,

$$N! \approx N^N e^{-N},$$

to write:

$$W = \prod_i \frac{g_i}{N_i+g_i}\,\frac{(N_i+g_i)!}{N_i!\,g_i!} \approx \prod_i \frac{(N_i+g_i)!}{N_i!\,g_i!} \approx \prod_i \frac{(N_i+g_i)^{N_i+g_i}e^{-N_i-g_i}}{N_i!\,g_i^{g_i}e^{-g_i}} = \prod_i \frac{g_i^{N_i}(1+N_i/g_i)^{N_i+g_i}e^{-N_i}}{N_i!}$$

Using the fact that $(1+N_i/g_i)^{N_i+g_i}\approx e^{N_i}$ for $g_i\gg N_i$, we get:

$$W \approx \prod_i \frac{g_i^{N_i}}{N_i!}$$

This is essentially a division by $N!$ of Boltzmann's original expression for $W$, and this correction is referred to as correct Boltzmann counting.
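A quick numeric sketch of this limit: for $g_i \gg N_i$ the exact Bose–Einstein count approaches the corrected Boltzmann count $g_i^{N_i}/N_i!$ (illustrative values):

```python
import math

# For g >> N, the exact Bose-Einstein count (N+g-1)!/(N!(g-1)!) approaches
# the corrected Boltzmann count g**N / N!.
def w_bose_einstein(N, g):
    return math.comb(N + g - 1, N)

def w_boltzmann_corrected(N, g):
    return g ** N / math.factorial(N)

N = 3
for g in (10, 100, 10000):
    # Ratio tends to 1 as the degeneracy grows.
    print(g, w_bose_einstein(N, g) / w_boltzmann_corrected(N, g))
```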

We wish to find the $N_i$ for which the function $W$ is maximized, subject to the constraints of a fixed number of particles $\left(N=\sum_i N_i\right)$ and a fixed energy $\left(E=\sum_i N_i\varepsilon_i\right)$ in the container. The maxima of $W$ and $\ln(W)$ are achieved by the same values of $N_i$ and, since it is easier to accomplish mathematically, we maximize the latter function instead. We constrain our solution using Lagrange multipliers, forming the function:

$$f(N_1,N_2,\ldots,N_n) = \ln(W) + \alpha\Big(N-\sum_i N_i\Big) + \beta\Big(E-\sum_i N_i\varepsilon_i\Big)$$

Using Stirling's approximation once more,

$$\ln W = \ln\left[\prod_{i=1}^n \frac{g_i^{N_i}}{N_i!}\right] \approx \sum_{i=1}^n \left(N_i\ln g_i - N_i\ln N_i + N_i\right)$$

Finally,

$$f(N_1,N_2,\ldots,N_n) = \alpha N + \beta E + \sum_{i=1}^n \left[N_i\ln g_i - N_i\ln N_i + N_i - (\alpha+\beta\varepsilon_i)N_i\right]$$

To maximize the expression above, we apply Fermat's theorem (stationary points), according to which local extrema, if they exist, must occur at critical points, where the partial derivatives vanish:

$$\frac{\partial f}{\partial N_i} = \ln g_i - \ln N_i - (\alpha+\beta\varepsilon_i) = 0$$

Solving these equations ($i=1,\ldots,n$) gives an expression for $N_i$:

$$N_i = \frac{g_i}{e^{\alpha+\beta\varepsilon_i}}$$

Substituting this expression for $N_i$ into the equation for $\ln W$ and assuming that $N\gg 1$ yields:

$$\ln W = (\alpha+1)N + \beta E$$

or, rearranging:

$$E = \frac{\ln W}{\beta} - \frac{N}{\beta} - \frac{\alpha N}{\beta}$$

Boltzmann realized that this is just an expression of the Euler-integrated fundamental equation of thermodynamics. Identifying $E$ as the internal energy, the Euler-integrated fundamental equation states that

$$E = TS - PV + \mu N$$

where $T$ is the temperature, $P$ is the pressure, $V$ is the volume, and $\mu$ is the chemical potential. Boltzmann's equation $S=k_\text{B}\ln W$ is the realization that the entropy is proportional to $\ln W$, with the constant of proportionality being the Boltzmann constant. Using the ideal gas equation of state ($PV=Nk_\text{B}T$), it follows immediately that $\beta=1/k_\text{B}T$ and $\alpha=-\mu/k_\text{B}T$, so that the populations may now be written:

$$N_i = \frac{g_i}{e^{(\varepsilon_i-\mu)/(k_\text{B}T)}}$$

Note that the above formula is sometimes written:

$$N_i = \frac{g_i}{e^{\varepsilon_i/k_\text{B}T}/z}$$

where $z=\exp(\mu/k_\text{B}T)$ is the absolute activity.

Alternatively, we may use the fact that $\sum_i N_i = N$ to obtain the population numbers as

$$N_i = N\,\frac{g_i e^{-\varepsilon_i/k_\text{B}T}}{Z}$$

where $Z$ is the partition function defined by:

$$Z = \sum_i g_i e^{-\varepsilon_i/k_\text{B}T}$$
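The following sketch (illustrative, equally spaced levels with $\beta=1$) checks numerically that these populations do maximize $\ln W$: any small perturbation that preserves both the particle number and the total energy decreases $\ln W$:

```python
import math

# Illustrative check: the Boltzmann populations maximize ln W among
# perturbations that conserve particle number and total energy.
beta = 1.0                 # 1/(kB T), illustrative
g = [1.0, 2.0, 1.0]        # degeneracies g_i
eps = [0.0, 1.0, 2.0]      # equally spaced energies eps_i
N = 1000.0

Z = sum(gi * math.exp(-beta * e) for gi, e in zip(g, eps))
Ni = [N * gi * math.exp(-beta * e) / Z for gi, e in zip(g, eps)]

def lnW(ns):
    # Stirling form: ln W = sum_i (N_i ln g_i - N_i ln N_i + N_i)
    return sum(n * math.log(gi) - n * math.log(n) + n for n, gi in zip(ns, g))

base = lnW(Ni)
# For equally spaced levels, the shift (d, -2d, d) conserves both
# sum(N_i) and sum(N_i * eps_i).
for d in (-1.0, 0.5, 2.0):
    assert lnW([Ni[0] + d, Ni[1] - 2 * d, Ni[2] + d]) < base
print(base)
```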

In an approximation where $\varepsilon_i$ is considered to be a continuous variable, the Thomas–Fermi approximation yields a continuous degeneracy $g$ proportional to $\sqrt{\varepsilon}$, so that:

$$\frac{\sqrt{\varepsilon}\,e^{-\varepsilon/k_\text{B}T}}{\int_0^\infty \sqrt{\varepsilon}\,e^{-\varepsilon/k_\text{B}T}\,d\varepsilon} = \frac{\sqrt{\varepsilon}\,e^{-\varepsilon/k_\text{B}T}}{\frac{\sqrt{\pi}}{2}(k_\text{B}T)^{3/2}} = \frac{2\sqrt{\varepsilon}\,e^{-\varepsilon/k_\text{B}T}}{\sqrt{\pi (k_\text{B}T)^3}}$$

which is just the Maxwell–Boltzmann distribution for the energy.
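Numerically, in units where $k_\text{B}T = 1$ this energy distribution is $p(\varepsilon) = 2\sqrt{\varepsilon}\,e^{-\varepsilon}/\sqrt{\pi}$; a midpoint-rule sketch confirms it is normalized and gives the familiar mean energy $\tfrac{3}{2}k_\text{B}T$:

```python
import math

# Midpoint-rule check of the energy distribution, in units where kB*T = 1:
# p(eps) = 2 sqrt(eps) exp(-eps) / sqrt(pi).
def p(eps):
    return 2.0 * math.sqrt(eps) * math.exp(-eps) / math.sqrt(math.pi)

de, emax = 1e-3, 40.0
es = [de * (i + 0.5) for i in range(int(emax / de))]
norm = sum(p(e) for e in es) * de       # should be ~1
mean = sum(e * p(e) for e in es) * de   # should be ~3/2 (i.e. 3/2 kB T)
print(norm, mean)
```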

Derivation from canonical ensemble


In the above discussion, the Boltzmann distribution function was obtained by directly analysing the multiplicities of a system. Alternatively, one can make use of the canonical ensemble. In a canonical ensemble, a system is in thermal contact with a reservoir. While energy is free to flow between the system and the reservoir, the reservoir is taken to have an infinitely large heat capacity, so that it maintains a constant temperature $T$ for the combined system.

In the present context, our system is assumed to have the energy levels $\varepsilon_i$ with degeneracies $g_i$. As before, we would like to calculate the probability that our system has energy $\varepsilon_i$.

If our system is in state $s_1$, then there would be a corresponding number of microstates available to the reservoir. Call this number $\Omega_\text{R}(s_1)$. By assumption, the combined system (of the system we are interested in and the reservoir) is isolated, so all microstates are equally probable. Therefore, for instance, if $\Omega_\text{R}(s_1)=2\,\Omega_\text{R}(s_2)$, we can conclude that our system is twice as likely to be in state $s_1$ as in $s_2$. In general, if $P(s_i)$ is the probability that our system is in state $s_i$, then

$$\frac{P(s_1)}{P(s_2)} = \frac{\Omega_\text{R}(s_1)}{\Omega_\text{R}(s_2)}.$$

Since the entropy of the reservoir is $S_\text{R}=k_\text{B}\ln\Omega_\text{R}$, the above becomes

$$\frac{P(s_1)}{P(s_2)} = \frac{e^{S_\text{R}(s_1)/k_\text{B}}}{e^{S_\text{R}(s_2)/k_\text{B}}} = e^{(S_\text{R}(s_1)-S_\text{R}(s_2))/k_\text{B}}.$$

Next we recall the thermodynamic identity (from the first and second laws of thermodynamics):

$$dS_\text{R} = \frac{1}{T}\left(dU_\text{R} + P\,dV_\text{R} - \mu\,dN_\text{R}\right).$$

In a canonical ensemble, there is no exchange of particles, so the $dN_\text{R}$ term is zero. Similarly, $dV_\text{R}=0$. This gives

$$S_\text{R}(s_1)-S_\text{R}(s_2) = \frac{1}{T}\left(U_\text{R}(s_1)-U_\text{R}(s_2)\right) = -\frac{1}{T}\left(E(s_1)-E(s_2)\right),$$

where $U_\text{R}(s_i)$ and $E(s_i)$ denote the energies of the reservoir and the system at $s_i$, respectively. For the second equality we have used the conservation of energy. Substituting into the first equation relating $P(s_1),\,P(s_2)$:

$$\frac{P(s_1)}{P(s_2)} = \frac{e^{-E(s_1)/k_\text{B}T}}{e^{-E(s_2)/k_\text{B}T}},$$

which implies, for any state $s$ of the system,

$$P(s) = \frac{1}{Z}e^{-E(s)/k_\text{B}T},$$

where $Z$ is an appropriately chosen "constant" to make the total probability 1 ($Z$ is constant provided that the temperature $T$ is invariant):

$$Z = \sum_s e^{-E(s)/k_\text{B}T},$$

where the index $s$ runs through all microstates of the system. $Z$ is sometimes called the Boltzmann sum over states (or "Zustandssumme" in the original German). If we index the summation by the energy eigenvalues instead of all possible states, degeneracy must be taken into account. The probability of our system having energy $\varepsilon_i$ is simply the sum of the probabilities of all corresponding microstates:

$$P(\varepsilon_i) = \frac{1}{Z}g_i e^{-\varepsilon_i/k_\text{B}T}$$

where, with the obvious modification,

$$Z = \sum_j g_j e^{-\varepsilon_j/k_\text{B}T};$$

this is the same result as before.
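A minimal sketch of this result (hypothetical levels and degeneracies, units where $k_\text{B}T=1$): the level probabilities sum to one, and per-state probabilities depend only on energy differences:

```python
import math

# Boltzmann probabilities over degenerate levels (hypothetical values,
# units where kB*T = 1).
kBT = 1.0
eps = [0.0, 1.0, 2.0]   # energy eigenvalues eps_i
g = [1, 3, 5]           # degeneracies g_i

Z = sum(gi * math.exp(-e / kBT) for gi, e in zip(g, eps))
P = [gi * math.exp(-e / kBT) / Z for gi, e in zip(g, eps)]
print(P, sum(P))        # probabilities sum to 1

# Per-state probabilities depend only on the energy difference:
# (P_i/g_i) / (P_j/g_j) = exp(-(eps_i - eps_j)/kBT).
ratio = (P[1] / g[1]) / (P[0] / g[0])
print(ratio, math.exp(-1.0))
```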

Comments on this derivation:

  • Notice that in this formulation, the initial assumption "... suppose the system has total $N$ particles ..." is dispensed with. Indeed, the number of particles possessed by the system plays no role in arriving at the distribution. Rather, how many particles would occupy states with energy $\varepsilon_i$ follows as an easy consequence.
  • What has been presented above is essentially a derivation of the canonical partition function. As one can see by comparing the definitions, the Boltzmann sum over states is equal to the canonical partition function.
  • Exactly the same approach can be used to derive Fermi–Dirac and Bose–Einstein statistics. However, there one would replace the canonical ensemble with the grand canonical ensemble, since there is exchange of particles between the system and the reservoir. Also, the system one considers in those cases is a single particle state, not a particle. (In the above discussion, we could have assumed our system to be a single atom.)

Derivation from canonical ensemble (indistinguishable particles)


The Maxwell–Boltzmann distribution describes the probability of a particle occupying an energy state $E$ in a classical system. It takes the following forms in the two energy regimes:

$$f_\text{MB,high}(E) = \exp\left(-\frac{E-E_\text{F}}{k_\text{B}T}\right), \qquad \text{for } E\gg E_\text{F}$$

$$f_\text{MB,low}(E) = 1-\exp\left(\frac{E-E_\text{F}}{k_\text{B}T}\right), \qquad \text{for } E\ll E_\text{F}$$

For a system of indistinguishable particles, we start with the canonical ensemble formalism.

In a system with energy levels $\{E_i\}$, let $n_i$ be the number of particles in state $i$. The total energy and particle number are:

$$E_\text{total} = \sum_i n_i E_i, \qquad N = \sum_i n_i$$

For a specific configuration $\{n_i\}$, the probability in the canonical ensemble is:

$$P(\{n_i\}) = \frac{1}{Z_N}\,\frac{N!}{\prod_i n_i!}\,\prod_i \left(e^{-\beta E_i}\right)^{n_i}$$

The factor $\frac{N!}{\prod_i n_i!}$ accounts for the number of ways to distribute $N$ indistinguishable particles among the states.

For Maxwell–Boltzmann statistics, we assume that the average occupation number of any state is much less than 1 ($\langle n_i\rangle\ll 1$), which leads to:

$$\langle n_i\rangle \approx e^{-\beta(E_i-\mu)}$$

where $\mu$ is the chemical potential determined by $\sum_i \langle n_i\rangle = N$.
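A sketch of how $\mu$ is fixed in practice (hypothetical, closely spaced states with $\beta=1$): since $\sum_i e^{-\beta(E_i-\mu)} = e^{\beta\mu}\sum_i e^{-\beta E_i}$, the constraint can be solved for $\mu$ in closed form:

```python
import math

# Fix mu from the constraint sum_i <n_i> = N, with
# <n_i> = exp(-beta (E_i - mu))  (hypothetical, closely spaced states).
beta = 1.0
E = [0.01 * i for i in range(1000)]  # 1000 single-particle states
N = 5.0                              # dilute: far fewer particles than states

# sum_i exp(-beta (E_i - mu)) = exp(beta mu) * sum_i exp(-beta E_i) = N
s = sum(math.exp(-beta * e) for e in E)
mu = math.log(N / s) / beta

n = [math.exp(-beta * (e - mu)) for e in E]
print(mu, sum(n), max(n))  # sum recovers N; occupations stay well below 1
```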

For energy states near the Fermi energy $E_\text{F}$, we can take $\mu\approx E_\text{F}$, giving:

$$f_\text{MB}(E) = e^{-(E-E_\text{F})/k_\text{B}T}$$

For high energies ($E\gg E_\text{F}$), this directly gives:

$$f_\text{MB,high}(E) = e^{-(E-E_\text{F})/k_\text{B}T}$$

For low energies ($E\ll E_\text{F}$), using the approximation $e^{-x}\approx 1-x$ for small $x$:

$$f_\text{MB,low}(E) \approx 1 - e^{(E-E_\text{F})/k_\text{B}T}$$

This completes the derivation of the Maxwell–Boltzmann distribution in both energy regimes.


Notes

  1. For example, two simple point particles may have the same energy but different momentum vectors. They may be distinguished from each other on this basis, and the degeneracy will be the number of possible ways that they can be so distinguished.

References

  1. Tolman, R. C. (1938). The Principles of Statistical Mechanics. Dover Publications. ISBN 9780486638966.
