2. Classical Gases

Our goal in this section is to use the techniques of statistical mechanics to describe the dynamics of the simplest system: a gas. This means a bunch of particles, flying around in a box. Although much of the last section was formulated in the language of quantum mechanics, here we will revert back to classical mechanics. Nonetheless, a recurrent theme will be that the quantum world is never far behind: we'll see several puzzles, both theoretical and experimental, which can only truly be resolved by turning on ℏ.

2.1 The Classical Partition Function

For most of this section we will work in the canonical ensemble. We start by reformulating the idea of a partition function in classical mechanics. We'll consider a simple system – a single particle of mass m moving in three dimensions in a potential V(q). The classical Hamiltonian of the system³ is the sum of kinetic and potential energy,

    H = \frac{p^2}{2m} + V(q)

We earlier defined the partition function (1.21) to be the sum over all quantum states of the system. Here we want to do something similar. In classical mechanics, the state of a system is determined by a point in phase space. We must specify both the position and momentum of each of the particles — only then do we have enough information to figure out what the system will do for all times in the future. This motivates the definition of the partition function for a single classical particle as the integration over phase space,

    Z_1 = \frac{1}{h^3} \int d^3q \, d^3p \; e^{-\beta H(p,q)}    (2.1)

The only slightly odd thing is the factor of 1/h³ that sits out front. It is a quantity that needs to be there simply on dimensional grounds: Z should be dimensionless, so h must have dimension (length × momentum) or, equivalently, Joule-seconds (Js). The actual value of h won't matter for any physical observable, like heat capacity, because we always take log Z and then differentiate. Despite this, there is actually a correct value for h: it is Planck's constant, h = 2πℏ ≈ 6.6 × 10⁻³⁴ Js.

It is very strange to see Planck's constant in a formula that is supposed to be classical. What's it doing there? In fact, it is a vestigial object, like the male nipple. It is redundant, serving only as a reminder of where we came from. And the classical world came from the quantum.

³ If you haven't taken the Classical Dynamics course, you should think of the Hamiltonian as the energy of the system expressed in terms of the position and momentum of the particle. More details can be found in the lecture notes at: http://www.damtp.cam.ac.uk/user/tong/dynamics.html
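As a concrete illustration (our addition, not part of the notes), one can check numerically that (2.1), with h = 2πℏ, really is the high-temperature limit of the quantum partition function. The sketch below does this for a one-dimensional harmonic oscillator, where both sides are known exactly; the natural units are our arbitrary choice.

```python
# A minimal numerical sketch (assumption: 1d harmonic oscillator,
# H = p^2/2m + m w^2 q^2 / 2, in natural units hbar = m = w = kB = 1).
# Classical: (1/h) \int dq dp e^{-beta H} = kB T / (hbar w), after doing
# the two Gaussian integrals. Quantum: Z = sum_n e^{-beta hbar w (n+1/2)}.
import numpy as np

hbar = m = w = kB = 1.0

for T in [0.2, 1.0, 5.0, 25.0]:
    beta = 1.0 / (kB * T)
    Z_classical = 1.0 / (beta * hbar * w)
    n = np.arange(5000)                      # enough levels to converge the sum
    Z_quantum = np.sum(np.exp(-beta * hbar * w * (n + 0.5)))
    print(f"T = {T:5.1f}   classical: {Z_classical:8.3f}   quantum: {Z_quantum:8.3f}")

# The two columns disagree badly at T = 0.2 but agree to a fraction of a
# percent by T = 25, as the classical limit suggests.
```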
2.1.1 From Quantum to Classical

It is possible to derive the classical partition function (2.1) directly from the quantum partition function (1.21) without resorting to hand-waving. It will also show us why the factor of 1/h sits outside the partition function. The derivation is a little tedious, but worth seeing. (Similar techniques are useful in later courses when you first meet the path integral). To make life easier, let's consider a single particle moving in one spatial dimension. It has position operator q̂, momentum operator p̂ and Hamiltonian,

    \hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{q})

If |n⟩ is the energy eigenstate with energy E_n, the quantum partition function is

    Z_1 = \sum_n e^{-\beta E_n} = \sum_n \langle n| e^{-\beta \hat{H}} |n\rangle    (2.2)

In what follows, we'll make liberal use of the fact that we can insert the identity operator anywhere in this expression. Identity operators can be constructed by summing over any complete basis of states. We'll need two such constructions, using the position eigenvectors |q⟩ and the momentum eigenvectors |p⟩,

    1 = \int dq \, |q\rangle\langle q| \quad , \quad 1 = \int dp \, |p\rangle\langle p|

We start by inserting two copies of the identity built from position eigenstates,

    Z_1 = \sum_n \int dq \, dq' \, \langle n|q\rangle \langle q| e^{-\beta \hat{H}} |q'\rangle \langle q'|n\rangle
        = \int dq \, dq' \, \langle q| e^{-\beta \hat{H}} |q'\rangle \sum_n \langle q'|n\rangle \langle n|q\rangle

But now we can replace \sum_n |n\rangle\langle n| with the identity matrix and use the fact that ⟨q'|q⟩ = δ(q' − q), to get

    Z_1 = \int dq \, \langle q| e^{-\beta \hat{H}} |q\rangle    (2.3)

We see that the result is to replace the sum over energy eigenstates in (2.2) with a sum (or integral) over position eigenstates in (2.3). If you wanted, you could play the same game and get the sum over any complete basis of eigenstates of your choosing. As an aside, this means that we can write the partition function in a basis independent fashion as

    Z_1 = \mathrm{Tr} \, e^{-\beta \hat{H}}

So far, our manipulations could have been done for any quantum system. Now we want to use the fact that we are taking the classical limit. This comes about when we try to factorize e^{-βĤ} into a momentum term and a position term. The trouble is that this isn't always possible when there are matrices (or operators) in the exponent. Recall that,

    e^{\hat{A}} e^{\hat{B}} = e^{\hat{A} + \hat{B} + \frac{1}{2}[\hat{A},\hat{B}] + \ldots}

For us [q̂, p̂] = iℏ. This means that if we're willing to neglect terms of order ℏ — which is the meaning of taking the classical limit — then we can write

    e^{-\beta \hat{H}} = e^{-\beta \hat{p}^2/2m} \, e^{-\beta V(\hat{q})} + \mathcal{O}(\hbar)

We can now start to replace some of the operators in the exponent, like V(q̂), with functions V(q). (The notational difference is subtle, but important, in the expressions below!),

    Z_1 = \int dq \, \langle q| e^{-\beta \hat{p}^2/2m} e^{-\beta V(\hat{q})} |q\rangle
        = \int dq \, e^{-\beta V(q)} \langle q| e^{-\beta \hat{p}^2/2m} |q\rangle
        = \int dq \, dp \, dp' \, e^{-\beta V(q)} \langle q|p\rangle \langle p| e^{-\beta \hat{p}^2/2m} |p'\rangle \langle p'|q\rangle
        = \frac{1}{2\pi\hbar} \int dq \, dp \; e^{-\beta H(p,q)}

where, in the final line, we've used ⟨p|e^{-βp̂²/2m}|p'⟩ = e^{-βp²/2m} δ(p − p') together with the identity

    \langle q|p\rangle = \frac{1}{\sqrt{2\pi\hbar}} \, e^{ipq/\hbar}

This completes the derivation.

2.2 Ideal Gas

The first classical gas that we'll consider consists of N particles trapped inside a box of volume V. The gas is "ideal". This simply means that the particles do not interact with each other. For now, we'll also assume that the particles have no internal structure, so no rotational or vibrational degrees of freedom. This situation is usually referred to as the monatomic ideal gas. The Hamiltonian for each particle is simply the kinetic energy,

    H = \frac{p^2}{2m}

And the partition function for a single particle is

    Z_1(V, T) = \frac{1}{(2\pi\hbar)^3} \int d^3q \, d^3p \; e^{-\beta p^2/2m}    (2.4)

The integral over position is now trivial and gives ∫d³q = V, the volume of the box. The integral over momentum is also straightforward, since it factorizes into separate integrals over p_x, p_y and p_z, each of which is a Gaussian of the form,

    \int dx \, e^{-a x^2} = \sqrt{\frac{\pi}{a}}

So we have

    Z_1 = V \left( \frac{m k_B T}{2\pi\hbar^2} \right)^{3/2}

We'll meet the combination of factors in the brackets a lot in what follows, so it is useful to give it a name. We'll write

    Z_1 = \frac{V}{\lambda^3}    (2.5)

The quantity λ goes by the name of the thermal de Broglie wavelength,

    \lambda = \sqrt{\frac{2\pi\hbar^2}{m k_B T}}    (2.6)

λ has the dimensions of length. We will see later that you can think of λ as something like the average de Broglie wavelength of a particle at temperature T. Notice that it is a quantum object – it has an ℏ sitting in it – so we expect that it will drop out of any genuinely classical quantity that we compute. The partition function itself (2.5) is counting the number of these thermal wavelengths that we can fit into volume V.
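To get a feel for the numbers (an illustration we are adding, not part of the notes), here is the thermal wavelength (2.6) evaluated for a few noble-gas atoms at room temperature; the constants and atomic masses below are standard values.

```python
# Evaluate the thermal de Broglie wavelength (2.6) at T = 300 K.
# Physical constants are CODATA values; masses are standard atomic weights.
import numpy as np

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K
amu  = 1.66053907e-27    # kg

T = 300.0
for name, m_amu in [("He", 4.00), ("Ne", 20.18), ("Ar", 39.95), ("Xe", 131.29)]:
    lam = np.sqrt(2 * np.pi * hbar**2 / (m_amu * amu * kB * T))
    print(f"{name:2s}: lambda = {lam * 1e12:5.1f} pm")

# He comes out near 50 pm, Xe near 9 pm. The interparticle spacing
# (V/N)^(1/3) of a gas at atmospheric pressure is a few nanometres, so
# lambda^3 << V/N and the classical treatment is self-consistent.
```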
Z_1 is the partition function for a single particle. We have N non-interacting particles in the box, so the partition function of the whole system is

    Z(N, V, T) = Z_1^N = \frac{V^N}{\lambda^{3N}}    (2.7)

(Full disclosure: there's a slightly subtle point that we're brushing under the carpet here and this equation isn't quite right. This won't affect our immediate discussion and we'll explain the issue in more detail in Section 2.2.3.)

[Figure 8: Deviations from the ideal gas law at sensible densities. Figure 9: Deviations from the ideal gas law at extreme densities.]

Armed with the partition function Z, we can happily calculate anything that we like. Let's start with the pressure, which can be extracted from the partition function by first computing the free energy (1.36) and then using (1.35). We have

    p = -\frac{\partial F}{\partial V} = \frac{\partial}{\partial V}\left( k_B T \log Z \right) = \frac{N k_B T}{V}    (2.8)

This equation is an old friend – it is the ideal gas law, pV = N k_B T, that we all met in kindergarten. Notice that the thermal wavelength λ has indeed disappeared from the discussion as expected. Equations of this form, which link pressure, volume and temperature, are called equations of state. We will meet many throughout this course.

As the plots above show⁴, the ideal gas law is an extremely good description of gases at low densities. Gases deviate from this ideal behaviour as the densities increase and the interactions between atoms become important. We will see how this comes about from the viewpoint of microscopic forces in Section 2.5.

It is worth pointing out that this derivation should calm any lingering fears that you had about the definition of temperature given in (1.7). The object that we call T really does coincide with the familiar notion of temperature applied to gases. But the key property of the temperature is that if two systems are in equilibrium then they have the same T. That's enough to ensure that equation (1.7) is the right definition of temperature for all systems, because we can always put any system in equilibrium with an ideal gas.

⁴ Both figures are taken from the web textbook "General Chemistry" and credited to John Hutchinson.

2.2.1 Equipartition of Energy

The partition function (2.7) has more in store for us. We can compute the average energy of the ideal gas,

    E = -\frac{\partial}{\partial \beta} \log Z = \frac{3}{2} N k_B T    (2.9)

There's an important, general lesson lurking in this formula. To highlight this, it is worth repeating our analysis for an ideal gas in an arbitrary number of spatial dimensions, D. A simple generalization of the calculations above shows that

    Z = \frac{V^N}{\lambda^{DN}} \quad \Rightarrow \quad E = \frac{D}{2} N k_B T

Each particle has D degrees of freedom (because it can move in one of D spatial directions). And each particle contributes ½ D k_B T towards the average energy. This is a general rule of thumb, which holds for all classical systems: the average energy of each free degree of freedom in a system at temperature T is ½ k_B T. This is called the equipartition of energy. As stated, it holds only for degrees of freedom in the absence of a potential. (There is a modified version if you include a potential). Moreover, it holds only for classical systems or quantum systems at suitably high temperatures.

We can use the result above to see why the thermal de Broglie wavelength (2.6) can be thought of as roughly equal to the average de Broglie wavelength of a particle. Equating the average energy (2.9) to the kinetic energy E = p²/2m tells us that the average (root mean square) momentum carried by each particle is p ~ √(m k_B T). In quantum mechanics, the de Broglie wavelength of a particle is λ_dB = h/p, which (up to numerical factors of 2 and π) agrees with our formula (2.6).
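Both (2.8) and (2.9) come from differentiating log Z, and this can be checked mechanically. The following sketch (our own illustration, in arbitrary natural units) differentiates log Z = N log(V/λ³) numerically and compares with the closed-form answers:

```python
# Numerically differentiate log Z for the ideal gas (2.7) to recover the
# pressure (2.8) and energy (2.9). Units are natural (hbar = kB = m = 1);
# the particular N, V, T below are arbitrary choices.
import numpy as np

hbar = kB = m = 1.0
N, V, T = 100.0, 50.0, 2.0
beta = 1.0 / (kB * T)

def logZ(V, beta):
    lam = np.sqrt(2 * np.pi * hbar**2 * beta / m)   # thermal wavelength (2.6)
    return N * np.log(V / lam**3)

eps = 1e-6
p = kB * T * (logZ(V + eps, beta) - logZ(V - eps, beta)) / (2 * eps)
E = -(logZ(V, beta + eps) - logZ(V, beta - eps)) / (2 * eps)

print(p, N * kB * T / V)       # both ~4.0:   p = N kB T / V
print(E, 1.5 * N * kB * T)     # both ~300.0: E = (3/2) N kB T
```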
Finally, returning to the reality of d = 3 dimensions, we can compute the heat capacity for a monatomic ideal gas. It is

    C_V = \left. \frac{\partial E}{\partial T} \right|_V = \frac{3}{2} N k_B    (2.10)

2.2.2 The Sociological Meaning of Boltzmann's Constant

We introduced Boltzmann's constant k_B in our original definition of entropy (1.2). It has the value,

    k_B = 1.381 \times 10^{-23} \; \mathrm{J\,K^{-1}}

In some sense, there is no deep physical meaning to Boltzmann's constant. It is merely a conversion factor that allows us to go between temperature and energy, as reflected in (1.7). It is necessary to include it in the equations only for historical reasons: our ancestors didn't realise that temperature and energy were closely related and measured them in different units.

Nonetheless, we could ask: why does k_B have the value above? It doesn't seem a particularly natural number. The reason is that both the units of temperature (Kelvin) and energy (Joule) are picked to reflect the conditions of human life. In the everyday world around us, measurements of temperature and energy involve fairly ordinary numbers: room temperature is roughly 300 K; the energy required to lift an apple back up to the top of the tree is a few Joules. Similarly, in an everyday setting, all the measurable quantities — p, V and T — in the ideal gas equation are fairly normal numbers when measured in SI units. The only way this can be true is if the combination N k_B is a fairly ordinary number, of order one. In other words, the number of atoms must be huge,

    N \sim 10^{23}    (2.11)

This then is the real meaning of the value of Boltzmann's constant: atoms are small.

It's worth stressing this point. Atoms aren't just small: they're really really small. 10²³ is an astonishingly large number. The number of grains of sand in all the beaches in the world is around 10¹⁸. The number of stars in our galaxy is about 10¹¹. The number of stars in the entire visible Universe is probably around 10²². And yet the number of water molecules in a cup of tea is more than 10²³.

Chemist Notation

While we're talking about the size of atoms, it is probably worth reminding you of the notation used by chemists. They too want to work with numbers of order one. For this reason, they define a mole to be the number of atoms in one gram of Hydrogen. (Actually, it is the number of atoms in 12 grams of Carbon-12, but this is roughly the same thing). The mass of Hydrogen is 1.6 × 10⁻²⁷ kg, so the number of atoms in a mole is Avogadro's number,

    N_A \approx 6 \times 10^{23}

The number of moles in our gas is then n = N/N_A and the ideal gas law can be written as

    pV = nRT

where R = N_A k_B is called the universal gas constant. Its value is a nice sensible number with no silly power in the exponent: R ≈ 8 J K⁻¹ mol⁻¹.
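As a quick sanity check (ours, not the notes'), multiplying the two constants together with their modern SI-defined values reproduces the quoted R:

```python
# R = NA * kB, using the exact SI-defined values of both constants.
NA = 6.02214076e23    # mol^-1
kB = 1.380649e-23     # J K^-1
print(NA * kB)        # 8.31446... J K^-1 mol^-1, the universal gas constant
```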
2.2.3 Entropy and Gibbs's Paradox

    "It has always been believed that Gibbs's paradox embodied profound thought. That it was intimately linked up with something so important and entirely new could hardly have been foreseen."
                                                        Erwin Schrödinger

We said earlier that the formula for the partition function (2.7) isn't quite right. What did we miss? We actually missed a subtle point from quantum mechanics: quantum particles are indistinguishable. If we take two identical atoms and swap their positions, this doesn't give us a new state of the system – it is the same state that we had before. (Up to a sign that depends on whether the atoms are bosons or fermions – we'll discuss this aspect in more detail in Sections 3.5 and 3.6). However, we haven't taken this into account – we wrote the expression Z = Z_1^N, which would be true if all the N particles in the system were distinguishable — for example, if each of the particles were of a different type. But this naive partition function overcounts the number of states in the system when we're dealing with indistinguishable particles.

It is a simple matter to write down the partition function for N indistinguishable particles. We simply need to divide by the number of ways to permute the particles. In other words, for the ideal gas the partition function is

    Z_{ideal}(N, V, T) = \frac{1}{N!} Z_1^N = \frac{V^N}{N! \, \lambda^{3N}}    (2.12)

The extra factor of N! doesn't change the calculations of pressure or energy since, for each, we had to differentiate log Z and any overall factor drops out. However, it does change the entropy since this is given by,

    S = \frac{\partial}{\partial T}\left( k_B T \log Z_{ideal} \right)

which includes a factor of log Z without any derivative. Of course, since the entropy is counting the number of underlying microstates, we would expect it to know about whether particles are distinguishable or indistinguishable. Using the correct partition function (2.12) and Stirling's formula, the entropy of an ideal gas is given by,

    S = N k_B \left[ \log\left( \frac{V}{N \lambda^3} \right) + \frac{5}{2} \right]    (2.13)

This result is known as the Sackur-Tetrode equation. Notice that not only is the entropy sensitive to the indistinguishability of the particles, but it also depends on λ. However, the entropy is not directly measurable classically. We can only measure entropy differences by integrating the heat capacity as in (1.10).

The necessity of adding an extra factor of N! was noticed before the advent of quantum mechanics by Gibbs. He was motivated by the change in entropy of mixing between two gases. Suppose that we have two different gases, say red and blue. Each has the same number of particles N and sits in a volume V, separated by a partition. When the partition is removed the gases mix and we expect the entropy to increase. But if the gases are of the same type, removing the partition shouldn't change the macroscopic state of the gas. So why should the entropy increase? This was referred to as the Gibbs paradox. Including the factor of N! in the partition function ensures that the entropy does not increase when identical atoms are mixed.
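To make the resolution concrete, here is a small sketch (our own, with toy numbers) that applies the Sackur-Tetrode formula (2.13) to the partition-removal experiment:

```python
# Entropy bookkeeping for the Gibbs paradox, using Sackur-Tetrode (2.13).
# Toy choices: kB = lambda = 1; the values of N and V are arbitrary.
import numpy as np

kB, lam = 1.0, 1.0

def S(N, V):
    return N * kB * (np.log(V / (N * lam**3)) + 2.5)   # Sackur-Tetrode (2.13)

N, V = 1000.0, 1.0e6
S_before = 2 * S(N, V)        # two boxes of identical gas, each (N, V)
S_after  = S(2 * N, 2 * V)    # partition removed: one box with (2N, 2V)
print(S_after - S_before)     # 0.0 -- no entropy of mixing for identical gases

# Without the 1/N!, S would contain N kB log(V / lam^3) instead of
# N kB log(V / (N lam^3)), and the same comparison would give the
# paradoxical increase Delta S = 2 N kB log 2.
```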
2.2.4 The Ideal Gas in the Grand Canonical Ensemble

It is worth briefly looking at the ideal gas in the grand canonical ensemble. Recall that in such an ensemble, the gas is free to exchange both energy and particles with the outside reservoir. You could think of the system as some fixed subvolume inside a much larger gas. If there are no walls to define this subvolume then particles, and hence energy, can happily move in and out. We can ask how many particles will, on average, be inside this volume and what fluctuations in particle number will occur. More importantly, we can also start to gain some intuition for this strange quantity called the chemical potential, µ.

The grand partition function (1.39) for the ideal gas is

    \mathcal{Z}_{ideal}(\mu, V, T) = \sum_{N=0}^{\infty} e^{\beta\mu N} Z_{ideal}(N, V, T) = \exp\left( \frac{e^{\beta\mu} V}{\lambda^3} \right)

From this we can determine the average particle number,

    N = \frac{1}{\beta} \frac{\partial}{\partial \mu} \log \mathcal{Z} = \frac{e^{\beta\mu} V}{\lambda^3}

which, rearranging, gives

    \mu = k_B T \log\left( \frac{\lambda^3 N}{V} \right)    (2.14)

If λ³ < V/N then the chemical potential is negative. Recall that λ is roughly the average de Broglie wavelength of each particle, while V/N is the average volume taken up by each particle. But whenever the de Broglie wavelength of particles becomes comparable to the inter-particle separation, quantum effects become important. In other words, to trust our classical calculation of the ideal gas, we must have λ³ ≪ V/N and, correspondingly, µ < 0.

At first sight, it is slightly strange that µ is negative. When we introduced µ in Section 1.4.1, we said that it should be thought of as the energy cost of adding an extra particle to the system. Surely that energy should be positive! To see why this isn't the case, we should look more closely at the definition. From the energy variation (1.38), we have

    \mu = \left. \frac{\partial E}{\partial N} \right|_{S,V}

So the chemical potential should be thought of as the energy cost of adding an extra particle at fixed entropy and volume. But adding a particle will give more ways to share the energy around and so increase the entropy. If we insist on keeping the entropy fixed, we must take energy out of the system as we add the particle, which is why µ is negative for the classical ideal gas. Nonetheless, it is possible to have µ > 0. This can occur if we have a suitably strong repulsive interaction between particles so that there's a large energy cost associated to throwing in one extra. We also have µ > 0 for fermion systems at low temperatures, as we will see in Section 3.6.

We can also compute the fluctuation in the particle number,

    \Delta N^2 = \frac{1}{\beta^2} \frac{\partial^2}{\partial \mu^2} \log \mathcal{Z}_{ideal} = N

As promised in Section 1.4.1, the relative fluctuations ΔN/N = 1/√N are vanishingly small in the thermodynamic N → ∞ limit. Finally, it is very easy to compute the equation of state in the grand canonical ensemble, because (1.45) and (1.48) tell us that

    pV = k_B T \log \mathcal{Z} = k_B T \, \frac{e^{\beta\mu} V}{\lambda^3} = N k_B T    (2.15)

which gives us back the ideal gas law.

2.3 Maxwell Distribution

Our discussion above focusses on understanding macroscopic properties of the gas such as pressure or heat capacity. But we can also use the methods of statistical mechanics to get a better handle on the microscopic properties of the gas. Like everything else, the information is hidden in the partition function. Let's return to the form of the single particle partition function (2.4) before we do the integrals. We'll still do the trivial spatial integral ∫d³q = V, but we'll hold off on the momentum integral and instead change variables from momentum to velocity, p = mv. Then the single particle partition function is

    Z_1 = \frac{m^3 V}{(2\pi\hbar)^3} \int d^3v \; e^{-\beta m v^2/2} = \frac{4\pi m^3 V}{(2\pi\hbar)^3} \int dv \; v^2 e^{-\beta m v^2/2}

We can compare this to the original definition of the partition function: the sum over states of the probability of that state. But here too, the partition function is written as a sum, now over speeds. The integrand must therefore have the interpretation as the probability distribution over speeds. The probability that the atom has speed between v and v + dv is

    f(v) \, dv = \mathcal{N} v^2 e^{-m v^2 / 2 k_B T} \, dv    (2.16)

where the normalization factor 𝒩 can be determined by insisting that probabilities sum to one, ∫₀^∞ f(v) dv = 1, which gives

    \mathcal{N} = 4\pi \left( \frac{m}{2\pi k_B T} \right)^{3/2}

This is the Maxwell distribution. It is sometimes called the Maxwell-Boltzmann distribution. Figure 10 shows this distribution for a variety of gases with different masses at the same temperature, from the slow heavy Xenon (purple) to light, fast Helium (blue). We can use it to determine various average properties of the speeds of atoms in a gas. For example, the mean square speed is

    \langle v^2 \rangle = \int_0^\infty dv \; v^2 f(v) = \frac{3 k_B T}{m}

This is in agreement with the equipartition of energy: the average kinetic energy of the gas is E = ½ m ⟨v²⟩ = (3/2) k_B T.

[Figure 10: Maxwell distribution for the noble gases He, Ne, Ar and Xe.]
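A quick Monte Carlo cross-check (our illustration, in units where k_B = m = 1 and T is an arbitrary choice): the distribution (2.16) is what you get when each velocity component is an independent Gaussian of variance k_BT/m, so sampling components directly should reproduce ⟨v²⟩ = 3k_BT/m.

```python
# Sample the Maxwell distribution (2.16) by drawing the three velocity
# components independently from a Gaussian of variance kB T / m, then
# check <v^2> = 3 kB T / m. Units: kB = m = 1, T = 2 (arbitrary).
import numpy as np

rng = np.random.default_rng(0)
kB = m = 1.0
T = 2.0

v = rng.normal(0.0, np.sqrt(kB * T / m), size=(1_000_000, 3))
speed_sq = (v**2).sum(axis=1)

print(speed_sq.mean())     # ~6.0 from the samples
print(3 * kB * T / m)      # exactly 6.0
```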
Maxwell's Argument

The above derivation tells us the distribution of velocities in a non-interacting gas of particles. Remarkably, the Maxwell distribution also holds in the presence of any interactions. In fact, Maxwell's original derivation of the distribution makes no reference to any properties of the gas. It is very slick!

Let's first think about the distribution of velocities in the x direction; we'll call this distribution φ(v_x). Rotational symmetry means that we must have the same distribution of velocities in the y and z directions. However, rotational invariance also requires that the full distribution can't depend on the direction of the velocity; it can only depend on the speed v = √(v_x² + v_y² + v_z²). This means that we need to find functions F(v) and φ(v_x) such that

    F(v) \, dv_x \, dv_y \, dv_z = \phi(v_x)\phi(v_y)\phi(v_z) \, dv_x \, dv_y \, dv_z

It doesn't look as if we possibly have enough information to solve this equation for both F and φ. But, remarkably, there is only one solution. The only function which satisfies this equation is

    \phi(v_x) = A e^{-B v_x^2}

for some constants A and B. Thus the distribution over speeds must be

    F(v) \, dv_x \, dv_y \, dv_z = 4\pi v^2 F(v) \, dv = 4\pi A^3 v^2 e^{-B v^2} \, dv

We see that the functional form of the distribution arises from rotational invariance alone. To determine the coefficient B = m/2k_BT we need the more elaborate techniques of statistical mechanics that we saw above. (In fact, one can derive it just from equipartition of energy).
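The uniqueness claim can be verified in a couple of lines. The worked steps below are our addition (the standard separation-of-variables argument, with −2B our name for the separation constant):

```latex
% Take logs of F(v) = \phi(v_x)\phi(v_y)\phi(v_z), with v^2 = v_x^2 + v_y^2 + v_z^2,
% and differentiate with respect to v_x at fixed v_y, v_z (using dv/dv_x = v_x/v):
\log F(v) = \log\phi(v_x) + \log\phi(v_y) + \log\phi(v_z)
\quad \Longrightarrow \quad
\frac{v_x}{v}\,\frac{F'(v)}{F(v)} = \frac{\phi'(v_x)}{\phi(v_x)}
\quad \Longrightarrow \quad
\frac{1}{v}\,\frac{F'(v)}{F(v)} = \frac{1}{v_x}\,\frac{\phi'(v_x)}{\phi(v_x)} = -2B
% A function of v alone equals a function of v_x alone, so both are constant.
% Integrating \phi'/\phi = -2B v_x then gives \phi(v_x) = A e^{-B v_x^2}.
```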
2.3.1 A History of Kinetic Theory

The name kinetic theory refers to the understanding of the properties of gases through their underlying atomic constituents. The discussion given above barely scratches the surface of this important subject.

Kinetic theory traces its origin to the work of Daniel Bernoulli in 1738. He was the first to argue that the phenomenon that we call pressure is due to the constant bombardment of tiny atoms. His calculation is straightforward. Consider a cubic box with sides of length L. Suppose that an atom travelling with velocity v_x in the x direction bounces elastically off a wall so that it returns with velocity −v_x. The particle experiences a change in momentum Δp_x = 2m v_x. Since the particle is trapped in a box, it will next hit the wall at a time Δt = 2L/v_x later. This means that the force on the wall due to this atom is

    F = \frac{\Delta p_x}{\Delta t} = \frac{m v_x^2}{L}

Summing over all the atoms which hit the wall, the force is

    F = \frac{N m \langle v_x^2 \rangle}{L}

where ⟨v_x²⟩ is the average velocity squared in the x-direction. Using the same argument as we gave in Maxwell's derivation above, we must have ⟨v_x²⟩ = ⟨v²⟩/3. Thus F = Nm⟨v²⟩/3L and the pressure, which is force per area, is given by

    p = \frac{N m \langle v^2 \rangle}{3 L^3} = \frac{N m \langle v^2 \rangle}{3 V}

If this equation is compared to the ideal gas law (which, at the time, had only experimental basis), one concludes that the phenomenon of temperature must arise from the kinetic energy of the gas. Or, more precisely, one finds the equipartition result that we derived previously: ½ m ⟨v²⟩ = (3/2) k_B T.

After Bernoulli's pioneering work, kinetic theory languished. No one really knew what to do with his observation nor how to test the underlying atomic hypothesis. Over the next century, Bernoulli's result was independently rediscovered by a number of people, all of whom were ignored by the scientific community. One of the more interesting attempts was by John Waterston, a Scottish engineer and naval instructor working for the East India Company in Bombay. Waterston was considered a crackpot. His 1843 paper was rejected by the Royal Society as "nothing but nonsense" and he wrote up his results in a self-published book with the wonderfully crackpot title "Thoughts on Mental Functions".

The results of Bernoulli and Waterston finally became accepted only after they were re-rediscovered by more established scientists, most notably Rudolf Clausius who, in 1857, extended these ideas to rotating and vibrating molecules. Soon afterwards, in 1859, Maxwell gave the derivation of the distribution of velocities that we saw above. This is often cited as the first statistical law of physics. But Maxwell was able to take things further. He used kinetic theory to derive the first genuinely new prediction of the atomic hypothesis: that the viscosity of a gas is independent of its density. Maxwell himself wrote,

    "Such a consequence of the mathematical theory is very startling and the only experiment I have met with on the subject does not seem to confirm it."

Maxwell decided to rectify the situation. With help from his wife, he spent several years constructing an experimental apparatus in his attic which was capable of providing the first accurate measurements of viscosity of gases⁵. His surprising theoretical prediction was confirmed by his own experiment.

There are many further developments in kinetic theory which we will not cover in this course. Perhaps the most important is the Boltzmann equation. This describes the evolution of a particle's probability distribution in position and momentum space as it collides with other particles. Stationary, unchanging solutions bring you back to the Maxwell-Boltzmann distribution, but the equation also provides a framework to go beyond the equilibrium description of a gas. You can read about this in the books by Reif, Kardar or many many other places.

⁵ You can see the original apparatus down the road in the corridor of the Cavendish lab. Or, if you don't fancy the walk, you can simply click here: http://www-outreach.phy.cam.ac.uk/camphy/museum/area1/exhibit1.htm

2.4 Diatomic Gas

    "I must now say something about these internal motions, because the greatest difficulty which the kinetic theory of gases has yet encountered belongs to this part of the subject."
                                                        James Clerk Maxwell, 1875

Consider a molecule that consists of two atoms in a bound state. We'll construct a very simple physicist's model of this molecule: two masses attached to a spring. As well as the translational degrees of freedom, there are two further ways in which the molecule can move:

• Rotation: the molecule can rotate rigidly about the two axes perpendicular to the axis of symmetry, with moment of inertia I. (For now, we will neglect the rotation about the axis of symmetry. It has very low moment of inertia, which will ultimately mean that it is unimportant).

• Vibration: the molecule can oscillate along the axis of symmetry.

We'll work under the assumption that the rotation and vibration modes are independent. In this case, the partition function for a single molecule factorises into the product of the translation partition function Z_trans that we have already calculated (2.5) and the rotational and vibrational contributions,

    Z_1 = Z_{trans} Z_{rot} Z_{vib}

We will now deal with Z_rot and Z_vib in turn.

Rotation

The Lagrangian for the rotational degrees of freedom is

    L_{rot} = \frac{1}{2} I \left( \dot{\theta}^2 + \sin^2\theta \, \dot{\varphi}^2 \right)    (2.17)

The conjugate momenta are therefore p_θ = ∂L_rot/∂θ̇ …
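Before evaluating Z_rot and Z_vib explicitly, note that the equipartition rule of Section 2.2.1 already fixes what the classical counting above must predict for the energy of this molecule. A rough count (our own sketch, not a result derived here; the vibration contributes two quadratic terms, kinetic plus the spring's potential):

```python
# Equipartition count for the diatomic molecule above (a hedged sketch):
# 3 translational modes, 2 rotational modes, and 2 vibrational terms
# (kinetic + spring potential), each contributing kB T / 2 per molecule.
kB = 1.380649e-23   # J / K

dof = 3 + 2 + 2
cV_per_molecule = 0.5 * dof * kB
print(cV_per_molecule / kB)        # 3.5, i.e. cV = (7/2) kB per molecule
print(0.5 * dof * kB * 300.0)      # average energy per molecule at 300 K, in J

# Measured heat capacities of diatomic gases at room temperature sit nearer
# cV = (5/2) kB per molecule -- the "greatest difficulty" of Maxwell's quote,
# resolved only by quantum mechanics freezing out the vibrational mode.
```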