Introduction

The properties of materials, and their response to external perturbations, usually depend on conditions such as composition, pressure and temperature, and on the environment the materials are in. For example, materials in the interior of planets are subjected to high pressures and high temperatures, and their response to changes in these conditions is often essential to understand how a planet behaves, e.g. how heat is transferred between different regions or how material flows.

The study of the properties of materials at finite temperature and pressure is traditionally the domain of classical thermodynamics, developed mainly in the first half of the 19th century by Clausius, Carnot, Joule, Thomson (Lord Kelvin) and others. Its laws are phenomenological, i.e. derived from observations, and are formulated in terms of a handful of macroscopic parameters such as pressure, temperature and volume.

A different approach is provided by theories based on the microscopic constituents of matter, such as kinetic theory and statistical mechanics, developed mainly by Boltzmann, Maxwell and Gibbs in the second half of the 19th century. Here one considers the individual constituents of a piece of matter, some $10^{23}$ of them, whose exact state could in principle be determined by measuring their positions and velocities at some particular time $t_0$. Solving Newton's equations of motion ($10^{23}$ of them!) then provides the state of the system at any future (and past) time. In practice this is a formidable task, and it is not even required, because observed physical properties and experimental measurements are obtained, mostly, as averages over the microscopic degrees of freedom. The main objective of statistical physics is therefore to obtain macroscopic properties as statistical quantities, by averaging over the microscopic degrees of freedom. From this point of view, quantities such as the pressure exerted by a volume of gas are, strictly speaking, not completely defined and are not constant in time; but since the number of molecules involved is so large, the relative fluctuations are extremely small and completely negligible for all practical purposes. We will see that the relative indeterminacy $\Delta \mathrm{property}/\mathrm{property}$ is of order $1/\sqrt{N}$, where $N$ is the number of particles in the system, and so of order $10^{-12}$ for macroscopic systems.

Take the pressure exerted by a gas enclosed in a volume $V$, as shown in Fig.[*]. Pressure is exerted on the gas by a movable piston. Let $m$ be the mass of the piston plus the mass of the weight; then the force acting on the piston is $F = m g$, where $g$ is the acceleration of gravity. If the area of the piston is $A$, the external pressure acting on the gas is $P = F/A$. At equilibrium this pressure is balanced by the pressure of the gas, generated by gas particles impacting the piston. The gas particles move in the box stochastically, so the number of impacts per unit time, and the resulting momentum transfer to the piston, is a fluctuating quantity, meaning that the piston is not at rest but moves up and down. These movements can be measured with very sensitive experiments, but are otherwise completely undetectable. For the most part we will not be concerned with the description of these fluctuations, and we will be content with the study of average properties.

Figure: Gas exerting pressure on a movable piston, balanced by an external force $F = m g$, where $m$ is the sum of the masses of the piston and the weight and $g$ is the acceleration of gravity.
\includegraphics[width=4cm]{figure1.pdf}
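As a rough illustration of the $1/\sqrt{N}$ argument above, the following short Python sketch (purely illustrative and not part of the formal development; the exponentially distributed 'impact strengths' are an arbitrary choice) sums $N$ independent microscopic contributions over many observation windows and compares the resulting relative fluctuation with $1/\sqrt{N}$:

\begin{verbatim}
# Illustrative sketch: the relative fluctuation of a sum of N independent
# microscopic contributions scales as 1/sqrt(N).  The random "impacts"
# stand in for the momentum transferred to the piston in a time window.
import numpy as np

rng = np.random.default_rng(0)
n_windows = 200                      # number of observation windows

for N in (10**2, 10**4, 10**6):
    totals = np.array([rng.exponential(1.0, size=N).sum()
                       for _ in range(n_windows)])
    rel = totals.std() / totals.mean()
    print(f"N = {N:8d}   fluctuation = {rel:.1e}   1/sqrt(N) = {N**-0.5:.1e}")
\end{verbatim}

For a macroscopic $N \sim 10^{23}$ the same scaling gives a relative fluctuation of order $10^{-12}$, which is the estimate quoted above.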

Amongst the macroscopic variables usually employed to characterise the state of a system, temperature plays a special role, in that it is not directly associated with mechanical properties of the system such as its volume or its shape. The idea of temperature is usually associated with the concept of heat, and in particular heat flow. Two bodies at different temperatures put in contact with each other tend to transfer heat from the hot (high temperature) body to the cold (low temperature) one. Their temperatures change in the process, and heat stops flowing once the two temperatures are equal. After that, the two bodies are in thermal equilibrium and no further spontaneous changes of temperature are possible. A standard way to measure the temperature of a body is to put it in contact with a thermometer, which is a device with a physical property that changes with temperature in a known way, for example the volume of a mass of mercury. The problem with thermometers is that their response is usually not linear, and it is therefore difficult to define a temperature scale. For example, the centigrade scale is defined by considering the freezing and the boiling point of water at a pressure of 1 atmosphere, and the degree Celsius (°C) is defined as 1/100 of the temperature difference between these two points. Although one could determine with unlimited accuracy the volume of a lump of mercury at these two points, there is no guarantee that a linear interpolation between these two volumes correctly reproduces intermediate temperatures. We will see that it is possible to avoid these difficulties and define temperature on an absolute thermodynamic scale.
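To make the role of the linearity assumption explicit, suppose (purely for illustration; the symbols $V_0$ and $V_{100}$ are introduced here and not used elsewhere) that the mercury volume is $V_0$ at the freezing point and $V_{100}$ at the boiling point. The thermometer then assigns to an intermediate volume $V$ the temperature

$\displaystyle t(V) = 100\,\frac{V - V_0}{V_{100} - V_0}$ degrees Celsius,

which is correct at the two fixed points by construction, but gives the correct intermediate temperatures only if the mercury volume really varies linearly with temperature; a different thermometric substance would, in general, define a slightly different interpolation.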

In general, a system at one particular point in time may contain all kinds of irregularities, such as temperature, density and composition gradients. These gradients tend to disappear with time, but in general it is difficult to establish how, and how long it will take, for a particular inhomogeneity to be eliminated. Each process has a characteristic time for equilibrium to be established, and for some processes these times can be extremely long. If the equilibration time is much longer than the timescale of the experiment, then it becomes difficult to establish experimentally what the equilibrium state of the system is. For example, the Earth is cooling, but at a rate that is very slow on a human time scale. There is therefore a temperature distribution in its interior that on a human timescale can be considered constant, but the system is clearly not in equilibrium. There are also sources of heat, such as the Sun and radioactive materials, meaning that the Earth is not an isolated system. For an isolated system, we shall see that it is possible to define equilibrium in connection with the second law of thermodynamics, and we will mainly be interested in systems that have reached equilibrium, more or less ignoring how they got there and how long it took. The latter is the domain of non-equilibrium thermodynamics, which will not be our concern. More generally, it is possible to discuss equilibrium also for systems that exchange quantities (such as energy, particles, or volume) with an external system, provided the union of the system and the external system can be regarded as isolated, i.e. there are no sources of energy, such as the radioactive heating mentioned above.

Once equilibrium has been reached, the description of a system is usually simple. Take a homogeneous fluid, for example: if there are no density or temperature gradients, the fluid is characterised by its mass $M$, volume $V$ and pressure $P$, and the temperature is a function of these variables:

$\displaystyle T = f(P,V,M).$ (1.1)

Equation [*] is called the equation of state (EOS) of the fluid, and can be expressed in many equivalent ways. The important point is that it relates $T$, $V$, $P$ and $M$. In general this relation is complicated and depends on the system. However, in the particular case of gases at low pressure the EOS assumes the very simple form:

$\displaystyle PV \propto M T.$ (1.2)

The idealised system for which Eq. [*] is exactly valid is called a perfect gas, or ideal gas, and the relation can be used to define a perfect gas temperature scale once we fix one point. The kelvin (K), the unit of temperature on the perfect gas scale, is defined by taking the triple point of water as the fixed point, i.e. the point at which solid, liquid and vapour coexist (see Sec. [*]), and assigning to its temperature the value $T_{tr} = 273.16$ K (the pressure of the triple point of water is $P_{tr}=0.006$ atm). Such a definition of the 'size' of the kelvin is arbitrary, of course, but has the advantage that there are almost exactly 100 K between the freezing and the boiling point of water at a pressure of 1 atm, and therefore temperature differences measured in kelvin are the same as those measured in degrees Celsius. On this scale the freezing point of water has a temperature of 273.15 K. The constant of proportionality in Eq. [*] has to be determined experimentally and is proportional to the number of gas particles. For a mole of gas, containing an Avogadro number of particles $N_A = 6.022 \times 10^{23}$ mol$^{-1}$, the gas constant is $R = 8.314$ J/(mol K) and we can write:

$\displaystyle P\mathcal{V} = R T = N_A k_{\rm B}T,$ (1.3)

where $\mathcal{V}$ is the molar volume ($22.4 \times 10^{-3}$ m$^3$ at $T = 273.15$ K and $P = 1$ atm $= 1.013 \times 10^5$ Pa) and $k_{\rm B}= 1.38 \times 10^{-23}$ J/K is the Boltzmann constant. Temperature is rarely meaningful on its own; the physically relevant quantity is often the thermal energy $k_{\rm B}T$. For a temperature of 300 K this corresponds to an energy of 0.0258 eV, where 1 eV $= 1.602 \times 10^{-19}$ J; alternatively, 1 eV corresponds to a temperature of 11605 K.
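As a quick numerical check of the figures just quoted (a back-of-the-envelope sketch using the rounded constants given in the text), one can verify the molar volume and the value of $k_{\rm B}T$ at room temperature:

\begin{verbatim}
# Back-of-the-envelope check of the numbers quoted in the text,
# using the rounded constants given above.
N_A = 6.022e23        # Avogadro number, 1/mol
k_B = 1.38e-23        # Boltzmann constant, J/K
R   = N_A * k_B       # gas constant, J/(mol K), ~8.31
eV  = 1.602e-19       # 1 electronvolt in joules

P = 101325.0          # 1 atm in Pa
T = 273.15            # K

V_molar = R * T / P   # ideal-gas molar volume, m^3
print(f"R           = {R:.2f} J/(mol K)")
print(f"V_molar     = {V_molar * 1e3:.1f} x 10^-3 m^3")   # ~22.4
print(f"k_B * 300 K = {k_B * 300 / eV:.4f} eV")           # ~0.0258
print(f"1 eV / k_B  = {eV / k_B:.0f} K")                  # ~11600
\end{verbatim}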

Note that Eqs. [*],[*] are only meaningful for $T \ge 0$, but we shall see that a more general thermodynamic definition of temperature does assign a physical meaning to negative temperatures in some cases. Loosely speaking, we shall see that temperature measures how (the logarithm of) the number of states accessible to a system changes as its energy increases. Normally this is an increasing function of the energy, but for systems whose energy is bounded from above (e.g. spin systems in a magnetic field) the number of states first increases, goes through a maximum and then decreases, and the decreasing regime formally corresponds to negative temperatures.
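Anticipating the precise statement (the symbol $\Omega(E)$ for the number of states accessible at energy $E$ is introduced here only for illustration, and will be defined properly later), the connection is

$\displaystyle \frac{1}{T} = \frac{\partial S}{\partial E}, \qquad S = k_{\rm B} \ln \Omega(E),$

so that the temperature is negative precisely in the regime where $\Omega(E)$ decreases with increasing energy.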

In the remainder of these notes we will remind the reader of statistical physics concepts that can be used to obtain the thermo-physical properties of materials. These can be computed analytically for some special systems, such as the ideal gas mentioned above, but for more general cases in which the particles interact with each other (e.g. solids, liquids, and almost any system in the Universe) one needs an approach in which these interactions are taken into account. The main purpose of these notes is to develop a formalism that allows, at least for some selected cases, the computation of the thermo-physical properties of materials using computer simulation. We will not be concerned with the details of the interactions between microscopic particles, such as energies and forces. This is the realm of quantum mechanics, to be discussed elsewhere, although we will briefly outline the popular formulation of quantum mechanics known as density functional theory (see chapter [*]). We will then show how to deploy a general method that gives access to energies and forces to compute thermodynamic properties of materials.