A student in my introductory Python class asked me why this happens. Try it yourself:

from math import sin, cos, pi
print(sin(pi/2)) # 1.0
print(cos(pi/2)) # 6.123233995736766e-17

The actual methods by which sin and cos are computed are rather complicated (I’ll return to this in future posts), and the details depend on exactly what system you are using. But if you assume that they give answers as near to correct as possible (probably not quite true in all cases, but not far off), then you can show using Taylor’s theorem that this must happen, and even figure out what the mysterious number \(6.123\times 10^{-17}\) means.

Python floats are really C doubles, so on my system at least they’re IEEE 754 64-bit double-precision floats. That means, roughly speaking, that floating point arithmetic in Python is the arithmetic of binary numbers stored to a fixed number of significant figures.
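You can check the parameters of those doubles on your own system: the sys module exposes a float_info struct that reports, among other things, the number of binary significant figures.

```python
import sys

# mant_dig is the number of binary significant figures
# (the s used below); for IEEE 754 binary64 it is 53.
print(sys.float_info.mant_dig)

# The gap between 1.0 and the next representable float is 2**-52.
print(sys.float_info.epsilon)
```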

The number you get when you type pi/2 is actually a rounded version of \(\pi/2\), so it equals \(\pi/2 + \epsilon\) where \(\epsilon\) is a very small (negative) number, around \(2^{-s}\) in magnitude where \(s\) is the number of binary significant figures. If we believe that the sin and cos functions in Python are as near to correct as possible then the output when you type sin(pi/2) should be the floating point number nearest to \(\sin(\pi/2 + \epsilon)\). We can examine this number using Taylor’s theorem with the Lagrange form of the remainder:
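We can actually measure \(\epsilon\): Decimal(pi/2) converts the stored float to its exact decimal value, so comparing it with a high-precision value of \(\pi\) (hard-coded below, to forty digits) gives the rounding error directly.

```python
from decimal import Decimal, getcontext
from math import pi

getcontext().prec = 50
# Forty digits of pi, typed in by hand for reference.
PI = Decimal("3.141592653589793238462643383279502884197")

# Decimal(pi / 2) is the *exact* value of the stored float,
# so epsilon is the rounding error in pi/2.
epsilon = Decimal(pi / 2) - PI / 2
print(epsilon)  # small and negative, around -6.1e-17 on my machine
```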

\[\sin(\pi/2+\epsilon) = \sin(\pi/2) + \frac{\cos(\pi/2)}{1!}\epsilon - \frac{\sin(\xi)}{2!}\epsilon^2\]

for some \(\xi\) between \(\pi/2+\epsilon\) and \(\pi/2\). Because \(\cos(\pi/2)=0\), this is \(1-\sin(\xi)\epsilon^2/2\). But \(\vert\sin(\xi)\epsilon^2/2\vert \leqslant 2^{-2s-1}\) which is so small that the result of \(1-\sin(\xi)\epsilon^2/2\), to \(s\) binary significant figures, will just be 1. This explains the result of sin(pi/2).
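You can see this absorption directly in Python: a correction of size around \(2^{-2s-1} = 2^{-107}\) is far below the \(2^{-53}\) gap between 1.0 and its nearest floating point neighbour below, so subtracting it changes nothing.

```python
# A correction of size 2**-107 rounds away entirely...
print(1.0 - 2.0**-107 == 1.0)  # True

# ...whereas one of size 2**-53 is just big enough to register.
print(1.0 - 2.0**-53 == 1.0)   # False
```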

For cos(pi/2) we can do a similar Taylor expansion:

\[\cos(\pi/2+\epsilon) = \cos(\pi/2) - \frac{\sin(\pi/2)}{1!}\epsilon - \frac{\cos(\pi/2)}{2!}\epsilon^2 + \frac{\sin(\xi)}{3!}\epsilon^3\]

for some \(\xi\) between \(\pi/2+\epsilon\) and \(\pi/2\). This is \(-\epsilon + \sin(\xi)\epsilon^3/6\), and again \(\vert\sin(\xi)\epsilon^3/6\vert \leqslant 2^{-3s-2}\) will be too small to affect \(-\epsilon\) to \(s\) binary significant figures, so the result of cos(pi/2) should be (the nearest floating point number to) \(-\epsilon\). You can check on a calculator (or in Python) that this is roughly plausible: the true value of \(s\) for Python floats is 53, and \(2^{-53} \approx 1.11\times 10^{-16}\) is the same order of magnitude as the value that Python outputs for cos(pi/2).
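The order-of-magnitude check takes one line of Python; the ratio of the observed value to \(2^{-53}\) comes out as a modest constant (it depends on exactly where \(\pi/2\) happens to fall between adjacent floats).

```python
from math import cos, pi

# The prediction: cos(pi/2) should be about the size of 2**-53.
predicted_scale = 2.0**-53   # about 1.11e-16
observed = cos(pi / 2)       # about 6.12e-17 on my machine

# Same order of magnitude: the ratio is neither tiny nor huge.
print(observed / predicted_scale)
```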