This post is about the do_cos(double x, double dx) function from s_sin.c, following on from my previous post on do_sin. The function computes an approximation of \(\cos(x + dx)\) when dx is so small that the floating-point addition x + dx might not be exact. do_cos is defined on lines 95-117 of the s_sin.c source code.

It is called in do_sin when the number x whose sine is to be computed is between 0.855469 and 2.426265 in absolute value. The arguments passed to do_cos are hp0-fabs(x) and hp1, where \(\mathtt{hp0}+\mathtt{hp1}\) is a double-double approximation of \(\pi/2\), so the first argument will be between -0.855469 and 0.715327. Other calls occur either directly in __cos, where the second argument is 0 and the first is between \(2^{-27}\) and 0.855469 in absolute value, or via do_sincos, which is always given the output of one of the range-reduction functions, whose output lies roughly in \((-\pi/4, \pi/4)\). Therefore the first argument to do_cos will always be in the interval \((-0.855469, 0.855469)\).

Here is a commented version of the do_cos code. I have expanded the SINCOS_TABLE_LOOKUP macro and renamed some variables for clarity. The code is simpler than that of do_sin because the output always lies between 1/2 and 1, so the exponent of the result is fixed and full precision is easier to achieve.

static double do_cos(double x, double dx) {
  mynumber u;

  if (x < 0)
    dx = -dx;

  u.d = big + fabs(x);
  // big is 0x1.8000000000000p45
  // The ulp of big is 2^{-7}, while |x| < 1. Therefore the low
  // bits of u.d will be fabs(x) rounded to 7 binary places.
  double dy = fabs(x) - (u.d - big) + dx;
  // Let y = u.d - big, i.e. fabs(x) rounded to 7 binary places.
  // Then y + dy = fabs(x) + dx; since dx was negated when x < 0
  // and cosine is even, cos(y + dy) = cos(x + dx).

  double dy_squared, s, sn, ssn, c, cs, ccs;
  dy_squared = dy * dy;
  s = dy + dy * dy_squared * (sn3 + dy_squared * sn5);
  // s is an approximation of sin(dy)
  c = dy_squared * (cs2 + dy_squared * (cs4 + dy_squared * cs6));
  // c is an approximation of 1 - cos(dy)

  int k = u.i[LOW_HALF] << 2; // u.i[LOW_HALF] multiplied by 4
  // Interpreted as an integer, u.i[LOW_HALF] is the numerator m of the
  // closest fraction y = m/128 to fabs(x).
  // k is the index of the "row" of the table __sincostab corresponding to y
  sn = __sincostab.d[k];
  ssn = __sincostab.d[k + 1]; // sn + ssn is a double-double approx of sin(y)
  cs = __sincostab.d[k + 2];
  ccs = __sincostab.d[k + 3]; // cs + ccs is a double-double approx of cos(y)
  // now cos(y+dy) = cos(y)cos(dy) - sin(y)sin(dy)
  // which is approximately (cs + ccs) * (1 - c) - (sn + ssn) * s
  // The ccs * c term is dropped, leaving
  return cs + ((ccs - s * ssn - cs * c) - sn * s);
}

The sizes of the quantities involved are as follows:

| quantity | lower bound | upper bound |
| --- | --- | --- |
| x | \(-0.855469\) | \(0.855469\) |
| dx | \(-2^{-40}\vert x\vert\) | \(2^{-40}\vert x\vert\) |
| \(\cos(x+dx)\) | \(0.656\) | \(1\) |
| dy | \(-1/256\) | \(1/256\) |
| s error | \(-2^{-61}\) | \(2^{-61}\) |
| c error | \(-2^{-69}\) | \(2^{-69}\) |
| s | \(-2^{-8}\) | \(2^{-8}\) |
| c | \(2^{-52}\) | \(2^{-19}\) |
| ssn | \(-2^{-54}\) | \(2^{-54}\) |
| ccs | \(-2^{-54}\) | \(2^{-54}\) |

where the error bounds are rough empirical estimates based on a few hundred thousand calculated values. If the bounds are right, then c * ccs is far below an ulp of the final output, which justifies dropping it. The smallest term kept is - ssn * s, which testing suggests only ever affects the final bit of the output. A whole floating-point multiplication and subtraction seems a high price to pay for that last bit; perhaps there is a cheaper way.