Catastrophic cancellation in action


In order to obtain the reference values needed for checking the results of numerical software that uses data types like double precision (DP), I like to harness three multiple-precision (MP) tools: xNumbers, Maxima, and Python's mpmath. If all the MP results agree to the last decimal digit, I trust them, because it seems very unlikely to me that these tools are mutual clones of each other. Moreover, even effects caused by routines of an OS can be identified, because mpmath as well as Maxima are available on both Linux and Windows. When saving the MP results to CSV-formatted files, all the MP numbers need to be preceded by a guard character in order to prevent the spreadsheet software, which will later be used to carry out the comparison, from silently discarding most of the decimal digits. Unfortunately, even the parsers of high-level programming languages fail astonishingly often when converting numerical strings into the values of DP variables. Therefore, I usually save not only the DP results, but their bytes in hex notation as well. In my experience, these hex values are the only reliable way to represent data meant for exchange between any two different computer environments. This is particularly useful for floating-point numbers much smaller than one, because in decimal notation a huge number of decimal digits may be necessary in order to uniquely denote a specific DP value.
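As a small sketch of this hex round trip in Python (nothing beyond the standard library; the function names are mine), the exact bytes of a DP value can be obtained and restored via the struct module:

```python
import math
import struct

def dp_to_hex(x: float) -> str:
    """Big-endian hex bytes of a DP value -- safe for data exchange."""
    return struct.pack(">d", x).hex(" ").upper()

def hex_to_dp(s: str) -> float:
    """Recover the exact DP value from its hex byte string."""
    return struct.unpack(">d", bytes.fromhex(s.replace(" ", "")))[0]

# The round trip is lossless, unlike short decimal representations:
print(dp_to_hex(math.pi / 2))   # 3F F9 21 FB 54 44 2D 18
assert hex_to_dp("3F F9 21 FB 54 44 2D 18") == math.pi / 2
```

Any two environments that exchange these sixteen hex digits will reconstruct bit-identical DP values, regardless of their decimal parsers.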

The tangent function near π/2

The tangent function has a so-called pole at π/2, i.e., with x approaching π/2 from below, tan(x) tends to infinity, and with x approaching π/2 from above, tan(x) tends to minus infinity. This behaviour contradicts the maximum-smoothness paradigm frequently used when designing an algorithm for a built-in function. Near π/2, correctly approximating the tangent function can only be achieved by using a model function that has a pole at π/2, too. Though it is not uncommon for built-in functions to use several different polynomial representations over the range they are defined on, the usage of model functions having poles is surprisingly rare.
For example, many people have realised that the tangent functions of both MS-Excel® 2002 and VBA6 lack this change of modelling paradigm in their model functions. Consequently, over some interval around the mathematically exact value of π/2, they both yield very inaccurate values for tan(x). In the next figure, the effect is demonstrated for the 49 DP values bracketing the exact value of π/2.

The maximum error is found outside the range of this figure; it occurs at pio2_DP, by which the DP value closest to π/2 is denoted here, and which is smaller than π/2 by about 6.12E-17. In case of XL2002, the maximum relative error is found to be about -4.1E-4; in case of VBA6, it is 3.3E-5. Obviously, different model functions are used for the tangent function in XL2002 and in VBA6, respectively. The relative errors of the tangent function in XL2002 are about 12.4 times larger than those in VBA6 and of opposite sign! This means XL2002 can hardly be used to try out an algorithm in a spreadsheet before encoding it in VBA6… interesting…

Some people obviously know of the trigonometric identity tan(x) = 1/tan(π/2 - x) and have tried to use it in order to obtain a more reliable value of tan(x) in case x is very close to π/2. However, applying this identity naively to a value of x near π/2 in DP leads to relative errors that are at least about 5 orders of magnitude larger than those of the flawed built-in tangent functions. Thus, defining w_DP = pio2_DP - x_DP and then using 1/tan(w_DP) as a substitute for tan(x) is a very bad idea, as can be seen from the next figure.
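The effect is easy to reproduce outside a spreadsheet. A minimal Python sketch (Python 3.9+ for math.nextafter; the variable names mirror the text, and 6.123233995736766E-17 is the DP value of π/2 - pio2_DP):

```python
import math

pio2_DP = math.pi / 2                  # nearest DP value to pi/2
x_DP = math.nextafter(pio2_DP, 0.0)    # the DP value just below pio2_DP

w_DP = pio2_DP - x_DP                  # exactly one ULP; the ~6.12E-17 error
                                       # of pio2_DP itself is NOT contained in w_DP
naive = 1.0 / math.tan(w_DP)           # naive use of tan(x) = 1/tan(pi/2 - x)

# Restoring the bits lost in the subtraction shows how far off the naive value is:
v_DP = w_DP + 6.123233995736766e-17    # pi/2 - x_DP to full DP precision
print(naive, 1.0 / math.tan(v_DP))
```

Even at a distance of one ULP from pio2_DP, the naive result is already off by roughly 28 percent, and it gets worse the closer x_DP comes to pio2_DP.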

From comparisons to results of MP software, it is known that around zero the tangent functions in XL2002 and in VBA6 yield identical values that moreover are correct to DP precision. This means there must be some odd effect that renders the difference pio2_DP - x meaningless in case x is close to pio2_DP. This effect is easily identified: it is called catastrophic cancellation. π is an irrational number that definitely cannot be represented exactly by the sparse set that all the rational DP values form within the dense set of real numbers. Mathematicians would say something like “The set comprising all the DP values is of measure zero within the set of real numbers”, by which they mean that the probability that a given real number can be exactly represented by a DP value is zero.
Imho, this points out one of FORTRAN’s really big crimes: calling a data type “real” that in fact can only represent rational numbers; moreover, it covers only a tiny subset of all the rational numbers. Nevertheless, the set of DP numbers has been cleverly designed to cover a range of numerical values that should be sufficient for almost all numerical efforts of humans. Unfortunately, the same cannot be said of the precision of the DP numbers.

The DP value nearest to π/2, pio2_DP, is found to be 3F F9 21 FB 54 44 2D 18 in big-endian DP hex notation, which corresponds to about +1.570796326794896558 in decimal notation. Theoretically, exactly the same problems should occur around pi_DP, which is represented by 40 09 21 FB 54 44 2D 18, or about +3.141592653589793116: not only does the representation error double from 6.12E-17 to 1.22E-16, but the distance between adjacent DP numbers doubles as well, because 1 < π/2 < 2 and 2 < π < 4. The next figure shows the position of π in the three floating-point formats SP, DP, and QP. All the differences are given in ULPs, an abbreviation for “unit in the last place”, which is a relative measure; the corresponding absolute measures are given in the figure as well. There is one important detail: in SP, which was the dominant floating-point format until about 1985, the value nearest to π is larger than the mathematical value, whereas in DP as well as in QP it is smaller than π. This caused much trouble when code was ported from SP to DP about thirty years ago… and fortunately will cause much less trouble when code is ported from DP to QP in the near future.
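The doubling of the spacing can be checked directly in Python (math.ulp needs Python 3.9+):

```python
import math

# The spacing of adjacent DP values doubles at each power of two:
print(math.ulp(math.pi / 2))   # 2**-52, since 1 <= pi/2 < 2
print(math.ulp(math.pi))       # 2**-51, since 2 <= pi < 4

# pio2_DP and pi_DP share the same significand bits (compare the hex
# values above), so the representation error doubles along with the spacing.
assert math.ulp(math.pi) == 2.0 * math.ulp(math.pi / 2)
```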

It is now clear why the cancellation that occurs when π/2 - x is calculated as pio2_DP - x_DP is to be called catastrophic: the closer this difference gets to zero, the more prominent the consequences of the inaccuracy in pio2_DP become. Fortunately, the cure is easy enough: simply provide all the bits lost in cancellation in order to keep the result accurate to the full precision of 53 bits. This can be achieved by defining the DP variable v_DP, which is meant to represent the DP value of π/2 - x, as follows: v_DP = [(pio2_DP - x_DP) + pio2_DPcorr]. Therein, pio2_DPcorr contains the trailing 53 bits that need to be added to pio2_DP in order to yield an approximation to π/2 with a precision of 106 bits. Any bits that are lost due to cancellation in the difference (pio2_DP - x_DP) will be filled in from pio2_DPcorr, thus maintaining a precision of 53 bits under all circumstances. Consequently, 1/tan(v_DP) does not show any systematic errors and should be used to replace tan(x) in the region where the latter is flawed, as can be seen from the next figures.
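A sketch of this cure in Python (the function name tan_near_pio2 is mine; the two constants are the ones discussed in the text, and the subtraction pio2_DP - x_DP is exact for x_DP this close to pio2_DP by Sterbenz's lemma):

```python
import math

pio2_DP = 1.5707963267948966         # 3F F9 21 FB 54 44 2D 18
pio2_DPcorr = 6.123233995736766e-17  # trailing bits of pi/2 beyond pio2_DP

def tan_near_pio2(x_DP: float) -> float:
    """Replacement for tan(x) for x very close to pi/2."""
    w = pio2_DP - x_DP       # exact: no information is lost in this subtraction
    v_DP = w + pio2_DPcorr   # fills in the bits cancelled in w
    return 1.0 / math.tan(v_DP)

# At pio2_DP itself, the correct DP result is about +1.6331239353195370E16:
print(tan_near_pio2(pio2_DP))
```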

According to my experience, the tangent function in XL2010 is still flawed around π/2, but the one in VBA7 is not. Keep in mind that mathematically unnecessary brackets can really matter in Excel, i.e., in case x_DP is very close to pio2_DP, entering something corresponding to “=(pio2_DP – x_DP)” in a cell will produce exact results, whereas “= pio2_DP – x_DP” will yield a large interval filled with zeros around x_DP = pio2_DP.

In general, it is hard to predict whether a given piece of software implements a tangent function that is precise around π/2 or not. In order to enable you to check for that, here are the two DP values bracketing π/2, together with the correct DP values of the tangent function at these values:

x_lo_DP : 3F F9 21 FB 54 44 2D 18 // +1.570796326794896558
x_hi_DP : 3F F9 21 FB 54 44 2D 19 // +1.570796326794896780
tan(x_lo_DP)_DP : 43 4D 02 96 7C 31 CD B5 // +16331239353195370.0
tan(x_hi_DP)_DP : C3 36 17 A1 54 94 76 7A // -6218431163823738.0

In case you would like to implement your own tangent function using the helper variable v_DP as defined above, you need to subtract your x from the value of x_lo_DP given above; the DP value that needs to be added to that result in order to maintain the full precision of 53 bits is:

pio2corr : 3C 91 A6 26 33 14 5C 07 // +6.123233995736766036E-17
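Putting the published bytes to work, here is a small self-check in Python (it only reports; whether the last line prints True twice depends on the quality of your platform's tangent function):

```python
import math
import struct

def from_hex(s: str) -> float:
    """Exact DP value from big-endian hex bytes."""
    return struct.unpack(">d", bytes.fromhex(s))[0]

x_lo = from_hex("3FF921FB54442D18")     # +1.570796326794896558
x_hi = from_hex("3FF921FB54442D19")     # +1.570796326794896780
ref_lo = from_hex("434D02967C31CDB5")   # +16331239353195370.0
ref_hi = from_hex("C33617A15494767A")   # -6218431163823738.0

# Does the built-in tangent reproduce the reference values exactly?
print(math.tan(x_lo) == ref_lo, math.tan(x_hi) == ref_hi)
```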

The level of precision of such pairs of DP values is often referred to by the term “double-double” (DD). In this context, replacing a single line of code in DP precision by its DD equivalent made the algorithm work properly. As long as the IEEE QP floating-point data type is not widely available in numerical software, using DD libraries may solve many problems that are caused by catastrophic cancellation in 53 bits. Depending on whether the programming language used to implement such a library makes use of the extended-precision internal registers of modern CPUs or not, some care must be taken to make it work properly. Imho, VBA does not use them, but modern versions of FORTRAN, C, Python, etc. do. In the latter case, the line v = (pio2_DP - x_DP) + pio2_DPcorr would need to be modified in order to ensure that the difference (pio2_DP - x_DP) is rounded to DP precision before it is used in the subsequent addition: it has to be pulled out of the CPU register and written to memory, from where it is then re-read.

Obviously, the procedure of partial enhancement of precision described in this post has a variety of applications. One of the most important examples where it needs to be implemented is the so-called argument reduction that is frequently applied in all subjects dealing with periodic processes.
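To illustrate, here is a minimal two-constant reduction modulo π/2 in the style of Cody and Waite, sketched in Python under strong assumptions (the function name is mine; a real implementation splits π/2 into more pieces, and for very large x a Payne–Hanek-style reduction is needed instead):

```python
import math

pio2_hi = 1.5707963267948966     # leading 53 bits of pi/2
pio2_lo = 6.123233995736766e-17  # next 53 bits of pi/2

def reduce_pio2(x: float):
    """Reduce x modulo pi/2 in two pieces; valid only for moderate |x|,
    where n * pio2_hi is computed essentially without rounding error."""
    n = round(x / pio2_hi)
    r = (x - n * pio2_hi) - n * pio2_lo  # the cancelled bits are restored
    return n, r

n, r = reduce_pio2(3.0)
print(n, r)   # n = 2, r approximately 3 - pi
```

The second subtraction plays exactly the role of pio2_DPcorr above: it re-supplies the bits of π/2 that the first, heavily cancelling subtraction could not see.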

