Archive for the Software tools Category

Oops, XL did it again

Posted in Computer, Microsoft Excel, multi-precision calculations, Software, Software tools, Spreadsheet, xNumbers on June 22, 2014 by giorgiomcmlx

I really do love software that adheres to the "Law of Least Astonishment" (LoLA; please see §4.1). Unfortunately, XL definitely does not belong to this group. Quite recently, I came across a violation of the LoLA by XL that is caused by the inconsistent manner in which functions evaluate numerical cells. MS have stated relatively clearly that the binary double-precision floating-point format (DP) is used internally to store numerical cell values. On the other hand, in the view layer of XL's model-view-controller (MVC) architecture, i.e. the cell as displayed, a decimal string comprising at maximum 15 digits is used to present the numerical value of a cell to humans. To my great astonishment, it has turned out that different functions belonging to the same category in the official list of functions of XL2013 may operate on either of these two very different representations of numerical values. Of course, this can have severe consequences: a precision of 15 decimal digits roughly corresponds to a precision of only 50 bits, whereas the 53 bits of precision of the IEEE 754 DP format roughly correspond to 16 decimal digits.

I suppose virtually nobody who regularly works with numbers will expect a cell formula like "= (D5 - FLOOR.PRECISE(D5;1))" to return a negative value in case the value of D5 is positive. In the official description of this function, MS state: "Returns a number that is rounded down to the nearest integer or to the nearest multiple of significance. Regardless of the sign of the number, the number is rounded down." Moreover, the function FLOOR.PRECISE is listed in the "math and trigonometry" section of the aforementioned list, so that IMHO people will expect it to be as accurate as, for example, the functions SQRT and SIN. Yet look what happens if the value of D5 is close to, but still smaller than, a positive integer. I'll demonstrate this for the integer 12 (twelve), which in DP big-endian byte order equals 40 28 00 00 00 00 00 00.

The hex representation of a numerical cell value as well as its equivalent exact decimal string were obtained by harnessing the commands "= Dbl2Hex(CellRef)" and "= DCStr(CellRef;100)", respectively, which require an appropriate version of the FOSS add-in xNumbers to be installed in XL.

The largest DP value smaller than 12 for which the XL2013x64 functions FLOOR, FLOOR.MATH, FLOOR.PRECISE, INT, ROUNDDOWN, and TRUNC return the mathematically correct result of 11 (eleven) has the hex notation 40 27 FF FF FF FF FF E3. For all the other 28 DP values, from 40 27 FF FF FF FF FF E4 up to 40 27 FF FF FF FF FF FF, the wrong value of 12 (twelve) is returned by all of these functions. For exactly these 28 values, the results of the formula "= (CellRef - FLOOR.PRECISE(CellRef;1))" turn out to be negative: about -4.97E-14 is returned by XL for the input corresponding to 40 27 FF FF FF FF FF E4, and about -1.776E-15 for 40 27 FF FF FF FF FF FF. If the mathematically unnecessary outer "magic parentheses" are omitted from the formula, XL tries to hide the dirty little secrets of its floating-point core from you by artificially zeroing the results for all values from 40 27 FF FF FF FF FF F9 to 40 27 FF FF FF FF FF FF. IMHO, both of these undocumented effects clearly exclude XL from the set of numerical software that might be considered for any serious computation (VBA is different!).

The observed results are most easily explained by assuming that all these functions do not use the binary DP representation of the cell content to calculate their results, but instead operate on the 15-digit decimal string in the MVC. At least we have:
40 27 FF FF FF FF FF E3 hex =
XL: 1.19999999999999E+01
xN: 1.199999999999994848565…E+01
40 27 FF FF FF FF FF E4 hex =
XL: 1.20000000000000E+01
xN: 1.19999999999999502620…E+01
From the given xN values it can be seen that XL2013x64 correctly rounds the binary DP values to strings comprising 15 decimal digits, and that the functions in question obviously use this decimal string representation to calculate their results. Exactly the same kind of inaccurate results is delivered by the functions that are meant to round up, i.e. CEILING, CEILING.MATH, CEILING.PRECISE (and even ISO.CEILING), and ROUNDUP. In this example, all 28 results corresponding to input values from 40 28 00 00 00 00 00 01 to 40 28 00 00 00 00 00 1C are flawed. Moreover, the functions ROUND and MROUND are subject to the same kind of flaw if their arguments are close to values at which their result should change.
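Assuming that the 15-digit display string really is what these functions consume, the flaw can be reproduced outside of XL; here is a minimal Python sketch, in which the %.15g rounding stands in for XL's 15-digit cell representation:

```python
import math

ulp12 = 2.0**-49             # spacing of DP values just below 12
x = 12.0 - 28*ulp12          # hex 40 27 FF FF FF FF FF E4

# round to the 15 significant decimal digits shown in a cell
x15 = float(f"{x:.15g}")     # yields exactly 12.0

print(math.floor(x))         # 11: correct, uses all 53 bits
print(math.floor(x15))       # 12: what FLOOR.PRECISE returns
print(x - math.floor(x15))   # about -4.97E-14: a negative "remainder"
```

The last line reproduces the negative result of "= (CellRef - FLOOR.PRECISE(CellRef;1))" exactly, because the subtraction x − 12 is error-free by the Lemma of Sterbenz.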

On the other hand, the functions EVEN, MOD, ODD, and QUOTIENT deliver results that accurately account for the least bit of their arguments. Thus, in order to define an accurate version of the mathematical floor function, you might want to use = QUOTIENT(CellRef;1) + IF(MOD(CellRef;1)=0;0;(SIGN(CellRef)-1)/2), whereas for an accurate version of the mathematical ceiling function, you might want to use = QUOTIENT(CellRef;1) + 1 + IF(MOD(CellRef;1)=0;-1;(SIGN(CellRef)-1)/2). Or you break free and look for another programming language that is numerically much more comprehensive than VBA, but can be connected to XL relatively easily. At least, that is what I shall do from now on.
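The logic of these two workaround formulas can be mirrored in Python (math.trunc plays the role of QUOTIENT(x;1), the MOD(x;1)=0 test simply checks for integrality, and the function names are mine):

```python
import math

def accurate_floor(x):
    # = QUOTIENT(x;1) + IF(MOD(x;1)=0; 0; (SIGN(x)-1)/2)
    q = math.trunc(x)                 # QUOTIENT truncates towards zero
    return q if x == q else q + (0 if x > 0 else -1)

def accurate_ceiling(x):
    # = QUOTIENT(x;1) + 1 + IF(MOD(x;1)=0; -1; (SIGN(x)-1)/2)
    q = math.trunc(x)
    return q if x == q else q + (1 if x > 0 else 0)

# the least bit of the argument is honoured:
print(accurate_floor(12.0 - 28*2.0**-49))   # 11, unlike XL's FLOOR.PRECISE
print(accurate_ceiling(-2.5))               # -2
```

For non-integers, truncation plus the sign correction reproduces floor and ceiling exactly, because no decimal intermediate representation is involved.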

VBA speed in XL2013x64 on Win7x64

Posted in Computer, Microsoft Excel, Software, Software tools, VBA, xNumbers on April 13, 2014 by giorgiomcmlx

When playing with my newly acquired XL2013x64 and xNumbers on an i7-16GB-RAM PC running Win7x64prof, I experienced a dramatic loss of speed compared to what I used to observe with XL2010x64 on my old i5-4GB-RAM PC running an identical OS. Some searching on the Web showed that this phenomenon is widespread, see e.g. here.

For example, when copying the formula =xSqr(xAdd(xMult("2";B4;200);"";200);200) (in cell C4; B4:B105 contain 1,2,3,…,101) down a hundred rows over existing data in the range C5:C105, it takes an unbelievable 7 seconds before the results appear. Moreover, this time span is nearly independent of the multi-threading settings. In contrast, erasing all the data in cells C5:C105 manually before copying down the formula makes the results appear instantly. This phenomenon is reproducible, and it does not depend sensitively on the kind of formula, at least on my PCs.

So quite obviously, writing to a non-empty cell is the primary cause of the observed speed penalty of VBA in XL2013. At the moment, my guess is that XL2013 saves any content it finds in a cell that is about to be overwritten, due to some built-in web-collaboration features. I'll have to figure out how to switch them off in case my guess is correct.

cos in XL14 or how not to do argument reduction

Posted in Computer, Microsoft Excel, multi-precision calculations, Programming Languages, Software, Software tools, Spreadsheet, Uncategorized, VBA, xNumbers on March 12, 2014 by giorgiomcmlx

While playing around with formulae in the area of computational geometry, I've noticed a severe flaw in the way XL14 implements the cosine of numbers close to π/2. Let XL14 and VBA7 denote results obtained from calculations in XL14 and VBA7, respectively. All final results are double-precision floating-point numbers, each uniquely characterised by its hex notation, which can be obtained via the "dbl2hex" function of the XL add-in xNumbers. The "dcstr" function of this add-in yields the complete sequence of decimal figures corresponding to any binary floating-point number. XN60 refers to multi-precision results returned by xNumbers version 6.0.5. The letter c indicates cosine values. For example, I found:

x_XL14_wks = 1.57079632679461E+00
x_XL14_dec = 1.57079632679460701183415949344635009765625
x_XL14_hex = 3F F9 21 FB 54 44 28 00
c_XL14_wks = 2.89607422244986E-13
c_XL14_dec = 2.89607422244986256743004560121335089206695556640625E-13
c_XL14_hex = 3D 54 61 1A 80 00 00 00
c_XL14_err = 496815802 ULPs
c_VBA7_wks = 2.89607397162198E-13
c_VBA7_dec = 2.89607397162198205938334649267199577700980517303008809903985820710659027099609375E-13
c_VBA7_hex = 3D 54 61 1A 62 63 31 46
c_VBA7_err = 0 ULPs
c_XN60_dec = 2.89607397162198193401344438286352963771616699886035390348378210024734360417199…E-13

Applying the "x2dbl3" function of xNumbers, which implements the to-nearest-ties-to-even rounding scheme, to c_XN60_dec yields a result identical to that obtained in VBA7 directly. So I had to find a reasonable explanation for the really huge error of the XL14 cosine function around π/2.

A first clue came from the many zeroes at the end of the hex representation of c_XL14, which usually indicate heavy cancellation. Yet, there was no explicit subtraction involved, and so a hidden subtraction had to be responsible. The most likely source of such a hidden subtraction is a procedure called argument reduction. Nowadays, most complicated functions are calculated by an algebraically simple approximation that is valid over only a small range of all possible input parameters, so that parameter values not covered by the range of such an approximation have to be mapped into it using some kind of relation. In case of the cosine function, there is the mathematical identity cos(x) = sin(π/2−x), which may be used for π/4 ≤ x ≤ π/2 in order to map some range of cosine input parameters to the input range 0 ≤ x ≤ π/4 of the sine function.

A first try did not explain the result:

pio2hi_XL14_wks = 1.57079632679490E+00
pio2hi_XL14_dec = 1.5707963267948965579989817342720925807952880859375
pio2hi_XN60_dec = 1.57079632679489661923132169163975144209858469968755291…
pio2hi_XL14_hex = 3F F9 21 FB 54 44 2D 18
pio2lo_XL14_wks = 6.12323399573677E-17
pio2lo_XL14_dec = 6.12323399573676603586882014729198302312846062338790031…E-17
pio2lo_XN60_dec = 6.12323399573676588613032966137500529104874722961539082…E-17
pio2lo_XL14_hex = 3C 91 A6 26 33 14 5C 07

The difference between x_XL14_wks and pio2hi_XL14 is only about 3E-13, so that the sine of the mapped argument is exactly equal to the mapped argument. The simplest reduction formula, (pio2hi − x_XL14), yields a result (hex: 3D 54 60 00 00 00 00 00) that is inaccurate and does not agree with XL14 either. The more complicated reduction formula, ((pio2hi − x_XL14) + pio2lo), yields the correct result (hex: 3D 54 61 1A 62 63 31 46), but of course does not agree with XL14 either. So for a last try I used my recent finding that XL14 harnesses the extDP format when dealing with two numbers (unfortunately, its formula parser does not support inlining, as I will explain in a forthcoming post). The binary values of both π/2 and x_XL14 are given by (V: border between half-bytes, X: end of DP mantissa, Y: end of extDP mantissa):
[bit-pattern diagrams, shown as images in the original post: the first 9 half-bytes of π/2 and of x_XL14, which cancel out completely; the trailing 4 half-bytes of π/2 plus its extDP section, which get rounded up to extDP; and the subtraction of the trailing 4 half-bytes of x_XL14 from that rounded value]

Due to the cancellation of many leading bits, the border indicators are misplaced now, but the bit sequence exactly matches that of the result obtained in XL14:

             4  6  1  1  A  8  0…
c_XL14_hex = 3D 54 61 1A 80 00 00 00
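The two competing reduction formulas can be replayed in plain DP arithmetic, e.g. in Python (which, like VBA7 here, computes in strict DP); the hex constants are the ones listed above:

```python
import math

x      = float.fromhex('0x1.921fb5444280p+0')    # 3F F9 21 FB 54 44 28 00
pio2hi = float.fromhex('0x1.921fb54442d18p+0')   # DP part of pi/2
pio2lo = float.fromhex('0x1.1a62633145c07p-54')  # low part of pi/2

naive = pio2hi - x             # exact subtraction, but the low bits of pi/2 are missing
dd    = (pio2hi - x) + pio2lo  # double-double reduction restores them

print(naive.hex())  # 0x1.4600000000000p-42 = 3D 54 60 00 00 00 00 00
print(dd.hex())     # 0x1.4611a62633146p-42 = 3D 54 61 1A 62 63 31 46
print(math.cos(x))  # agrees with dd, since sin(t) equals t to DP precision here
```

Note that naive is computed without any rounding error; it is nevertheless useless on its own, because the information about the low part of π/2 never entered the subtraction.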

This works for all examples I’ve checked. Summarising, the cosine function of XL14 is severely flawed around π/2 (at least). "Forensic doublelogy" indicates that this flaw is caused by an argument reduction process performed using the 64 mantissa bits of the extDP floating-point format, whereas the obviously exploited mathematical identity required a full double-double approach. Fortunately, no flaws around π/2 were found in the cosine function of VBA7. The following graph illustrates the situation.

[graph of errors of the cosine in XL14 around π/2]

The value of pio2_extDP is slightly larger than the mathematical value of π/2 (by contrast, the DP value pio2hi is too small, as can be seen from the fact that pio2lo is positive!). Thus, in combination with the structure of the exploited mathematical identity, the errors of the cosine in XL14 around π/2 are not random, but show a strong systematic component: all the cosine values for x just below π/2 are too large, and all the cosine values for x just above π/2 are too small.

To SSE, or not to SSE, that is the question

Posted in binary, decimal, gnumeric, Maxima, Microsoft Excel, multi-precision calculations, Number processing, Number systems, Software tools, Spreadsheet, VBA, xNumbers on February 25, 2014 by giorgiomcmlx

Some time ago, I discovered that in a thirty-year-old floating-point (FP) library the value of the mathematical expression 1 − x² was determined by calculating 2·(1−x) − (1−x)² for 0.5 ≤ x ≤ 1. At first glance, this method appeared rather peculiar to me, but then I realised that the Lemma of Sterbenz (see, e.g., page 45 of this slide collection on FP calculations) guarantees that 1−x is calculated in FP without any round-off error in the specified range. The multiplication by 2 does not introduce errors either. Yet, there are still two sources of errors: firstly, in case (1−x) has more than 26 significant bits, (1−x)² is not exactly representable in FP, and secondly, finally taking the difference might cause a round-off error as well.

In my tests though, this old formula outperformed all other versions on average, and so I shared it with developer friends. Surprisingly, some of them responded with a statement like "The results produced by this formula do not differ from those obtained using (1-x)·(1+x)." In the end, the reason for this paradox could be identified: all my tests were executed on SSE registers, calculations on which strictly adhere to the double-precision floating-point format, whereas the calculations not showing any improvement had harnessed the intermediate extended double-precision format of the x87 stack. Modern x87-compatible FPUs provide both of these two very different kinds of intermediate FP number storage.
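The Sterbenz property the old trick relies on is easy to check in CPython, whose floats are plain DP (i.e. SSE-style) values; here is a sketch with exact rational arithmetic as reference and function names of my own choosing:

```python
import random
from fractions import Fraction

def one_minus_x_squared_old(x):
    t = 1.0 - x            # exact for 0.5 <= x <= 1 (Lemma of Sterbenz)
    return 2.0*t - t*t     # only t*t and the final difference may round

def one_minus_x_squared_naive(x):
    return (1.0 - x) * (1.0 + x)   # here 1+x rounds before the product does

random.seed(42)
for _ in range(1000):
    x = random.uniform(0.5, 1.0)
    # Sterbenz: the subtraction 1-x is exact in DP
    assert Fraction(1.0 - x) == 1 - Fraction(x)
    exact = 1 - Fraction(x)**2
    for f in (one_minus_x_squared_old, one_minus_x_squared_naive):
        assert abs(Fraction(f(x)) - exact) <= Fraction(1, 10**15) * exact
```

This only verifies that both variants stay within a small relative error in pure DP; which one wins on average is exactly the question that depends on whether the intermediates live in SSE registers or on the x87 stack.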

The SSE were invented in favour of fast parallel data processing, whereas the x87 extended-double stack provides both 11 additional mantissa bits and 4 additional exponent bits to intermediate calculations, in order to maintain an overall accuracy of 53 bits and to reduce the probability of intermediate under-/over-flows. The decision whether a piece of software uses the x87 or the SSE instruction set has to be made at compile time, because the generated byte code differs. By contrast, both the rounding-mode and the exception-handling properties may be influenced at runtime by changing the so-called floating-point control word (FPUCW). I could not find an article on Wikipedia, so please see e.g. this collection of, or this introduction to, the FPUCW world. I have also come across these imho nice summaries written from a C programmer's point of view: The pitfalls of verifying floating-point computations as well as SciLab is not naive. Unfortunately, FPUCW values having an identical meaning differ across platforms. For example, an MS-FPUCW of hex 8001F corresponds to a GNU-GCC-FPUCW value of hex 37F …  Fortunately, the ability to change the rounding mode at runtime enables programmers to harness interval arithmetic in order to compute reliable FP error estimates of their code, for both the x87 and SSE instruction sets. The rounding mode can be chosen from the set UP, DOWN, CHOP, and TO NEAREST – TIES TO EVEN, the latter usually being the default. Only if the x87 instruction set has been selected at compile time may the precision of the calculations be changed at runtime as well.

The 11 additional significand bits of the x87 extDP format correspond to roughly 3 decimal figures, and thus are a valuable tool for minimising the effects of cancellation in intermediate calculations. The 4 additional exponent bits allow for an intermediate product of up to 16 huge (tiny) doubles without causing an over- (under-) flow. Yet, harnessing the extDP format bears one dirty little secret: when the result of an intermediate calculation is transferred from the x87 stack, firstly into an extDP x87 register and finally into DP main memory, the intermediate value is subject to a double-rounding procedure. In combination with the standard rounding mode "TO NEAREST – TIES TO EVEN", this can lead to a slightly wrong result in two situations: an intermediate calculation rounds up/down to an extDP value representing an exact DP tie, which is then again rounded up/down to a DP value. The following table gives an example:

decimal number             DP hex            comment
+18446744073709551616.0    43F0000000000000  exact DP Num1
+6143.0                    40B7FF0000000000  exact DP Num2
+18446744073709557759.0                      Num1 + Num2, not representable in DP
+18446744073709555712.0    43F0000000000001  DP_lo: nearest DP
+18446744073709559808.0    43F0000000000002  DP_hi: next larger DP
-2047.0                                      error of DP_lo
+2049.0                                      error of DP_hi

When adding Num1 to Num2 on the x87 extDP stack, the hex representation of the final DP sum in memory will be 43F0000000000002 instead of the closest DP 43F0000000000001. This can be understood by analysing the binary representation of the numbers involved. In order to clearly show the rounding effects, some separators are used: a "V" indicates the border between any two DP half-bytes, each of which corresponds to a single character in the hex representation, an "X" shows where the DP bits end and the extDP bits begin, and finally a "Y" marks the end of the extDP bit sequence.

[bit-level diagrams, shown as images in the original post: Num1 + Num2 in binary; the same with the V/X/Y separators; the 1st rounding step, (up) to extDP; and the 2nd rounding step, the resulting DP tie being rounded (up) to even]

In this example, the addition creates a bit sequence, the mantissa of which exceeds the 64 bits of the extDP format. Thus, in a first step, the result is rounded to 64 bits. These extDP bits happen to represent a so-called tie (a number exactly half-way between the two adjacent DP numbers) of the DP format. In this example, the tie-breaking rule TO EVEN finally causes another rounding up procedure, so that the value stored in memory does not represent the DP number closest to the mathematical result of the summation, but the next larger one. Yet, it is still one of the two DP numbers bracketing the true value.
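The two-step rounding can be simulated with exact integer arithmetic; in the following Python sketch, the helper round_to_bits is hypothetical (not a library function) and implements round-to-nearest, ties-to-even at a given significand width:

```python
def round_to_bits(n, bits):
    """Round a positive integer to `bits` significant bits, ties to even."""
    shift = n.bit_length() - bits
    if shift <= 0:
        return n
    q, r = divmod(n, 1 << shift)
    half = 1 << (shift - 1)
    if r > half or (r == half and q & 1):   # ties go to the even q
        q += 1
    return q << shift

num1 = 2**64    # 43F0000000000000
num2 = 6143     # 40B7FF0000000000
exact = num1 + num2

dp_sse = float(exact)                      # one rounding, SSE-style
dp_x87 = float(round_to_bits(exact, 64))   # extDP first, then DP

print(dp_sse.hex())  # 0x1.0000000000001p+64 = 43F0000000000001 (DP_lo)
print(dp_x87.hex())  # 0x1.0000000000002p+64 = 43F0000000000002 (DP_hi)
```

The intermediate round_to_bits(exact, 64) lands exactly on the DP tie 2^64 + 6144, which the second rounding then resolves upwards to DP_hi.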

Only two bit sequences among the 2^13 possible ones lead to this kind of error (X and Y mark the ends of the DP and extDP mantissas, as above):

leading bits 1 X 0 1111111111 Y 1 trailing bits   (erroneously rounded up twice)
leading bits 0 X 1 0000000000 Y 0 trailing bits   (erroneously rounded down twice)

so that the probability of this error should be expected to be close to 1/2^12 = 1/4096 ≈ 2.44E-4. Of course, if the bit sequence of an intermediate result does not exceed the 64 bits of the extDP format, there will be only a single rounding step yielding a correct result. When analysing the results of 5.75E6 renormalisation steps of double-double numbers in VBA7, the sums of which were determined to exceed the 64-bit extDP limit, and which were introduced here at the end of this post, I found 1480 errors and thus a rate of about 2.57E-4. Due to the relatively large sample size, I guess the deviation from the theoretical value is caused by non-randomness of the bit sequences involved.

Double rounding is unavoidable when using the extDP registers of the x87 FPU, but it only slightly increases the representation error of about 0.5 ULPs, by at maximum 0.05 %. Moreover, this only happens with a probability of about 1/4096. Thus, the disadvantage of rarely occurring false double rounding is usually more than counterbalanced by the gain of 11 bits of intermediate precision in every calculation. However, there is a special kind of multi-precision software that cannot be run on the x87 stack due to the double rounding, namely all the double-double as well as quad-double software that uses chained DP variables in order to store multi-precision values (see, e.g., this implementation). The double rounding on the x87 stack can violate the basic assumption of this kind of software, i.e., that the leading DP value always represents the DP value closest to the multi-precision value stored in a chain of doubles. Therefore, arbitrary-precision tools that are determined to use the x87 stack, for example Xnumbers, need to harness completely different algorithms for internal number storage. The numerical trick using a double-double representation of π/2 introduced in the aforementioned post works reliably, because the firstly calculated intermediate difference does not involve more than 64 significant bits.

The FP instruction set actually used by third-party numerical software is often unknown and needs to be determined by using test cases. In order to avoid problems with the digit-chopping feature of most spreadsheet software, examples need to be created that use exact DP values corresponding to fewer than 14 significant decimal digits. The previously given example can be reformulated as the sum of +1.8446744E+19 ≡ 43EFFFFFFDDAD230 and +73709557759 ≡ 4231296E97FF0000. In order to check the DP result for correctness without any debugging capabilities (like those provided by Xnumbers for MS XL), the two DP numbers that were summed up need to be successively subtracted from the sum, the bigger one first and the smaller one afterwards. It is important to ensure that all intermediate differences are rounded to DP, which is most easily achieved by assigning them to different DP variables, or to different cells in a spreadsheet. Contrary to the initial addition, the two final subtractions do not introduce any FP error, because they are subject to the Lemma of Sterbenz. A final result of -2047 indicates calculations that have not been executed on the x87, whereas a value of +2049 indicates the opposite. Here are my PY3 and CAS-Maxima scripts:

Python 3:
num1 = +1.8446744E+19
num2 = +73709557759.0
sum12 = (num1+num2)
sum12sub1 = (sum12-num1)
sum12sub1sub2 = (sum12sub1-num2)

CAS-Maxima:
+1.8446744E+19;
+73709557759.0;
%o1+%o2;
%-%o1;
%-%o2;

The following table summarises the results I found for some spreadsheet and scripting software running on MS WIN7x64prof. On Linux, all of the SW listed in the table that is also available on that OS returns -2047.

spreadsheet SW     result     scripting SW    result
GNUmeric 1.12.9    +2049      PY2.7.3×64      -2047
GScalc 11.4.2      -2047      PY3.2.3×64      -2047
KSspread 9.1.0     +2049      R2.15.2×64      -2047
LOcalc             -2047      R3.0.0×32       +2049
LT SYM 3.0.1       -2047      R3.0.0×64       -2047
MS XL14/VBA7       +2049      FreeMat 4.2     -2047
SM PM 2012         -2047      Maxima x32      +2049

As you can see from the table, the situation is completely messed up. It seems as if on the Win32 platform the x87 with its extDP format is used by default, whereas on the Win64 platform the SSE part of the FPU is involved. Of course, any software can override the default setting, and MS XL64 as well as VBA7 indeed do so. Generally speaking, specifying the FPUCW as well as the kind of FPU at compile time is a very good idea in case of numerical libraries that were developed and optimised for only one of the two scenarios. In case these two settings are omitted and the unknown defaults are used at runtime, small deviations are to be expected only if the software was designed not to harness the extDP precision of the x87 stack. Otherwise, the SSE, lacking the additional 11 bits of precision, will very likely cause things like polynomial approximations to special functions to fail at runtime. In particular, source code of libraries originally designed for the x87 must not be re-compiled for SSE without closely inspecting the results. Libraries that were built before the advent of the x87 can be a very valuable source of inspiration for changes in the code that are likely to become necessary.

How to call a MSVCpp2010x64-DLL from XL14x64

Posted in C++, Compiler, Computer, Microsoft Excel, Programming Languages, Software, Software tools, Spreadsheet, VBA on December 20, 2012 by giorgiomcmlx

Belonging to the group of people that sociologists call "early adopters" is usually a burden with respect to newly invented technologies, and unfortunately, I cannot avoid this situation all of the time… Fortunately, nowadays it is possible to search for suitable keywords on the Internet, and quite often this strategy is much more effective at solving a problem than reading through manuals. Only rarely will the result of such a search be a thick searchable manual that covers all aspects one is interested in. A recent example of this situation occurred when I wanted to call functions in a self-compiled C++ DLL from VBA7, i.e., the macro language that comes with the 64-bit version of Microsoft Office 2010.

The first thing I realised was that it is impossible to call a 32-bit DLL from a 64-bit application directly (and vice versa). Then it turned out that the so-called Express edition of Visual Studio 2010 does not contain 64-bit versions of the compilers. Therefore, I had to follow the advice provided by MS and install the Windows SDK version 7.1 in order to enable building x64 DLLs. Unfortunately, I got this working only after having previously uninstalled all of the VS2010Express stuff present on my PC, because otherwise the installation of the WinSDK7.1 failed at an early stage (maybe due to some updates…). After having re-installed all the desired languages in VS2010Express, and afterwards the WinSDK7.1 as well, I was happy to find VCpp2010Exp working as expected.

VCpp2010Exp provides a project template meant for creating DLLs: select file → new → project → Win32 → Win32-project, enter a name, and click OK. On the following welcome screen, do not click the finish button but the continue button. A properties window will then be displayed in which you select the DLL radio button and check the box labelled "export symbols" as well. Clicking the finish button will make VCpp2010Exp create a DLL project template that contains two files: the source code in *.cpp, and the header information in *.h; you'll need to edit both of them later. Now the target platform has to be switched from Win32 to x64 in two steps. Firstly, select project → properties, change the content of the configuration button to "all configurations", then select configuration properties → general in the list displayed on the left, change the entry for platform toolset in the general section of the list displayed on the right from v100 (or v90 or …) to Windows7.1SDK, and click the apply button. Secondly, after VCpp2010Exp has finished applying this change, click the configuration manager button and set the x64 platform (it does not matter whether the debug or release configuration is active) by clicking on platform, selecting new, changing the entry of new platform to x64, clicking the OK button, and finally clicking the close button of the configuration manager.

I decided to start with a very simple x64 DLL that only provides two different functions, i.e., a function of type integer (in VBA7, this corresponds to a function of type long), and another function of type void (corresponding to a sub in VBA7). Both functions calculate the product of two numbers provided as input variables; the integer function returns the result as its own value, whereas the void function returns it as value of a third variable, which consequently must be transferred by reference (in the declaration, an ampersand has to be noted directly in front of the variable’s name in Cpp, and the ByRef keyword in VBA7).

The C++ file named DLL_Test.cpp thus reads:

#include "stdafx.h"
#include "DLL_Test.h"

DLL_TEST_API int DLL_Mult_FNC(int a, int b)
{
    return a*b;
}

DLL_TEST_API void DLL_Mult_SUB(int a, int b, int &c)
{
    c = a*b;
}

The meaning of DLL_TEST_API is defined in the header file DLL_Test.h together with some other very important settings:

#ifdef DLL_TEST_EXPORTS
#define DLL_TEST_API __declspec(dllexport)
#else
#define DLL_TEST_API __declspec(dllimport)
#endif

extern "C"
{
    DLL_TEST_API int DLL_Mult_FNC(int a, int b);
    DLL_TEST_API void DLL_Mult_SUB(int a, int b, int &c);
}

The block of macro-defining statements at the beginning has been added by VCpp2010Exp automatically, and saves a bit of typing if many functions are to be exported. The magic extern "C" block around the function declarations has been added manually after finding out that otherwise the names of the exported functions are so badly mangled that they cannot be called from VBA7. Imho, the tool Dependency Walker is quite valuable in such a situation. The hint that name mangling could be the reason for my DLL functions not yielding any result on the first try was mentioned in a post on Stack Overflow, in which user arsane posted a link to a very comprehensive survey of calling conventions.

In VBA7, the functions need to be declared in a standard module; the name of a function in VBA7, which directly follows the function or sub keywords, has to be different from the name exported by the (C++) DLL, which has to be given directly after the Alias keyword.

Option Explicit

Declare PtrSafe Sub SUB_AmultB Lib "insert-correct-path-to-dll-here\DLL_Test.dll" Alias "DLL_Mult_SUB" (ByVal ia&, ByVal ib&, ByRef ic&)
Declare PtrSafe Function FNC_AmultB& Lib "insert-correct-path-to-dll-here\DLL_Test.dll" Alias "DLL_Mult_FNC" (ByVal ia&, ByVal ib&)

Function WS_DLL_Mult_SUB&(ByVal ia&, ByVal ib&)
Dim ic&
Call SUB_AmultB(ia, ib, ic)
WS_DLL_Mult_SUB = ic
End Function

Function WS_DLL_Mult_FNC&(ByVal ia&, ByVal ib&)
WS_DLL_Mult_FNC = FNC_AmultB(ia, ib)
End Function

Do not get confused by the completely different meanings of ampersands in C++ and VBA7: in C++, a leading ampersand denotes a call by reference, whereas in VBA7, a trailing ampersand declares a variable to be of type Long.

In case you find this recipe is not working for you, please let me know…

Mathematical constants in program code

Posted in binary, Compiler, Computer, decimal, hexadecimal, Mathematics, multi-precision calculations, Number systems, Software, Software tools, xNumbers on October 2, 2012 by giorgiomcmlx

This text is published under the Creative Commons non-commercial share-alike (CC-NC-SA) license V3.0. It was created by GiorgioMCMLX at carolomeetsbarolo in October 2012.

In source code concerned with numerical subjects, the values of all frequently used expressions that do not change during execution are usually precomputed externally and assigned to variables as sequences of decimal figures. For example, in some fictitious programming language that lacks an internal constant corresponding to the mathematical constant π, such a statement could look like

Declare Constant pi_on_3 as Double Precision = 1.04719755119659775E+00_8B

The mathematically exact value of π/3 = 1.047197551196597746154214461… comprises infinitely many decimal figures. Thus, it was decided to round it to a precision of 18 significant decimal figures in so-called scientific notation. The variable pi_on_3 is declared to be of type double-precision (DP) binary floating-point (FP), which uses 8 bytes = 64 bits for each value: 1 sign bit, 11 exponent bits, and 52 plus 1 implicit significand bits. This is the most widespread FP type used in numerical code nowadays. At first glance, the declaration seems quite natural, but it neglects the very property of π that led to its invention: π is a so-called irrational number; moreover, it even belongs to the subset of transcendental numbers. Consequently, π and all of its multiples definitely cannot be represented exactly by any of the FP variable types used in computers, because the values of those types are binary rational numbers, all of which correspond to decimal rational numbers. Nearly all mathematical constants are at least irrational; otherwise, their invention would not have made much sense…

From the point of view of a compiler, 1.04719755119659775E+00 is not a number but a string consisting of decimal characters. The transformation of such numeric strings found in code into values of numerical variables is an example of what is called parsing. When a compiler meets a statement like the one given above, it will call its parser in order to determine a DP number that will substitute for the value of the decimal string in the calculations. Parsing of decimals will almost always cause round-off errors (a special form of the more general quantisation errors), because the rational DP numbers form a sparse set of isolated points within the continuum of all the real numbers. As a consequence, the uncountably many real numbers comprised by a finite interval need to be parsed onto a single DP number.

Usually, a numerical string is parsed to the nearest DP. Then, the magnitude of the quantisation error lies somewhere between zero and half the distance between a pair of consecutive DP numbers. This full distance is called an ULP (unit in the last place). It is a relative measure: its absolute value doubles at every DP number that is an exact power of two. Each DP number DPval represents the interval of real numbers from DPval-0.5ULPs to DPval+0.5ULPs. Disregarding exact powers of two, the absolute distance between a pair of consecutive DP numbers is equal to the length of the interval that both DP numbers stand in for. Parsing a numerical string to the nearest DP thus limits the unavoidable quantisation error to ±0.5ULPs. The absolute value of an ULP depends on the exponent of the DP number it is associated with: 1 ULP = 2^(LPo2-52), where LPo2 denotes the true leading power of two that is encoded in biased form in the exponent. LPo2 equals zero for all DP numbers found in the interval [1;2); in this range, 1 ULP = 1/2^52 ≅ 2.22…E-16. This special absolute value is called the machine epsilon.
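In Python (3.9 and later), the built-in math.ulp exposes this quantity directly, so the statements above can be verified in a few lines:

```python
import math

# 1 ULP in the interval [1;2) equals the machine epsilon, 2^-52
assert math.ulp(1.0) == 2.0 ** -52

# the absolute value of an ULP doubles at every exact power of two
assert math.ulp(2.0) == 2.0 * math.ulp(1.0)
assert math.ulp(4.0) == 2.0 * math.ulp(2.0)
```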

Consequently, every exact binary DP number can be uniquely identified if at least 17 significant decimal figures (SFs) are provided. Unfortunately, this theorem seems to be widely misunderstood. In particular, it does not mean that any given numerical string consisting of 17 decimal SFs uniquely identifies a binary DP number. The limit of 17 decimal SFs only holds for parsing numerical strings whose values correspond to exact DPs, for example when re-reading the printed results of DP calculations. In the other direction, for example when parsing numerical strings to DP numbers in order to specify the input of DP calculations, at maximum about 770 (seven-seven-zero, no typo!) decimal SFs might be needed. Thus, the fictitious programmer, who might even have thought that specifying 18 decimal SFs is more than sufficient, might be terribly wrong.
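The "round-trip" direction of this theorem is easily demonstrated in Python; the value math.pi/3 merely serves as an example of an exact DP number:

```python
import math

x = math.pi / 3                 # some exact DP number

# 17 significant decimal figures always identify an exact DP uniquely...
s17 = format(x, ".17g")
assert float(s17) == x          # the round trip succeeds

# ...whereas 15 figures are generally not enough
s15 = format(x, ".15g")
assert float(s15) != x          # information has been lost
```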

For any parsing-to-nearest-DP process to succeed, it is necessary that the value of a given numerical string can be discriminated from the border between the quantisation intervals of a pair of consecutive DPs, which is called a "tie". Obviously, the rule "round/parse to nearest" has to be complemented by a tie-breaking instruction. Usually, this is "ties to even", meaning that in case of a tie the DP number having an even binary representation is chosen, i.e., the one whose trailing bit is zero. Ties formally involve just one additional bit as compared to the pair of DP numbers they separate; if the power of two that bit corresponds to is negative, this one extra bit requires exactly one extra decimal SF. The approximate relation between the total amount of decimal significant figures SF_dec of an exact DP and its leading power of two LPo2 has already been estimated by means of linear regression in a previous post:

SF_dec ≅ -0.699*LPo2 + 52.5  ; LPo2 ≤ 52
SF_dec ≅ +0.301*LPo2 + 0.496 ; LPo2 ≥ 52

For the most negative normalised value of LPo2, i.e. -1022, this formula predicts SF_dec ≅ 767. Consequently, at maximum about 770 decimal SFs assure that any numerical string having a value within the range of normalised DP numbers can definitely be parsed to the nearest DP number. Unfortunately, there is no way of predicting the amount of SF_dec that is really needed for a given decimal string without parsing it. The maximum amount will only be needed if the number is extremely close to a tie. On the other hand, if the number is close to an exact DP, 17 digits will do the job. So the scaling law that governs the amount of SF_dec actually needed for parsing-to-nearest-DP of a given decimal number needs to be worked out.

The distance between a given decimal number and the nearest DP tie can be calculated harnessing multiple-precision software. All of the calculations presented here were done using the FOSS XL-Addin called xNumbers. A simple VBA-script was used to output the properties of the multi-precision results obtained when dividing π by all integers between 1 and 2E+7. In order to assure that the absolute value of the relative measure ULP remained constant, all results were scaled by a power of two so that all the final results are found in the range between one and two. This kind of scaling does not change the relative position of a decimal number with respect to the quantisation interval of the nearest DP, because these intervals just scale likewise! The scaled results were then ordered by decreasing distance to the nearest DP tie. In xNumbers, there is a built-in function "dgmat" that outputs the amount of matching digits of two xNs. Adding one to the result of "dgmat" thus yields a good estimate of the amount of significant decimal figures that would allow for correctly parsing each of the scaled results to the DP nearest to it. The following table summarises some of the results.

amount of decimal significant figures for selected multiples of pi

It can be seen that 19 decimal SFs enable parsing π/3 to the nearest DP, whereas 23 SFs are needed for π/59425. Of course, you should be aware that a "stupid" parser might not use all the figures and thus yield an incorrect result! In order to find the scaling law that relates the distance dist_tie between a decimal number and its nearest DP tie to the amount SF_dec of decimal SFs that needs to be specified to enable parsing to nearest DP, the following figure depicts the values found in the last two columns of the preceding table (orange rhombuses). The normalised quantity dist_tie takes values between zero (exact tie) and one (exact DP); its values are computed by removing the sign from the non-normalised values and dividing them by 0.5ULPs.

relation between distance to tie and amount of decimal significant figures for selected multiples of pi

The line drawn in blue colour corresponds to the finally deduced scaling law:

SF_dec ≅ 17 - log10(dist_tie)
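This scaling law can be reproduced with exact rational arithmetic; the following Python sketch uses the standard fractions module in place of xNumbers, and the hard-coded 40 decimal figures of π are taken from its published expansion:

```python
import math
from fractions import Fraction

# pi to 40 significant decimal figures, stored as an exact rational number
PI = Fraction("3.141592653589793238462643383279502884197")

def sig_figs_needed(x: Fraction) -> int:
    """Decimal SFs needed to parse x (scaled into [1;2)) to the nearest DP."""
    d = Fraction(float(x))                 # the nearest DP, taken exactly
    half_ulp = Fraction(1, 2 ** 53)        # 0.5 ULPs in the interval [1;2)
    dist_tie = 1 - abs(x - d) / half_ulp   # 1 = exact DP, 0 = exact tie
    return math.ceil(17 - math.log10(dist_tie))

print(sig_figs_needed(PI / 3))             # 19, in agreement with the table above
```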

Summarising, the amount of decimal significant figures to which a mathematical constant has to be specified in order to enable parsing-to-the-nearest-DP depends on the distance between this constant and its nearest DP tie. Unfortunately, this distance is not known until the constant has been parsed, so that this has to be done in the multi-precision environment in which the constant is calculated! This solution is mathematically correct but practically unsafe and stupid as well, because all the parsing work would have to be done at least twice: once when writing the code and once again during code execution. A simple improvement is obvious though: instead of using the probably very long numerical string corresponding to the mathematically exact value of a constant, simply the numerical string comprising the 17 decimal significant figures of the nearest DP is to be used.

This method can still be greatly improved by avoiding any parsing at run-time, which can be achieved by reading/writing bit patterns directly from/to the memory of a computer. All major programming languages provide means for this kind of manipulation: a DP number is converted into 16 hexadecimal figures, each of which corresponds to 4 bits, and vice versa. Unfortunately, two different schemes co-exist that define the ordering of the 8 bytes of a DP number: little-endian and big-endian byte order. Within a byte, the bits always follow the big-endian scheme. Thus, most humans like to read these hexadecimal numbers in big-endian notation, which is why it is used here. Of course, the endianness of stored hexadecimal strings should be fixed (e.g., to big-endian), and the conversion routines should account for the endianness of the machine they are executed on. Concluding, the statement given at the beginning of this post should be replaced by something like this:

Declare pi_on_3 as Double Precision
pi_on_3 = HEX2DP("3FF0C152382D7366")

where HEX2DP calls the function that converts the 16 hexadecimal figures into a DP number. The code of this function depends on the programming language as well as on the hardware platform.
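A sketch of such a conversion pair in Python; the names hex2dp/dp2hex mirror the HEX2DP used above, and the ">" format character of the struct module fixes big-endian byte order regardless of the machine the code runs on:

```python
import math
import struct

def hex2dp(h: str) -> float:
    """Interpret 16 big-endian hexadecimal figures as the bit pattern of a DP number."""
    return struct.unpack(">d", bytes.fromhex(h))[0]

def dp2hex(x: float) -> str:
    """Inverse conversion: DP number -> 16 big-endian hexadecimal figures."""
    return struct.pack(">d", x).hex().upper()

pi_on_3 = hex2dp("3FF0C152382D7366")           # the correctly rounded DP value of pi/3

assert dp2hex(pi_on_3) == "3FF0C152382D7366"   # the round trip is exact, bit for bit
```

Note that pi_on_3 obtained this way may differ from math.pi/3 by one ULP, because the latter incurs an additional round-off error in the division.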

Catastrophic cancellation in action

Posted in Computer, Fortran, Maxima, Microsoft Excel, mpmath, multi-precision calculations, Programming Languages, Python, Software, Software tools, Spreadsheet, VBA, xNumbers on July 20, 2012 by giorgiomcmlx


In order to obtain the reference values needed for checking the results of numerical software that uses data types like double precision (DP), I like to harness three multiple-precision (MP) tools, i.e., xNumbers, Maxima, and PY-mpmath. If all the MP results agree to the last decimal digit, I take them for granted, because it seems very unlikely to me that the mentioned tools are mutual clones of each other. Moreover, even effects caused by routines of an OS can be identified, because mpmath as well as Maxima are supported on both Linux and Windows. When saving the MP results to CSV-formatted files, all the MP numbers need to be preceded by a guard character in order to prevent the spreadsheet software, which will later be used to carry out the comparison, from stealing most of the decimal digits. Unfortunately, even the parsers of high-level programming languages fail astonishingly often when converting numerical strings into the values of DP variables. Therefore, I usually save not only the DP results, but their bytes in hex notation as well. According to my experience, these hex values provide the only reliable way of representing data meant for exchange between any two different computer environments. They are particularly useful in case of floating-point numbers much smaller than one, because in decimal notation a really huge amount of decimal figures may be necessary in order to uniquely denote a specific DP value.

The tangent function near π/2

The tangent function has a so-called pole at π/2, i.e., with x approaching π/2 from below, tan(x) tends to infinity, and with x approaching π/2 from above, tan(x) tends to minus infinity. This behaviour contradicts the maximum-smoothness paradigm frequently used when designing an algorithm for a built-in function. Near π/2, correctly approximating the tangent function can only be achieved by using a model function that has a pole at π/2, too. Though it is not uncommon for built-in functions to use several different polynomial representations over the range they are defined on, the usage of model functions having poles is surprisingly rare.
For example, many people have realised that the tangent functions of MS Excel® 2002 and VBA6 both lack this change of modelling paradigm in their model functions. Consequently, over some interval around the mathematically exact value of π/2, they both yield very inaccurate values for tan(x). In the next figure, the effect is demonstrated for the 49 DP values bracketing the exact value of π/2.

The maximum error is found outside the range of this figure; it occurs at pio2_DP, by which the DP value closest to π/2 is denoted here, and which is smaller than π/2 by about 6.12E-17. In case of XL2002, the maximum relative error is found to be about -4.1E-4; in case of VBA6, it is 3.3E-5. Obviously, different model functions are being used for the tangent function in XL2002 and VBA6, respectively. The relative errors of the tangent function in XL2002 are about 12.4 times bigger than those in VBA6, and of opposite sign! This means XL2002 can hardly be used to try out an algorithm in a spreadsheet before encoding it in VBA6… interesting…

Some people obviously know of the trigonometric identity tan(x)=1/tan(π/2-x), and have tried to use it in order to obtain a more reliable value of tan(x) in case x is very close to π/2. However, applying this trigonometric identity naively to a value of x near π/2 in DP leads to relative errors that are at least about 5 orders of magnitude bigger than those found in case of the flawed built-in tangent functions. Thus, defining w_DP = pio2_DP – x_DP and then using 1/tan(w_DP) as a substitute for tan(x) is a very bad idea, as can be seen from the next figure.

From comparisons to results of MP software, it is known that around zero the tangent functions of XL2002 and VBA6 both yield identical values that moreover are correct to DP precision. This means there must be some odd effect that renders the difference pio2_DP – x meaningless in case x is close to pio2_DP. This effect is easily identified: it is called catastrophic cancellation. π is an irrational number that definitely cannot be represented exactly by the sparse set that all the rational DP values form within the dense set of real values. Mathematicians would say something like "The set comprising all the DP values is of measure zero within the set of real numbers", by which they mean that the probability that any given real number can be exactly represented by a DP value is almost always zero.
Imho, this points out one of FORTRAN's really big crimes: calling a data type "real" that in fact can only represent rational numbers; moreover, it can represent only a tiny subset of all the rational numbers. Nevertheless, the set of DP numbers has been cleverly designed to cover a range of numerical values that should be sufficient for almost all numerical efforts of humans. Unfortunately, this claim does not hold true for the precision of the DP numbers.

The DP value nearest to π/2, pio2_DP, is found to be 3F F9 21 FB 54 44 2D 18 in big-endian DP hex notation, which corresponds to about +1.570796326794896558 in decimal notation. Theoretically, exactly identical problems should occur around pi_DP, which is represented by 40 09 21 FB 54 44 2D 18 or about +3.141592653589793116, respectively, because not only will the error double from 6.12E-17 to 1.22E-16, but the distance between adjacent DP numbers will do so, too, as 1 < π/2 < 2 and 2 < π < 4. The next figure provides the position of π in the three floating-point formats SP, DP, and QP. All the differences are given in ULPs, an abbreviation that means "unit in the last place" and thus is a relative measure. The corresponding absolute measures are given in the figure. There is one important detail: in SP, which was the dominant floating-point format until about 1985, the value nearest to π was larger than the mathematical value, whereas in DP as well as in QP it is smaller than π. This caused much trouble when code was ported from SP to DP about thirty years ago… and fortunately will cause much less trouble when code is ported from DP to QP in the near future.

It is now clear why the cancellation that occurs when π/2-x is calculated as pio2_DP-x_DP is to be called catastrophic: the closer this difference gets to zero, the more prominent the consequences of the inaccuracy in pio2_DP will be. Fortunately, the cure is easy enough: simply provide all the bits lost in cancellation in order to keep the result accurate to the full precision of 53 bits. This can be achieved by defining the DP variable v_DP, which is meant to represent the DP values of π/2-x, as follows: v_DP = [(pio2_DP – x_DP) + pio2_DPcorr]. Therein, pio2_DPcorr contains the trailing 53 bits that need to be added to pio2_DP in order to yield an approximation to π/2 with a precision of 106 bits. Any bits that are lost due to cancellation in the difference (pio2_DP – x_DP) will be filled in from pio2_DPcorr, thus maintaining a precision of 53 bits under all circumstances. Consequently, 1/tan(v) does not show any systematic errors and should be used to replace tan(x) in the region where it is flawed, as can be seen from the next figures.
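This cure can be sketched in Python; pio2_DP and pio2_DPcorr below are the two hex constants listed further down in this post, converted to decimal, and math.tan stands in for the built-in tangent routine:

```python
import math

pio2_DP     = 1.5707963267948966      # the DP value closest to pi/2
pio2_DPcorr = 6.123233995736766e-17   # trailing bits: pio2_DP + pio2_DPcorr
                                      # approximates pi/2 to 106 bits

def tan_near_pio2(x: float) -> float:
    """tan(x) for x near pi/2, via the identity tan(x) = 1/tan(pi/2 - x).

    The bits lost to cancellation in (pio2_DP - x) are filled in from
    pio2_DPcorr, so v stays accurate to the full 53 bits."""
    v = (pio2_DP - x) + pio2_DPcorr
    return 1.0 / math.tan(v)
```

For x = pio2_DP itself, the naive difference pio2_DP - x cancels to exactly zero, whereas v equals pio2_DPcorr and 1/tan(v) reproduces the reference value +1.6331239353195370E+16 listed further down.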

According to my experience, the tangent function in XL2010 is still flawed around π/2, but the one in VBA7 is not. Keep in mind that mathematically unnecessary brackets can really matter in Excel, i.e., in case x_DP is very close to pio2_DP, entering something corresponding to “=(pio2_DP – x_DP)” in a cell will produce exact results, whereas “= pio2_DP – x_DP” will yield a large interval filled with zeros around x_DP = pio2_DP.

In general, it is hard to predict whether a software package implements a tangent function that is precise around π/2 or not. In order to enable you to check for that, here are the two DP values bracketing π/2 together with the correct DP values of the tangent function at these values:

x_lo_DP : 3F F9 21 FB 54 44 2D 18 // +1.570796326794896558
x_hi_DP : 3F F9 21 FB 54 44 2D 19 // +1.570796326794896780
tan(x_lo_DP)_DP : 43 4D 02 96 7C 31 CD B5 // +16331239353195370.0
tan(x_hi_DP)_DP : C3 36 17 A1 54 94 76 7A // -6218431163823738.0
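These four hex values can be turned into a small self-check; the following Python sketch decodes them with struct. Whether the assertions hold to the very last ULP depends on the quality of the platform's libm, hence the deliberately generous tolerance:

```python
import math
import struct

def hex2dp(h: str) -> float:
    # interpret 16 big-endian hex figures as a DP bit pattern
    return struct.unpack(">d", bytes.fromhex(h))[0]

x_lo = hex2dp("3FF921FB54442D18")
x_hi = hex2dp("3FF921FB54442D19")
tan_lo_ref = hex2dp("434D02967C31CDB5")   # +16331239353195370.0
tan_hi_ref = hex2dp("C33617A15494767A")   # -6218431163823738.0

# a tangent function that is precise around pi/2 reproduces both reference values
assert math.isclose(math.tan(x_lo), tan_lo_ref, rel_tol=1e-9)
assert math.isclose(math.tan(x_hi), tan_hi_ref, rel_tol=1e-9)
```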

In case you would like to implement your own tangent function using the helper variable v as defined above, you need to subtract your x from the value of x_lo_DP given above; the DP that needs to be added to that result in order to maintain the full precision of 53 bits is:

pio2corr : 3C 91 A6 26 33 14 5C 07 // +6.123233995736766036E-17

The level of precision of such pairs of DP values is often referred to by the term "double-double" (DD). In this context, replacing a single line of code in DP precision by its equivalent in DD made the algorithm work properly. As long as the IEEE QP floating-point data type is not widely available in numerical software, using libraries written in DD may solve many problems that are caused by catastrophic cancellation in 53 bits. Depending on whether or not the programming language used to implement such a library makes use of the high-precision internal registers of modern CPUs, some care must be taken in order to make them work properly. Imho, VBA does not use them, but modern versions of FORTRAN, C, Python, etc. do. In that case, the line v = (pio2_DP – x_DP) + pio2_DPcorr would need to be modified in order to assure that the difference (pio2_DP – x_DP) is rounded to DP precision before it is used in the subsequent addition. Thus, it has to be pulled out of the CPU register and written to memory, from where it is then re-read.

Obviously, the procedure of partial enhancement of precision described in this post has a variety of applications. One of the most important examples where it needs to be implemented is the so-called argument reduction that is frequently applied in all subjects dealing with periodic processes.