# Re: This calculation is just wrong / computer can't count!

Most BCD implementations have precision limits, typically around 15 digits. The real
problem is that a computation like 25/30 produces a repeating decimal (0.8333...), which
would require infinite precision; no finite number of digits can represent it exactly in
BCD fixed point, binary fixed point, or any other non-symbolic arithmetic that exists on
computers.

Generally, BCD offers no advantages over fixed-point binary arithmetic. FPB was used for
years in graphics work; for example, I represent my coordinates with 24.8 bits of
precision. That way, cumulative roundoff distorts images far less, because the upper 24
bits of the value are my actual coordinate and the low-order 8 bits just "hang onto" the
roundoff error. But if I rotate something often enough using this technique, the
cumulative error from having only an 8-bit fraction will eventually begin to distort it.

The LISP folks did some long-precision arithmetic in the 70s; the Google search key would
be BIGNUM. (When Guy Steele was in California one summer, someone explained the length of
the famous El Camino Real ["the royal road"] to him; he said it was too long to be a Real
and dubbed it El Camino BIGNUM.)

I once worked on a machine which had no multiply instruction; instead, multiplication was
done by a subroutine call. The machine was a decimal machine. The call was something
like

MULTIPLY(result, decimal, multiplier, decimal, multiplicand, decimal)

and the "decimal" values were the number of decimal points. For financial calculations,
we kept 3 decimal digits of precision and only printed two. That way the roundoffs
balanced out and the totals were approximately correct. In 1964 it was well-known that
you could not divide 25/30 and get a precise number, and that was 43 years ago. There
were even a couple of paragraphs in the manual describing the effects of decimal
precision and roundoff. The FORTRAN floating-point library I used in 1963 kept the
values in decimal (an IBM 1620) and devoted a chapter of the manual to floating-point
roundoff issues. They were no different then than they are today.

The only systems I know of that actually preserve "indefinite" accuracy are the LISP
libraries that did "rational arithmetic" by keeping symbolic numerator/denominator pairs.
They understood how to add, subtract, multiply, and divide rational numbers, and could
even compute sin, cos, etc. symbolically. I think MATLAB or one of those systems may
still support rational arithmetic.

The reason that the 7090/7094, PDP-6/10/20, and many other machines of the era had 36-bit
words (not 32-bit) was that John von Neumann had computed that for all real physical
problems, 36 bits was sufficient to get accurate answers, taking into account the roundoff
errors in the least significant bit. By using 9 bits of exponent and 26 bits of mantissa
(plus 1 bit of sign), he computed that this had sufficient dynamic range (exponent) and
sufficient precision (mantissa) for all realistic computations that were possible in the
foreseeable future. Like most of his work, he was absolutely right.

Alderson, the wizard of JPL (Niven's "Alderson Drive" was named in his honor), understood
floating-point roundoff, and he was in great demand for writing navigation code for
onboard computers on projects like interplanetary space missions, because he understood
how to make the errors irrelevant instead of cumulative, so the spacecraft arrived where
it was supposed to and not ten million kilometers off course.

The ultimate issue here is that unlimited decimal precision rarely buys anything (except
in very esoteric cases like the Windows Calculator program), and in real situations,
binary floating point (note that a floating-point multiply takes one (1) CPU clock cycle
on a Pentium, that is, about 0.36ns on a 2.8GHz machine) always does the right thing; it
is the programmer's responsibility not to let errors become cumulative. And that's just
basic programming skill. By the way, I don't do floating point where these issues
matter, so I'm not an expert on how to do it; but I do know that the best efforts of a
host of people to educate the OP on this matter seem to be wasted effort.
joe

On Sat, 6 Oct 2007 21:00:02 -0700, "David Ching" <dc@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> I want to ask: can anyone recommend a library that produces results like a
> calculator (I believe it is called fixed precision decimal)? I understand
> that the floating point types in C++ are not meant for this, but what search
> terms do I google to find a library that offers this functionality?
>
> I had thought BCD (binary coded decimal) would be a good start, but there
> isn't anything on sourceforge, for example, that has this....
>
> Thanks,
> David

Joseph M. Newcomer [MVP]
email: newcomer@xxxxxxxxxxxx
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
