On Fri, Feb 12, 2016 at 3:05 AM, Birgitta Hauser <Hauser@xxxxxxxxxxxxxxx> wrote:
> It's my definition (it was translated word for word from German).
> But I don't think it works differently from the rest of the world!

It absolutely works differently.

> Try the same thing with very large numbers (31, 4) in any other environment.
> You'll get differences there, too.

No, there is an international engineering standard for floating point,
called IEEE 754. This has been the predominant way of doing
calculations in the entire computing world for decades, EXCEPT in
mainframe and business computing, where it competes against fixed
point. There are very good reasons to use fixed-point arithmetic in
business. But for most scientific, engineering, and applied
mathematics use, where the goal is to model real numbers (in the
mathematical sense of real numbers, which encompasses integers,
rational numbers, and irrational numbers) in a practical way, floating
point is vastly superior, and for around 20 years or so, mainstream
CPUs have included hardware support for IEEE 754.

The core idea in floating point is that you have a fixed TOTAL
precision. Some fixed number of bits is devoted to storing the
mathematically significant digits. Then another fixed number of bits
stores the exponent. It doesn't matter where the decimal point is;
that's why it's called floating POINT, and that is the beauty of it.
The value 2.3e120 uses 2 significant digits. The value 2.3e-58 also
uses 2 significant digits. These numbers are NOT cast to (121, 0) and
(60, 59) or anything like that. You don't have enough digits in a
64-bit fixed point variable to even store such numbers. But these fit
very comfortably in 64-bit floating point variables.
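
To make that concrete, here is a minimal sketch in Python (just a
convenient illustration, not anything RPG-specific), whose built-in
float is a 64-bit IEEE 754 double:

import sys

big = 2.3e120    # 2 significant digits, enormous magnitude
tiny = 2.3e-58   # 2 significant digits, tiny magnitude

print(big, tiny)            # both fit comfortably in a 64-bit double
print(sys.float_info.max)   # about 1.8e308, the largest finite double
print(2**63 - 1)            # 9223372036854775807: a 64-bit integer or
                            # fixed-point value tops out around 9.2e18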

The sacrifice that floating point makes is that only the highest-order
digits (wherever they happen to be in relation to the decimal point!)
are stored. For today's typical 64-bit floats, this is only about 16
digits of precision. So some (31, 4) values would NOT be accurately
stored in such a float. But float doesn't bother storing *useless*
zeros, and OP's 1.5267 is only 5 significant digits, no matter how
many leading or trailing zeros you want to tack on.
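
A quick Python sketch of that trade-off (again purely illustrative;
Python's float is a 64-bit double):

from decimal import Decimal

# A (31, 4) value that uses all 31 digits: more than a double's roughly
# 16 significant digits, so the low-order digits are lost.
d = Decimal("123456789012345678901234567.8901")
f = float(d)
print(f)               # about 1.2345678901234568e+26 -- the digits past
                       # ~16 are gone
print(Decimal(f) - d)  # the rounding error introduced by the conversion

# 1.5267 has only 5 significant digits, so its decimal digits survive the
# round trip intact (even though the stored binary value is, strictly
# speaking, an approximation).
print(float(Decimal("1.5267")))   # prints 1.5267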

Sorry for the long response, but I think it's helpful for anyone in
computing, even in business computing, to understand what the word
"float" really means. Namely, it's short for "binary
floating point", which is basically the scientific notation you
learned in secondary school, but adapted for computers.
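
If you want to see that scientific-notation structure directly, the
fields of a 64-bit double can be picked apart with Python's struct
module (decode_double is just a throwaway helper name for this sketch):

import struct

def decode_double(x):
    # A double is (-1)**sign * 1.fraction * 2**e, stored as 1 sign bit,
    # 11 exponent bits (biased by 1023, removed below), and 52 fraction
    # (significand) bits.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023
    fraction = bits & ((1 << 52) - 1)
    print(f"{x}: sign={sign} exponent=2**{exponent} "
          f"fraction=0x{fraction:013x}")

decode_double(2.3e120)   # huge positive binary exponent
decode_double(2.3e-58)   # large negative binary exponent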

In fact, RPG and SQL both have always had direct support for floating
point. It's type code F in DDS and RPG D-specs. It's FLOAT, REAL, or
DOUBLE [PRECISION] in SQL.
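
For what it's worth, the difference between REAL (4 bytes) and DOUBLE
(8 bytes) is just how many significand bits you get. Here is a Python
sketch squeezing the same value through both widths, using struct to
emulate the 4-byte format since Python's own float is always 8 bytes:

import struct

x = 1.5267

# 8-byte double (SQL DOUBLE PRECISION, RPG type F length 8): ~16 digits.
(as_double,) = struct.unpack("d", struct.pack("d", x))
# 4-byte single (SQL REAL, RPG type F length 4): only ~7 digits.
(as_single,) = struct.unpack("f", struct.pack("f", x))

print(as_double)   # 1.5267
print(as_single)   # 1.5267000198364258 -- the nearest 4-byte value,
                   # shown widened back to 8 bytes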

Incidentally, the intricate rules for the way RPG or SQL adjust their
precision internally for intermediate calculations on fixed point
variables could sort-of be described as "floating PRECISION" (not that
I would call it that, but at least I understand how the word "float"
might creep into it).
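
If it helps, Python's decimal module has an explicit version of that
idea: a context that fixes how many significant digits results keep,
wherever the decimal point lands. This is only an analogy, not RPG's or
SQL's actual intermediate-result rules:

from decimal import Decimal, getcontext

getcontext().prec = 31           # keep 31 significant digits
print(Decimal(1) / Decimal(3))   # 0.333... (31 threes)

getcontext().prec = 10           # now only 10
print(Decimal(1) / Decimal(3))   # 0.3333333333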

John Y.
