If we run the following code
while s > 0:
the last output is 10^-323
If we run the following one
while a > 1:
the last output is 10^-15
The two results appear to be inconsistent. In the second one, if I change 1 to 16 (adding 4 bits) and run it again, the result is 10^-14. My questions are: what is the significance of the first result, and why is the second one so different from the first?
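For context, loops consistent with these outputs look something like the following sketch. The loop bodies are my assumption, since the question does not show them; what matters is that both loops halve something repeatedly, but test different conditions:

```python
# Loop 1: halve a positive float until it underflows to zero.
s = 1.0
while s > 0:
    last_s = s
    s = s / 2
# last_s is the smallest positive (subnormal) float, about 5e-324 ~ 10**-323

# Loop 2: halve the gap between a and 1 until 1 + gap rounds back to 1.
a = 2.0
while a > 1:
    last_a = a
    a = 1 + (a - 1) / 2
# last_a - 1 is the machine epsilon, 2**-52, about 2.2e-16 ~ 10**-15
```

The first loop runs until the *exponent* range is exhausted; the second stops as soon as the *fraction* bits can no longer represent a difference from 1, which is why the two limits are so far apart.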
Python’s float type uses IEEE 754 binary double precision floating point numbers. A double precision float uses 64 bits: 1 sign bit, 11 exponent bits and 52 fraction bits. The machine epsilon is 2**-52 == 2.220446049250313e-16; simply speaking, it tells you something about relative rounding error. The smallest positive *normal* float is 2.2250738585072014e-308, but you can get down to about 5e-324 with subnormal representation.
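You don't have to derive these constants; the standard library exposes them directly:

```python
import sys

info = sys.float_info
print(info.epsilon)  # 2.220446049250313e-16, i.e. 2**-52
print(info.min)      # 2.2250738585072014e-308, smallest positive *normal* float
print(info.dig)      # 15: decimal digits that round-trip faithfully

# Below sys.float_info.min, subnormals trade precision for range:
smallest = 5e-324           # smallest positive subnormal
print(smallest / 2 == 0.0)  # True: halving it underflows to zero
```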
The IEEE 754 floating point standard is complex, and its numbers do not behave like your typical school math values. For example, there are also NaNs (not a number), and zero is signed.
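Two quick examples of that non-school behaviour:

```python
import math

nan = float("nan")
print(nan == nan)  # False: NaN compares unequal to everything, itself included

neg_zero = -0.0
print(neg_zero == 0.0)               # True: the two zeros compare equal...
print(math.copysign(1.0, neg_zero))  # -1.0: ...but the sign really is stored
```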
Floating point is stored as a rational number: an integer significand of a
certain width, and an exponent to scale it; the numerator and the
denominator, effectively. (Well, the significand is stored as a binary
fraction, and to gain an extra bit the leading “1” of the normalized value
is assumed rather than stored; the sign gets a bit of its own. But these
are details.)
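You can see this significand-times-scale decomposition from Python, for instance with `math.frexp` or `float.hex`:

```python
import math

# frexp returns (m, e) with value == m * 2**e and 0.5 <= abs(m) < 1
print(math.frexp(2.5))  # (0.625, 2), because 0.625 * 2**2 == 2.5

# float.hex shows the implicit leading 1 and the 13 hex digits
# (52 bits) of stored fraction
print((2.5).hex())      # '0x1.4000000000000p+1'
```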
The min is a reflection of the size of the storage for the exponent:
how negative an exponent can you fit in it? That determines how small the
scale factor can be.
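A quick probe of that exponent range, using `math.ldexp` to build the scale factor directly (assuming CPython on an IEEE 754 platform):

```python
import math

# Normals can scale down to 2**-1022; below that, subnormals borrow
# fraction bits to reach as low as 2**-1074.
print(math.ldexp(1.0, -1022))  # 2.2250738585072014e-308, smallest normal
print(math.ldexp(1.0, -1074))  # 5e-324, smallest subnormal
print(math.ldexp(1.0, -1075))  # 0.0: no exponent (or fraction bit) left
```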
However, epsilon reflects the size of the significand: how small a
difference between two adjacent values, relative to their magnitude, can
be expressed by the width of the fraction field.
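That width is exactly what the second loop measures: near 1.0 the spacing between adjacent floats is 2**-52, so adding anything at or below half of that to 1.0 simply rounds back to 1.0. It also explains the 1-vs-16 observation, since the spacing scales with magnitude:

```python
eps = 2.0 ** -52

print(1.0 + eps > 1.0)       # True: eps is the gap to the next float above 1.0
print(1.0 + eps / 2 == 1.0)  # True: half the gap rounds back down to 1.0

# The spacing scales with magnitude, so testing against 16 = 2**4
# instead of 1 gives a threshold 2**4 times larger:
print(16.0 + 16 * eps > 16.0)  # True: one ulp of 16
print(16.0 + 8 * eps == 16.0)  # True: half an ulp of 16 rounds away
```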