Smallest positive number a machine can store

If we run the following code
s = 1
while s > 0:
    print(s)
    s = s / 10

the last output is 10^-323

If we run the following one
s = 1
a = 1 + s
while a > 1:
    print(s)
    s = s / 10
    a = 1 + s

the last output is 10^-15
The two results appear to be inconsistent. In the second one, if I change 1 to 16 (adding 4 bits) and run it again, the result is 10^-14. My question is: what is the significance of the first result, and why is the second one so different from the first one?

The sys module contains min/max constants for floating point numbers:

>>> import sys
>>> sys.float_info
sys.float_info(
    max=1.7976931348623157e+308,
    max_exp=1024,
    max_10_exp=308,
    min=2.2250738585072014e-308,
    min_exp=-1021,
    min_10_exp=-307,
    dig=15,
    mant_dig=53,
    epsilon=2.220446049250313e-16,
    radix=2,
    rounds=1
)

Thanks for your reply.
My question is: what is the difference between min and epsilon? And why is the minimum from the code (10^-323) different from the one in sys.float_info (10^-308)?

Python’s float type uses IEEE 754-1985 binary double precision floating point numbers. A double precision float uses 64 bits: 1 sign bit, 11 exponent bits and 52 fraction bits. Epsilon is 2**-52 == 2.220446049250313e-16. Simply speaking, it gives you some information about rounding errors. The minimum value of 2.2250738585072014e-308 is the smallest normally represented float value. You can get down to about 5e-324 with the subnormal representation.
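
For example, these relationships are easy to check at the prompt (a
quick sketch, assuming the usual 64-bit double format):

>>> import sys
>>> sys.float_info.epsilon == 2**-52
True
>>> sys.float_info.min == 2**-1022
True
>>> sys.float_info.min * sys.float_info.epsilon   # 2**-1074, the smallest subnormal
5e-324
>>> 5e-324 / 2                                    # anything smaller rounds to zero
0.0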

The IEEE 754 floating point standard is complex. The numbers do not behave like your typical school math values. For example, there are also NaNs (not a number), and zero is signed.
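
A couple of quick illustrations of those two points (this is just how
IEEE 754 behaves, nothing Python-specific):

>>> nan = float("nan")
>>> nan == nan                  # NaN is not equal to anything, not even itself
False
>>> 0.0 == -0.0                 # the two zeros compare equal...
True
>>> import math
>>> math.copysign(1.0, -0.0)    # ...but the sign really is stored
-1.0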


Hi Atisdipankar,

Neither calculation is an accurate way to find the smallest number
Python can use as a float.

Python floats, like floats on nearly all computers, are based on powers
of two, not powers of 10. So dividing by 10 misses some numbers.
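
To make that concrete, here is one of the values the divide-by-ten loop
skips right over (a small sketch):

>>> 1e-323 / 10     # rounds straight past 5e-324 down to zero
0.0
>>> 5e-324 > 0      # even though 5e-324 is a perfectly good float
True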

We can find the smallest positive number:

>>> s = 1.0
>>> while s/2 > 0:
...     s = s/2
... 
>>> s
5e-324

Notice this is a lot smaller than sys.float_info.min:

>>> sys.float_info.min
2.2250738585072014e-308

which is the smallest “normalised” value. 5e-324 is the smallest
“subnormal” value.
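
You can watch the crossover from normal to subnormal values directly (a
quick sketch):

>>> 0 < sys.float_info.min / 2 < sys.float_info.min   # subnormals live down here
True
>>> sys.float_info.min / 2**52                        # the last stop before zero
5e-324
>>> sys.float_info.min / 2**53                        # halve once more and you get zero
0.0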

In binary, 5e-324 corresponds to these 64 bits:

0 00000000000 0000000000000000000000000000000000000000000000000001

where the first bit is the sign bit, the middle 11 bits are the exponent,
and the final 52 bits are the significand. The next smaller value is 0.

sys.float_info.min looks like this in binary:

0 00000000001 0000000000000000000000000000000000000000000000000000
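
If you want to check these bit patterns yourself, the standard struct
module can reinterpret a float as a 64-bit integer (a sketch;
float_bits is just a throwaway helper name):

>>> import struct
>>> def float_bits(x):
...     [n] = struct.unpack(">Q", struct.pack(">d", x))   # the raw 64 bits
...     b = format(n, "064b")
...     return " ".join([b[0], b[1:12], b[12:]])          # sign, exponent, significand
... 
>>> float_bits(5e-324)
'0 00000000000 0000000000000000000000000000000000000000000000000001'
>>> float_bits(sys.float_info.min)
'0 00000000001 0000000000000000000000000000000000000000000000000000'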

As the exponent increases, the value represented by the significand
also increases. (That’s what floating point numbers do.) The float 1.0
is this in binary:

0 01111111111 0000000000000000000000000000000000000000000000000000

and the next value just above 1.0 is this in binary:

0 01111111111 0000000000000000000000000000000000000000000000000001

or 1.0000000000000002

We can find it like this:

>>> s = 1.0
>>> for i in range(1000):
...     s = s + sys.float_info.min * 2**i
...     if s > 1.0: break
... 
>>> s
1.0000000000000002
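
On Python 3.9 or later the math module can do this search for you
(math.nextafter and math.ulp are standard library functions):

>>> import math
>>> math.nextafter(1.0, 2.0)            # the closest float above 1.0
1.0000000000000002
>>> math.ulp(1.0) == sys.float_info.epsilon
True
>>> math.nextafter(0.0, 1.0)            # the smallest positive float
5e-324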

Here is an excellent way of visualising floating point numbers:

https://fabiensanglard.net/floating_point_visually_explained/

And don’t forget the FAQ:

https://docs.python.org/3/faq/design.html#why-are-floating-point-calculations-so-inaccurate


Floating point is stored as a rational number, being an integer of a
certain width and an exponent to scale it: the numerator and the
denominator, effectively. (Well, the stored significand is really a
binary fraction IIRC, and to gain an extra bit the leading “1” of the
binary value is assumed rather than stored, with a separate bit for the
sign. But these are details.)

The min is a reflection of the size of the storage for the exponent -
how negative a value can you fit in it? That expresses how small the
scale can be.

However, epsilon reflects the size of the numerator - how small a
difference between two nearby values can be expressed, given the width
of that integer.
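
Both relationships can be checked from the fields of sys.float_info (a
quick sketch; mant_dig is the number of significand bits, min_exp the
smallest usable exponent, and math.ldexp(x, n) computes x * 2**n
exactly):

>>> import sys, math
>>> math.ldexp(1.0, 1 - sys.float_info.mant_dig) == sys.float_info.epsilon
True
>>> math.ldexp(1.0, sys.float_info.min_exp - 1) == sys.float_info.min
True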

Cheers,
Cameron Simpson cs@cskk.id.au
