If we run the following code
s = 1
while s > 0:
    print(s)
    s = s / 10

the last output is 10^-323

If we run the following one
s = 1
a = 1 + s
while a > 1:
    print(s)
    s = s / 10
    a = 1 + s

the last output is 10^-15
The two results appear to be inconsistent. In the second one, if I change 1 to 16 (adding 4 bits) and run it again, the result is 10^-14. My question is: what is the significance of the first result, and why is the second one so different from the first?

Thanks for your reply.
My question is: what is the difference between min and epsilon? Why is the min different in the result of the code (10^-323) and in sys.float_info (10^-308)?

Python's float type uses IEEE 754-1985 binary double precision floating point numbers. A double precision float uses 64 bits: 1 sign bit, 11 exponent bits and 52 fraction bits. Epsilon is 2**-52 == 2.220446049250313e-16. Simply speaking, it gives you some information about rounding errors. The minimum value of 2.2250738585072014e-308 is the smallest normal float value. You can get down to about 5e-324 with subnormal representation.
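You can query these limits directly via Python's `sys.float_info`; a short sketch:

```python
import sys

# Epsilon: the gap between 1.0 and the next larger representable float.
print(sys.float_info.epsilon)            # 2.220446049250313e-16
print(sys.float_info.epsilon == 2**-52)  # True

# min: the smallest positive *normal* float (exponent at its minimum).
print(sys.float_info.min)                # 2.2250738585072014e-308
print(sys.float_info.min == 2**-1022)    # True

# Subnormals go smaller still, trading precision for range.
print(5e-324)       # smallest positive subnormal (2**-1074)
print(5e-324 / 2)   # 0.0 -- halving it underflows to zero
```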

The IEEE 754 floating point standard is complex. The numbers do not behave like your typical school math values. For example, there are also NaNs (not a number), and zero is signed.

Floating point is stored as a rational number: an integer of a certain width and an exponent to scale it, effectively a numerator and a denominator. (Well, the significand is interpreted as a fraction IIRC, and to gain an extra bit of precision the leading "1" of the binary value is implied rather than stored. But these are details.)

The min is a reflection of the size of the storage for the exponent: how negative an exponent can you fit in it? That expresses how small the scale can be.
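That is what the first loop in the question runs into: dividing by 10 pushes s below the normal minimum into subnormal territory, and it only stops once the value underflows to exactly zero. A sketch:

```python
import sys

s = 1.0
last = s
while s > 0:
    last = s     # remember the last nonzero value
    s = s / 10

print(last)                        # 1e-323, per the question's run
print(last < sys.float_info.min)   # True -- it is a subnormal
```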

Epsilon, however, reflects the size of the numerator: how small a difference between two nearby values can be expressed in an integer of that width.
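This is why the second loop stops so much earlier than the first: near 1.0 the spacing between floats is epsilon, so an addend much below eps/2 rounds away entirely, and near 16.0 the spacing (same 52 fraction bits, exponent 4 higher) is 16 times larger. A sketch:

```python
import sys

eps = sys.float_info.epsilon   # 2**-52, about 2.22e-16

# Near 1.0 the float spacing is eps, so 1e-15 still registers
# but 1e-16 is rounded away.
print(1 + 1e-15 > 1)     # True  -- the last value the second loop prints
print(1 + 1e-16 == 1)    # True  -- rounds back to exactly 1.0

# Near 16.0 the spacing is 16 times larger, so the threshold
# moves up by roughly a factor of 16 as well.
print(16 + 1e-14 > 16)   # True
print(16 + 1e-15 == 16)  # True
```

This matches the observation in the question: changing the base from 1 to 16 moves the last printed value from 10^-15 to 10^-14.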