# Questions about the Python Language and Floating Point

- Does Python have floating point errors with its float number type?

- If it does, is there any switch or software option to turn these errors off, at the level of the language?

- I have heard that Python has arbitrary precision mathematics by default. How does that work?

Floating point rounding errors are unavoidable. That applies to every programming language with floats, including Python.

The problem is that real numbers can require an infinite number of digits (e.g. 1/3 = 0.3333 repeating forever, or π, or √2), while floating point numbers can store only a finite, fixed number of digits. In the case of Python, floats have a 53-bit significand, which corresponds to roughly 15-17 significant decimal digits.

For example:

```
>>> import math
>>> math.sqrt(2) ** 2  # Should be exactly 2.0
2.0000000000000004
```

No, you can’t just “turn off” these errors. They are an inescapable consequence of using floating point maths.
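What you can do is compare floats within a tolerance instead of exactly. A minimal sketch using the standard library's `math.isclose`:

```python
import math

# The square root of 2, squared, carries a tiny rounding error
a = math.sqrt(2) ** 2

print(a == 2.0)              # exact comparison fails
print(math.isclose(a, 2.0))  # comparison within a relative tolerance succeeds
```

`math.isclose` defaults to a relative tolerance of 1e-9, which absorbs the last-digit error shown above.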

Python integers (1, 2, 3, 4, 5, 6, 7…) are arbitrary precision. Many languages limit their integer types to a maximum of 9223372036854775807 (64-bit signed), 2147483647 (32-bit signed) or even 32767 (16-bit signed), but Python ints are arbitrary precision and can easily represent values greater than those.
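For example, a value one past the 64-bit signed maximum is perfectly ordinary in Python, with no overflow:

```python
# 2**63 - 1 is the 64-bit signed maximum; 2**63 already exceeds it
print(2 ** 63)   # 9223372036854775808
print(2 ** 100)  # far beyond any fixed machine word size
```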

Python floats are fixed precision 64-bit values the same as C doubles.
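You can inspect those fixed limits via `sys.float_info`. The values below assume a typical platform where Python's float is an IEEE 754 binary64 (C double), which is the case for CPython on all mainstream systems:

```python
import sys

print(sys.float_info.mant_dig)  # 53  -> bits in the significand
print(sys.float_info.dig)       # 15  -> decimal digits reliably representable
print(sys.float_info.max)       # largest finite float, about 1.8e308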


Python has integers (the `int` type) which are “big ints”, in that they
are arbitrarily sized. You can make `int` values as large as you like
(subject to memory limitations).

Python’s `float`s are IEEE floats. See the docs cited by Aivar Paalberg.

There are also standard-library modules providing a `Fraction` type and a
`Decimal` type with user-specifiable precision for special situations.
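A brief sketch of both: `fractions.Fraction` does exact rational arithmetic, and `decimal.Decimal` lets you set the working precision on its context:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Fractions are exact: no rounding error at all
print(Fraction(1, 3) * 3 == 1)  # True

# Decimal precision is user-settable (here, 50 significant digits)
getcontext().prec = 50
print(Decimal(1) / Decimal(3))
```

Note that `Decimal` still rounds (1/3 has no finite decimal expansion); it just rounds in base 10, at a precision you choose.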

Cheers,
Cameron Simpson cs@cskk.id.au