Add decimal128 as the builtin decimal()

IEEE 754 defined a number of binary floating point representations in 1985. Its 2008 revision (actually there’s a new 2019 revision, published last month) also defined some fixed-size decimal floating point representations. I want to propose that the largest of these, decimal128, become available as a builtin type (128 bits ought to be enough for anybody).

The advantages

  • decimal() makes more sense for beginners. int() is an integer and decimal() is a number with a decimal point.

  • just having decimal() is more ergonomic than having to do from decimal import Decimal and then pass strings to Decimal("123.4"), which is a bit ugly (and requires absolute beginners to learn about importing)

  • it removes a whole class of gotchas, bugs and edge-case code: 0.1 + 0.2 == 0.3 would be True (see the sketch just below)
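
To make that last bullet concrete, here is a small sketch of the status quo: the binary float gotcha, and the string-based Decimal construction criticized above as the current workaround.

```python
from decimal import Decimal

# The classic binary float gotcha this proposal wants to remove:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# Today's workaround: construct Decimals from strings, not floats.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Constructing from a float faithfully copies its binary rounding error:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```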

The disadvantages are of course fairly obvious

  • backwards compatibility - while most code probably does from decimal import Decimal, any code that does import decimal could have a problem.

  • type checks - while such code should’ve been using numbers.Number, any code that does isinstance(val, (int, float)) wouldn’t match the new type (see the sketch after this list)

  • speed - float64 is implemented in hardware basically everywhere, whereas decimal128 would mostly run in software, and cycles would be wasted calculating significant digits you might not care about.

  • memory - double the amount of memory required to store a float

  • there is already decimal.Decimal in the standard library, whose precision is configurable through an arithmetic context rather than fixed at 34 digits. Having two subtly different decimal types would cause confusion.

  • interop with libraries like numpy, etc. could be a problem, since they are built around hardware binary floats

  • decimal() is 2 more letters to type compared to float()
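
To illustrate the type-checking bullet above: today’s Decimal already shows the difference, and a hypothetical decimal() builtin would presumably behave the same way.

```python
import numbers
from decimal import Decimal

x = Decimal("1.5")
print(isinstance(x, (int, float)))    # False: hard-coded tuples miss Decimal
print(isinstance(x, numbers.Number))  # True: the ABC covers numeric types
```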

Alternatively, float() could be changed to be a decimal float instead, but I like decimal() more because the name makes more sense and it preserves backwards compatibility. Though the naming is a little confusing either way, since decimal128 is also a floating point format.

There’s an implementation of decimal128 from Intel here: https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library

I also think that floating point literals should become decimal floating point literals (in other words, typing 1.1 at the interpreter should return a decimal128 value), but I wouldn’t dare propose that.

Something similar was proposed on Python-Ideas a few years ago: a new decimal builtin type based on the same IBM standard as the decimal module, except using a fixed number of digits rather than supporting configurable precision.

You say: “it removes a whole class of gotchas, bugs and edge-case code” but I don’t think that will be true. It will remove a single gotcha: that some numbers we can write in decimal, like 0.1, cannot be represented exactly in binary floats. It might shift the troublesome cases away from “simple” base 10 numbers like 0.1 (which is great for working with currency!) but it doesn’t make them disappear altogether.

For example, even with 34 decimal digits, 3*(decimal(1)/3) != 1.
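
You can reproduce this today with the stdlib decimal module; a quick sketch using a 34-digit context to emulate decimal128’s precision:

```python
from decimal import Decimal, localcontext

# Emulate decimal128's 34 significant digits with the stdlib decimal module.
with localcontext() as ctx:
    ctx.prec = 34
    third = Decimal(1) / 3
    print(third)             # 0.3333333333333333333333333333333333
    print(3 * third == 1)    # False: the rounding error doesn't cancel
    # Binary floats happen to get this particular case right:
    print(3 * (1 / 3) == 1)  # True
```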

And for serious numeric work, binary floats are still the preferred choice, for both accuracy and speed.

Code that does “import decimal” is fine: the module will simply shadow the builtin in that namespace. Since such code won’t be using the builtin, it’s not a problem.
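
A hypothetical sketch of that shadowing, assuming a decimal() builtin existed:

```python
# Hypothetical: assumes a builtin decimal() alongside the stdlib module.
import decimal              # the global name "decimal" now refers to the
                            # module, shadowing the builtin in this namespace
x = decimal.Decimal("0.1")  # module access keeps working exactly as before
```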

I think that for performance, accuracy and backwards compatibility, the default floating point format needs to remain binary float.

A few questions:

  • should there be a literal for decimals? On Python-Ideas, I think we agreed that a “d” suffix would do the job: 0.1 gives a binary float, 0.1d a decimal.
  • how should arithmetic coercion apply with float + decimal, fraction + decimal, Decimal + decimal, etc.? (See the sketch after this list for what today’s Decimal does.)
  • does the math module need to support decimals too? (Today it silently converts Decimal arguments to binary float; see below.)
  • do we need a decimal version of complex too? (I expect not.)
  • there are two standard implementations; which should be used?
  • how does this compare with the IBM decimal specification used for Decimal with respect to speed, memory use and features?
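
For reference on the coercion and math questions, here is what today’s decimal.Decimal does; a sketch of the status quo, not of the proposed builtin:

```python
import math
from decimal import Decimal
from fractions import Fraction

# Coercion today: ints mix freely with Decimal; floats and Fractions do not.
print(Decimal("0.1") + 1)     # 1.1
try:
    Decimal("0.1") + 0.5
except TypeError as e:
    print(e)                  # unsupported operand type(s) for +: ...
try:
    Decimal("0.1") + Fraction(1, 2)
except TypeError as e:
    print(e)

# The math module silently converts Decimal arguments to binary float:
print(math.sqrt(Decimal(2)))  # 1.4142135623730951 (a plain float)
print(Decimal(2).sqrt())      # a Decimal, computed at context precision
```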