IEEE 754 defined a number of binary floating point representations in 1985. Its 2008 revision (and there's actually a new 2019 revision, published last month) also defined some fixed-size decimal floating point representations. I want to propose that the larger of these, decimal128, become available as a builtin type (128 bits ought to be enough for anybody).
The advantages:

- `decimal()` makes more sense for beginners: `int()` is a whole number and `decimal()` is a number with a decimal point.
- just having `decimal()` is more ergonomic than having to do `from decimal import Decimal` and then pass strings, as in `Decimal("123.4")`, which is a bit ugly (and requires absolute beginners to learn about importing).
- it removes a whole class of gotchas, bugs, and edge-case code: `.1 + .2 == .3` would be `True`.
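To make the gotcha concrete, here is what the interpreter does today versus what the stdlib `decimal` module already gives you (a small sketch; nothing here is new API):

```python
from decimal import Decimal

# Binary floats: 0.1 and 0.2 have no exact binary64 representation,
# so the sum picks up rounding error and the comparison fails.
binary_equal = (0.1 + 0.2 == 0.3)   # False today

# Decimal arithmetic behaves the way beginners expect, but you must
# import it and pass strings (passing the float literal instead would
# smuggle the binary rounding error back in).
decimal_equal = (Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```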
The disadvantages are, of course, fairly obvious:

- backwards compatibility: while most code probably does `from decimal import Decimal`, any code that does `import decimal` could have a problem.
- while they should've been using `numbers.Number`, any code that does `isinstance(val, (int, float))` wouldn't work.
- speed: float64 is implemented in hardware basically everywhere, and cycles would be wasted calculating significant digits you might not care about.
- memory: double the amount of memory required to store a float.
- there is already `decimal.Decimal` in the standard library, which lets you configure the precision of decimal arithmetic through a context. Having both would cause confusion.
- interop with libraries like numpy, etc. could be a problem for whatever reason.
- `decimal()` is two more letters to type compared to `float()`.
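Two of the points above can be demonstrated with the `decimal` module as it exists today (a sketch; none of this is part of the proposal itself):

```python
import numbers
from decimal import Decimal, getcontext

d = Decimal("123.4")

# Concrete-type checks miss Decimal, even though it registers as a
# number in the numeric tower:
type_check_passes = isinstance(d, (int, float))       # False
tower_check_passes = isinstance(d, numbers.Number)    # True

# decimal.Decimal precision is configurable through an arithmetic
# context, unlike a fixed-width decimal128 (34 significant digits),
# which is one source of the potential confusion:
getcontext().prec = 6
result = Decimal(1) / Decimal(7)  # rounded to 6 significant digits
```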
Or maybe `float()` could be changed to be a decimal float instead, but I like `decimal()` more because the name makes more sense and it preserves backwards compatibility. Though it would be confusing, because decimal128 is also a floating point format.
There's an implementation of decimal128 from Intel here: https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library
I also think that floating point literals should become decimal floating point literals (in other words, typing `1.1` at the interpreter should return a decimal128 value), but I wouldn't dare propose that.
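For reference, this sketch shows what the parser does with such a literal today; under the literal change floated above, `1.1` would instead compare equal to `Decimal("1.1")`:

```python
from decimal import Decimal

# Today the literal 1.1 is the nearest binary64 value. Converting that
# float to Decimal exposes the stored binary approximation, which is
# not the decimal number 1.1:
literals_match = (Decimal(1.1) == Decimal("1.1"))  # False today

# The string constructor, by contrast, captures the intended value:
intended = str(Decimal("1.1"))  # "1.1"
```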