That's what I understood. And that is why I mentioned a pair of reasons why having float match C++'s double is useful. In my opinion, more useful than not being surprised by 1e23 == 10 ** 23 being false.
I know some, not all, of the differences. The objection rests on the ways in which they are the same: the finite precision, and the adherence to IEEE 754 to whatever extent they each adhere.
And it doesn't matter which other language. I mentioned C++'s double and Python float because that is what I am using right now, and exactly for the purpose I mentioned.
Having unlimited precision is a fundamental change to the arithmetic. I wouldn't be able to use float for my purposes unless I had a way to switch it off. Gaining 1e23 == 10 ** 23 doesn't seem to be worth it.
The rules of arithmetic are completely different. For example, one would get back all sorts of properties, like (a + b) + c = a + (b + c), that floats don't have. Having this property or not is not good or bad on its own, of course; it depends on what you want to do. What would be bad, at least for me, is not having a type that behaves as finite precision does.
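The lost associativity is easy to demonstrate with today's float, nothing hypothetical needed:

```python
# Binary64 addition is not associative: rounding happens after each step,
# so the grouping of the operands changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6
print((a + b) + c == a + (b + c))   # False
```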
So, if you define a very large significand and exponent, you have de facto infinite precision.
Furthermore, Python floats do not adhere to IEEE 754. decimal does.
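For what it's worth, decimal does expose the finite-precision parameters directly: the context's prec is the precision p, and Emin/Emax bound the exponent. A small illustration:

```python
from decimal import Decimal, getcontext

# The decimal context makes the finite-precision knobs explicit:
# prec is the number of significant digits (p); Emin/Emax bound the exponent.
ctx = getcontext()
ctx.prec = 5
print(Decimal(1) / Decimal(3))   # 0.33333, rounded to p = 5 digits
```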
Franklin, it's a float, the rules are the same as for the other floats. I re-quote myself:
PS: I can't post links again… my posts are magically moved; I can sometimes post links, sometimes not; sometimes I can't post more than three posts in a thread, sometimes I can. This is a funny, unpredictable forum X-D
It is not only about the values that float can contain. Now define its behavior: how should +, -, /, *, and perhaps other operations like sqrt, behave?
It just isn't. The equation 2x = 2y has a different set of solutions, for example.
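One concrete instance of this under binary64, using overflow (underflow near zero gives analogous counterexamples):

```python
# In real arithmetic, 2x == 2y implies x == y. With a finite emax it does not,
# because distinct large values overflow to the same infinity.
x, y = 1e308, 9e307
print(2 * x == 2 * y)   # True: both products overflow to inf
print(x == y)           # False
```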
This is supposed to support "the rules are the same"? If the float transitions seamlessly from double-precision binary floating point to a boxed integer, it is already not behaving like finite-precision floating point. It doesn't serve me for doing finite-precision floating-point arithmetic, unless I can switch off that behavior (as your quote from Wikipedia says, "specifying p and emax").
The properties of floating-point arithmetic are not the same for different b, p, and emax, and none of those match a floating point that can seamlessly transition between different values of them. For example, almost all algebraic equations don't have the same sets of solutions.
You want to be able to have True for 1e23 == 10 ** 23, but that carries with it consequences that affect properties more important than this cosmetic improvement. See, for another example, that 1e5000, 1e5001, or any other exponent larger than the maximum exponent should be equal in finite precision. Would they be equal in your proposal? Again, I am OK with a separate type in which they are different, or with being able to switch off that behavior, but I need the type in which they are the same.
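To make the overflow half of this concrete, with today's float (nothing hypothetical):

```python
# With a finite emax, every decimal literal beyond the binary64 range
# collapses to the same value, inf, so these distinct literals compare equal.
print(1e5000)             # inf
print(1e5000 == 1e5001)   # True: both overflow to the same infinity
```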
What benefit would arise from this new "float"? What can I do with this 1e23 "float with integer precision" other than look at it and feel peaceful that it equals 10**23? What if I add an integer to it? Do I get back an int or a "float"? What if I subtract 0.1 from it? Will I get back the floating-point value 99999999999999991611392.0, or something else? What should I expect when I store the value 1e23 in a numpy array with dtype=double? Would numpy and others need to create a new dtype?
I don't think there are good answers to these questions, but I am willing to consider your responses.
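For reference, the behavior those questions are probing, as it stands in today's CPython:

```python
# 1e23 parses to the binary64 double nearest to 10**23, and Python's
# int/float comparison is exact, so the two values differ.
print(1e23 == 10**23)          # False
print(int(1e23))               # 99999999999999991611392, the nearest double
print(float(10**23) == 1e23)   # True: converting the int rounds to that same double
```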
Python floats are a thin wrapper around your platform's C double, or equivalent, which for all major platforms, and most minor ones, implements IEEE-754. There may be some obscure chips that don't implement IEEE-754, but I don't know of any that support Python.
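You can check the binary64 parameters of the platform double directly; these are the values every IEEE-754 platform reports:

```python
import sys

# The finite-precision parameters of the platform double, as Python reports them.
print(sys.float_info.radix)      # 2  (the base b)
print(sys.float_info.mant_dig)   # 53 bits of significand (p)
print(sys.float_info.max_exp)    # 1024; IEEE-754's emax is this minus one
```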
I'm sure you can guess the meaning. You know what a data structure is. You know what a hybrid is. Like a hybrid car, which uses both petrol and electric motors and swaps between them as needed.
You want a class which uses both a 64-bit Binary64 float and an arbitrary-sized int, and swaps between them as needed.
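Purely to make the idea concrete (HybridFloat is my own toy name and semantics, not anything actually proposed in the thread): a value that keeps the exact int when built from an int and collapses to binary64 otherwise. Note that it answers none of the hard questions about arithmetic, math-module functions, or dtypes.

```python
class HybridFloat:
    """Toy sketch: exact int when constructed from an int,
    ordinary binary64 float otherwise."""

    def __init__(self, value):
        # An int is kept exactly; anything else collapses to binary64.
        self._value = value if isinstance(value, int) else float(value)

    def __eq__(self, other):
        if isinstance(other, HybridFloat):
            other = other._value
        return self._value == other

    def __repr__(self):
        return f"HybridFloat({self._value!r})"


print(HybridFloat(10**23) == 10**23)   # True: the exact int was preserved
print(HybridFloat(1e23) == 10**23)     # False: a float literal already lost it
```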
Yes, I have understood, but I don't think you have understood. Programming is not magic. You can't just change the data structure and expect the implementations of functions to remain unchanged.
Every function in the math module expects floats to be Binary64 floating-point numbers, but under your scheme, sometimes they won't be. What happens when you call math.frexp(1e23)?
The frexp function is expecting a 64-bit data structure with an 11-bit exponent and a 52-bit (plus one implicit bit) significand. What do you expect it to do when it instead gets a 77-bit integer?
What happens when you pass 1e300 and frexp gets a 997-bit data structure instead of a 64-bit one? You can't just wave your hands and say "oh, but it is only an internal change". Internal changes still need internal implementation changes.
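For comparison, this is what frexp returns today for the binary64 value of 1e23: a mantissa in [0.5, 1.0) and a power-of-two exponent, exactly the decomposition the 64-bit layout supports.

```python
import math

# frexp splits a binary64 value into mantissa and power-of-two exponent.
m, e = math.frexp(1e23)
print(m, e)              # exponent is 77, since 2**76 < 1e23 < 2**77
print(m * 2**e == 1e23)  # True: the decomposition is exact
```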
What happens when you call float methods like float.hex() on one of your hybrid instances? Will (1e23).hex() return '0x1.52d02c7e14af6p+76' (as it should for a float) or '0x152d02c7e14af6800000' (as it should for the exact integer)?
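Both strings can be checked against current CPython, one per interpretation:

```python
# The float interpretation vs. the exact-integer interpretation of 1e23.
print((1e23).hex())   # '0x1.52d02c7e14af6p+76'  (the binary64 value)
print(hex(10**23))    # '0x152d02c7e14af6800000' (the exact integer)
```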
[quote]
This is FUD X-D More complex than an entire library?
[/quote]
Hard to say. It may not be more complex than the entire decimal library, but it will probably be more complex than the Decimal type itself.
I usually prototype an idea before I post it to this forum. Not a polished implementation, but something that can validate my thinking and be critiqued by others.
So you found the right and polite objection to my proposal.
I think the only real way to have 1e23 == 10**23 is for 1e23 to become an integer literal. This is probably the simplest and most logical solution, but it will never be accepted, since