`1e23 != 10 ** 23` now. Possible solution: PyFloatLongObject

Go implement it. Come back when you actually know how complicated your proposal really is.


I’m not proposing a new type. I’m proposing an internal specialized C object, completely transparent.

Is this what you really want to suggest to everyone who posts an idea? DIY? X-D

That’s what I understood. And that is why I mentioned a pair of reasons why having float matching C++'s double is useful. In my opinion, more useful than not being surprised by 1e23 == 10 ** 23 being false.

Franklin, do you know that PyFloat and C double are completely different? C, not C++, since it’s CPython, not CppPython.

I know some, not all, of the differences. My objection rests on the ways they are the same: the finite precision, and the adherence to IEEE 754 to whatever extent they do adhere.

And it doesn’t matter which other language. I mentioned C++'s double and Python float because that is what I am using right now and exactly for the purpose I mentioned.

Having unlimited precision is a fundamental change to the arithmetic. I wouldn’t be able to use float for my purposes unless I had a way to switch it off. Gaining 1e23 == 10 ** 23 doesn’t seem to be worth it.

Okay, this seems to me a sane conversation, finally.

And why should an infinite-precision internal type be a problem?

Which part of my proposal does not adhere with IEEE 754?

The unlimited precision.

The rules of arithmetic are completely different. For example, one would get back all sorts of properties, like (a + b) + c = a + (b + c), that floats don’t have. Having this property or not is not good or bad on its own, of course. It depends on what you want to do. What would be bad, at least for me, is not having the type that behaves as finite precision does.
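The loss of associativity is easy to demonstrate with today’s floats; with unlimited precision both sides would be equal:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
assert left != right  # binary64 addition is not associative
```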

I’m quoting Wikipedia:

So, if you define a very large significand and exponent, you have de facto infinite precision.

Furthermore, Python floats do not adhere to IEEE 754. decimal does.

Franklin, it’s a float, the rules are the same as the other floats. I re-quote myself:

PS: I can’t post links again… my posts are magically moved, I can sometimes post links, sometimes not, sometimes I can’t post more than three posts in a thread, sometimes I can. This is a funny, unpredictable forum X-D

It is not only about the values that a float can contain. Now define its behavior: how should +, -, /, *, and perhaps other operations like sqrt, behave?

It just isn’t. The equation 2x=2y has different sets of solutions, for example.

This is supposed to support “the rules are the same”? If the float transitions seamlessly from double-precision binary floating point to a boxed integer, it is already not behaving like a finite-precision floating point. It doesn’t serve me for doing finite-precision floating-point arithmetic, unless I can switch off that behavior (as your quote from Wikipedia says, “specifying p and emax”).

The properties of floating-point arithmetic are not the same for different b, p, emax, and they are not the same for a floating point that can seamlessly transition between different values of them. For example, almost all algebraic equations don’t have the same sets of solutions.

You want to be able to have True for 1e23 == 10 ** 23, but that carries with it consequences that affect properties more important than this cosmetic improvement. See, for another example, that 1e5000, 1e5001 or any other exponent larger than the maximum exponent should be equal in finite precision. Would they be equal in your proposal? Again, I am OK with a separate type in which they are different, or if I can switch off that behavior, but I need the type in which they are the same.
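Under today’s finite precision, that collapse at the top of the range is easy to check:

```python
# Any decimal literal beyond the binary64 range overflows to the same
# infinity, so all of these compare equal today:
assert 1e5000 == 1e5001 == float("inf")
```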

What benefit would arise from this new “float”? What can I do with this 1e23 “float with integer precision” other than look at it and feel peaceful that it equals 10**23? What if I add an integer to it? Do I get back an int or a “float”? What if I subtract 0.1 from it? Will I get back the floating point value of 99999999999999991611392.0 or something else? What should I expect when I store the value 1e23 in a numpy array with dtype=double? Would numpy and others need to create a new dtype?

I don’t think there are good answers to these questions, but am willing to consider your responses.
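For reference, the finite-precision facts behind these questions can be checked today (99999999999999991611392 is the nearest binary64 double to 10**23):

```python
assert 1e23 != 10 ** 23
assert int(1e23) == 99999999999999991611392

# 10**23 sits exactly halfway between two adjacent doubles (the gap there
# is 2**24), and round-to-even picks the lower neighbour:
assert 10 ** 23 - int(1e23) == 2 ** 23
```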

This is incorrect.

Python floats are a thin wrapper around your platform’s C double, or equivalent, which for all major platforms, and most minor ones, implements IEEE-754. There may be some obscure chips that don’t implement IEEE-754, but I don’t know of any that support Python.

According to the decimal documentation and the developers who made it, it implements IBM’s General Decimal Arithmetic Specification.

I’m sure you can guess the meaning. You know what a data structure is. You know what a hybrid is. Like a hybrid car, which uses both petrol and electric motors, and swaps between them as needed.

You want a class which uses both a 64-bit Binary64 float and an arbitrary sized int, and swaps between them as needed.
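A rough sketch of what such a hybrid might look like (all names here are invented for illustration; the proposal itself never specified these semantics):

```python
class HybridFloat:
    """Hypothetical hybrid: exact int when possible, binary64 otherwise."""

    def __init__(self, value):
        if isinstance(value, int):
            self._exact = value        # arbitrary-precision integer path
            self._dbl = None
        else:
            self._exact = None
            self._dbl = float(value)   # ordinary C-double path

    def __eq__(self, other):
        if self._exact is not None:
            return self._exact == other
        return self._dbl == other


x = HybridFloat(10 ** 23)  # what the literal 1e23 would now construct
assert x == 10 ** 23       # the desired equality holds...
# ...but what x + 0.1, math.frexp(x) or x.hex() should do is the open question.
```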

Yes, I have understood, but I don’t think you have understood. Programming is not magic. You can’t just change the data structure and expect the implementations of functions to remain unchanged.

Every function in the math module expects floats to be Binary64 floating point numbers, but under your scheme, sometimes they won’t be. What happens when you call math.frexp(1e23)?

The frexp function is expecting a 64-bit data structure with an 11-bit exponent and a 52-bit (plus one implicit bit) significand. What do you expect it to do when it instead gets a 77-bit integer?

What happens when you pass 1e300 and frexp gets a 997-bit data structure instead of a 64-bit one? You can’t just wave your hands and say “oh, but it is only an internal change”. Internal changes still need internal implementation changes.
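Both sizes are easy to verify (frexp works today precisely because a float is always 64 bits):

```python
import math

m, e = math.frexp(1e23)           # fine today: 1e23 is an ordinary double
assert e == 77 and 0.5 <= m < 1.0

# The exact integers a hybrid would hand to frexp instead:
assert (10 ** 23).bit_length() == 77
assert (10 ** 300).bit_length() == 997
```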

What happens when you call float methods like float.hex() on one of your hybrid instances? Will (1e23).hex() return ‘0x1.52d02c7e14af6p+76’ (as it should for a float) or ‘0x152d02c7e14af6800000’ (as it should for the exact integer)?
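Both hex strings quoted above can be verified in current Python:

```python
# The binary64 view of 1e23 versus the exact integer 10**23:
assert (1e23).hex() == "0x1.52d02c7e14af6p+76"
assert hex(10 ** 23) == "0x152d02c7e14af6800000"
```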


This is FUD X-D. More complex than an entire library?


Hard to say. It may not be more complex than the entire decimal library, but it will probably be more complex than the Decimal type itself.


I usually prototype an idea before I post it to this forum. Not a polished implementation, but something that can validate my thinking and be critiqued by others.


Well, finally you’ve found the problem with my idea.

The problem is I thought that PyFloat was a reimplementation of double, with a mantissa, an exponent and a sign, while it’s simply:

typedef struct {
    PyObject_HEAD
    double ob_fval;
} PyFloatObject;
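That ob_fval is a plain IEEE-754 binary64 value can also be seen from pure Python (illustration only):

```python
import struct

# A Python float boxes exactly 8 bytes of IEEE-754 binary64 data:
raw = struct.pack("<d", 1e23)
assert len(raw) == 8
assert struct.unpack("<d", raw)[0] == 1e23
```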

So you found the right and polite objection to my proposal.

I think the only real way to have 1e23 == 10**23 is for 1e23 to become an integer literal. This is probably the simplest and most logical solution, but it will never be accepted, since

  1. Python would have to change its grammar
  2. there could be regressions
  3. it’s complicated to parse
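The parsing point is visible with the ast module: the tokenizer already commits 1e23 to a float before any arithmetic runs (illustrative only):

```python
import ast

node = ast.parse("1e23", mode="eval").body  # an ast.Constant
assert isinstance(node.value, float)        # already a binary64 float
assert node.value != 10 ** 23               # precision is lost at parse time
```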

In other words, this entire thread could have been prevented if you’d just done your research before posting?
