Go implement it. Come back when you actually know how complicated your proposal really is.

I'm not proposing a new type. I'm proposing an *internal* specialized C object, completely transparent.

Is this what you really want to suggest to everyone who posts an idea? DIY? X-D

That's what I understood. And that is why I mentioned a couple of reasons why having float match C++'s `double` is useful. In my opinion, more useful than not being surprised by `1e23 == 10 ** 23` being false.
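For concreteness, this is the surprise the whole thread is about: the literal `1e23` is the nearest IEEE 754 double to 10**23, not the exact integer, so the comparison is false.

```python
# The float literal 1e23 is the nearest Binary64 double to 10**23,
# which is not exactly 10**23.
assert (1e23 == 10 ** 23) is False

# The nearest double falls short of the exact power of ten by 2**23:
assert int(1e23) == 99999999999999991611392
assert 10 ** 23 - int(1e23) == 2 ** 23
```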

Franklin, do you know that PyFloat and C double are completely different? C, not C++, since it's CPython, not CppPython.

I know some, not all, of the differences. The objection rests on the ways that they are the same: the finite precision, the adherence to IEEE 754 to whatever extent each of them has it.

And it doesn't matter which other language. I mentioned C++'s double and Python's float because that is what I am using right now, and exactly for the purpose I mentioned.

Having unlimited precision is a fundamental change to the arithmetic. I wouldn't be able to use float for my purposes unless I had a way to switch it off. Gaining `1e23 == 10 ** 23` doesn't seem to be worth it.

Okay, this seems to me a sane conversation, finally.

And why should an infinite-precision internal type be a problem?

Which part of my proposal does not adhere to IEEE 754?

The unlimited precision.

The rules of arithmetic are completely different. For example, one would get back all sorts of properties, like (a + b) + c = a + (b + c), that `float` doesn't have. Having or not having this property is not good or bad on its own, of course. It depends on what you want to do. What would be bad, at least for me, is not having a type that behaves as finite precision does.
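The non-associativity mentioned above is easy to demonstrate with the classic decimal-fraction example:

```python
# Finite-precision floats are not associative: the rounding of each
# intermediate sum depends on the order of operations.
lhs = (0.1 + 0.2) + 0.3   # 0.1 + 0.2 rounds to 0.30000000000000004
rhs = 0.1 + (0.2 + 0.3)   # 0.2 + 0.3 rounds to exactly 0.5

assert lhs != rhs
assert rhs == 0.6
```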

I'm quoting Wikipedia:

The standard specifies optional extended and extendable precision formats, which provide greater precision than the basic formats. An extended precision format extends a basic format by using more precision and more exponent range. An extendable precision format allows the user to specify the precision and exponent range. An implementation may use whatever internal representation it chooses for such formats; all that needs to be defined are its parameters (b, p, and emax). These parameters uniquely describe the set of finite numbers (combinations of sign, significand, and exponent for the given radix) that it can represent.

The standard recommends that language standards provide a method of specifying p and emax for each supported base b. The standard recommends that language standards and implementations support an extended format which has a greater precision than the largest basic format supported for each radix b. For an extended format with a precision between two basic formats the exponent range must be as great as that of the next wider basic format. So for instance a 64-bit extended precision binary number must have an "emax" of at least 16383. The 80-bit extended format meets this requirement.

So, if you define a very large significand and exponent, you have de facto infinite precision.

Furthermore, Python floats do not adhere to IEEE 754. `decimal` does.

The rules of arithmetic are completely different.

Franklin, it's a float; the rules are the same as for other floats. I re-quote myself:

This integer is "boxed", or "proxied", inside a float (a new internal type, to not augment the size of ALL floats), so it will act as a float and it will have all the trouble of a float with operations.

PS: I can't post links again… my posts are magically moved; I can sometimes post links, sometimes not; sometimes I can't post more than three posts in a thread, sometimes I can. This is a funny, unpredictable forum X-D

This integer is "boxed", or "proxied", inside a float (a new internal type, to not augment the size of ALL floats), so it will act as a float and it will have all the trouble of a float with operations.

It is not only about the values that a float can contain. Now define its behavior: how should +, -, /, *, and perhaps other operations like `sqrt` behave?

So, if you define a very large significand and exponent, you have de facto infinite precision.

It just isn't. The equation 2x = 2y has different sets of solutions, for example.

This integer is "boxed", or "proxied", inside a float (a new internal type, to not augment the size of ALL floats), so it will act as a float and it will have all the trouble of a float with operations.

This is supposed to support "the rules are the same"? If the float transitions seamlessly from double-precision floating-point binary to the boxed integer, it is already not behaving like finite-precision floating point. It doesn't serve me for doing finite-precision floating-point arithmetic, unless I can switch off that behavior (like your quote from Wikipedia says, "specifying p and emax").

The properties of floating-point arithmetic are not the same for different b, p, and emax, and none of these match a floating point that can seamlessly transition between different values of them. For example, almost all algebraic equations don't have the same sets of solutions.

You want to have `True` for `1e23 == 10 ** 23`, but that carries with it consequences that affect properties more important to have than this cosmetic improvement. See for another example that `1e5000`, `1e5001`, or any other literal with an exponent larger than the maximum exponent should be equal in finite precision. Would they be equal in your proposal? Again, I am OK with a separate type in which they are different, or if I can switch off that behavior, but I need the type in which they are the same.
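The overflow behavior referred to here is easy to check in today's CPython: any float literal beyond the double range evaluates to infinity, so all such literals compare equal.

```python
# Literals beyond Binary64's maximum exponent all overflow to +inf,
# so finite precision makes them indistinguishable:
assert 1e5000 == 1e5001
assert 1e5000 == float('inf')

# The exact integers, of course, are all different:
assert 10 ** 5000 != 10 ** 5001
```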

What benefit would arise from this new "float"? What can I do with this 1e23 "float with integer precision" other than look at it and feel peaceful that it equals 10**23? What if I add an integer to it? Do I get back an int or a "float"? What if I subtract `0.1` from it? Will I get back the floating-point value of `99999999999999991611392.0` or something else? What should I expect when I store the value 1e23 in a numpy array with dtype=double? Would numpy and others need to create a new dtype?
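For reference, here is what ordinary Binary64 arithmetic does with the subtraction mentioned above (this shows current `float` behavior, not an answer on behalf of the proposal):

```python
# The double nearest to 10**23 is exactly this integer:
assert int(1e23) == 99999999999999991611392

# One ulp at this magnitude is 2**24 (about 1.7e7), so subtracting 0.1
# rounds straight back to the same double:
assert 1e23 - 0.1 == 1e23
```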

I don't think there are good answers to these questions, but I am willing to consider your responses.

Python floats do not adhere to IEEE 754. `decimal` does.

This is incorrect.

Python floats are a thin wrapper around your platform's C double, or equivalent, which for all major platforms, and most minor ones, implements IEEE-754. There may be some obscure chips that don't implement IEEE-754, but I don't know of any that support Python.

According to the `decimal` documentation and the developers who made it, it implements IBM's General Decimal Arithmetic Specification.

Please define âhybrid data structureâ.

I'm sure you can guess the meaning. You know what a data structure is. You know what a hybrid is. Like a hybrid car, which uses both petrol and electric motors, and swaps between them as needed.

You want a class which uses both a 64-bit Binary64 float and an arbitrary sized int, and swaps between them as needed.
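To make the description concrete, here is a hypothetical Python sketch of such a hybrid. The class name, attributes, and dispatch logic are all my own invention for illustration; the actual proposal is an internal C object, not Python code.

```python
class HybridFloat:
    """Hypothetical sketch: stores an exact int when possible,
    an ordinary Binary64 double otherwise, and swaps as needed."""

    def __init__(self, value):
        if isinstance(value, int):
            self._exact = value          # arbitrary-precision path
            self._double = None
        else:
            self._exact = None
            self._double = float(value)  # ordinary Binary64 path

    def __float__(self):
        return float(self._exact) if self._exact is not None else self._double

    def __eq__(self, other):
        # Exact comparison when we hold an integer and compare to an int,
        # which is what makes 1e23 == 10**23 come out True.
        if self._exact is not None and isinstance(other, int):
            return self._exact == other
        return float(self) == float(other)
```

Even this toy version shows the cost: every operation (`__eq__` here, and `+`, `-`, `*`, `/`, `sqrt`, … in a real version) needs per-representation dispatch logic.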

Steven D'Aprano: and to reimplement all the floating point functions to support this new hybrid.

Hey, aren't you being a bit drastic? X-D Have you understood that the new object is only internal, it will be completely transparent to the end user, and it's always a float?

Yes, I have understood, but I don't think *you* have understood. Programming is not magic. You can't just change the data structure and expect the implementations of functions to remain unchanged.

Every function in the `math` module expects floats to be Binary64 floating-point numbers, but under your scheme, sometimes they won't be. What happens when you call `math.frexp(1e23)`?

The frexp function is expecting a 64-bit data structure with an 11-bit exponent and a 52-bit (plus one implicit bit) significand. What do you expect it to do when it instead gets a 77-bit integer?

What happens when you pass 1e300 and frexp gets a 364 bit data structure instead of a 64 bit one? You can't just wave your hands and say "oh, but it is only an internal change". Internal changes still need internal implementation changes.
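For comparison, this is what `math.frexp` does with today's Binary64 `1e23`: it decomposes the double into a mantissa in [0.5, 1) and a power-of-two exponent, which is only well-defined because the input is a 64-bit double.

```python
import math

# frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1.
m, e = math.frexp(1e23)

assert 0.5 <= m < 1.0
assert e == 77               # since 2**76 < 1e23 < 2**77
assert m * 2 ** e == 1e23    # the decomposition is exact
```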

What happens when you call float methods like `float.hex()` on one of your hybrid instances? Will `(1e23).hex()` return `'0x1.52d02c7e14af6p+76'` (as it should for a float) or `'0x152d02c7e14af6800000'` (as it should for the exact integer)?
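Both candidate answers from the question can be checked on current CPython, where the two representations visibly disagree in the trailing bits:

```python
# The double's hex form (normalized mantissa times a power of two):
assert (1e23).hex() == '0x1.52d02c7e14af6p+76'

# The exact integer's hex form:
assert hex(10 ** 23) == '0x152d02c7e14af6800000'

# They differ because the nearest double falls 2**23 short of 10**23:
assert 10 ** 23 - int(1e23) == 2 ** 23
```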

[quote]
This is FUD X-D More complex than an entire library?
[/quote]

Hard to say. It may not be more complex than the entire `decimal` library, but it will probably be more complex than the `Decimal` type itself.

I usually prototype an idea before I post it to this forum. Not a polished implementation, but something that can validate my thinking and be critiqued by others.

Python floats are a thin wrapper around your platform's C double

Well, finally you've found the problem with my idea.

The problem is that I thought PyFloat was a reimplementation of `double`, with a mantissa, an exponent, and a sign, while it's simply:

```c
typedef struct {
    PyObject_HEAD
    double ob_fval;
} PyFloatObject;
```

So you found the right and polite objection to my proposal.

I think the only real way to have `1e23 == 10**23` is for 1e23 to become an integer literal. This is probably the simplest and most logical solution, but it will never be accepted, since

- Python would have to change the grammar
- there could be regressions
- it's complicated to parse

The problem is that I thought PyFloat was a reimplementation of `double`, with a mantissa, an exponent, and a sign, while it's simply: `typedef struct { PyObject_HEAD double ob_fval; } PyFloatObject;`

So you found the right and polite objection to my proposal.

In other words, this entire thread could have been prevented if you'd just done your research before posting?