I moved this topic to Python Help because it is not about enhancing Python. There is nothing here that could make Python better. The only problem is educational. The OP should learn about floating point numbers, and how they are different from integer numbers, numbers with finite decimal representations, and mathematical real numbers. I encourage other users not to write comments that do not have an educational purpose.
Sorry for not clarifying this straight away. I was exhausted.
What I suggested is simply to create a “boxed” integer instead of a float for float literals that the parser can easily recognize as integers, like 1e23 or 100000000000000000000000.0. This integer is “boxed”, or “proxied”, inside a float (a new internal type, so as not to increase the size of ALL floats), so it will act as a float and will have all the trouble of a float with operations. So this is NOT decimal.
This way, 1e23 will remain a float, but internally it will be represented by a PyLong.
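For context, this is the current CPython behavior the proposal is reacting to (a small illustrative snippet, not part of the proposal itself):

```python
# Current CPython: 1e23 is a binary double, which cannot represent
# 10**23 exactly; the nearest double is a different integer.
x = 1e23
print(x == 10 ** 23)         # False
print(int(x))                # 99999999999999991611392
print(float(10 ** 23) == x)  # True: 10**23 rounds to the same double
```

Under the proposal, the literal `1e23` would instead carry the exact integer internally, so the first comparison would become True.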
About my dignity
Well, I wanted to be a little ironic, but it seems a serious discussion is more appropriate.
This is a harsh statement without any explanation. And personally, I find it offensive, if I may say so.
Since any discussion with Serhiy seems useless, can someone explain to me why my idea is so bad?
Not a core dev, but I think adding a brand new literal type with unfamiliar semantics should in general face an uphill climb. It’s added complexity to reason about the behavior of programs (1e23 looks like a float, but acts like an int sometimes) and has the potential to change the behavior of existing programs. If I have an application where I want to analyze floating point error and not simply live with it, a new type whose behavior is determined by context would be an unwelcome addition. (If that description is incorrect, consider that I have read two threads and that’s the best I could make of it.)
Floats and ints are broadly understood, and there are perfectly good ways to get integer values of 1e23 that will not surprise anybody who is used to working with floats and ints.
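To make that concrete, a few of the existing ways to get the exact integer value (a minimal sketch; the string-building variant is just one arbitrary example):

```python
from decimal import Decimal

n = 10 ** 23              # exact int arithmetic
m = int("1" + "0" * 23)   # build the literal as a string
d = int(Decimal("1e23"))  # Decimal parses the "1e23" notation exactly
print(n == m == d)        # True
```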
Both this thread and the original seemed to have a hypothetical scientist who needs to be protected from learning floating point error as the motivating use case, and personally I don’t find it compelling. I think education about how to construct large integers would be more valuable.
On a meta-level, I suspect that people are annoyed at how the original thread kept going after it became clear that there is no appetite among core devs to change the language in such a significant way, and now you have started another thread on the same topic. I don’t know about others, but I would hope that an Ideas thread that keeps getting longer would be a sign of broad interest and increasing precision about what the proposal would entail. Instead, the thread was driven by a couple people repeating arguments with perhaps minor variation. I can see how moderators would be frustrated with this situation.
We have already spent days of elapsed time and many man-hours of effort explaining why this suggestion is not workable. We’ve given you an alternative which will do what you want and exists today, but you refuse to use it. (Decimal.)
You have dismissed Decimal because you think it is too complex. To get the result you want from floats, we would need a hybrid data structure and to reimplement all the floating point functions to support this new hybrid. This will surely be more complex than Decimal, much more likely to contain bugs, and with the severe risk of surprising corner cases where this hybrid numeric type behaves in ways even more surprising than floats.
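For the record, a minimal sketch of what Decimal already gives today, without any new hybrid type:

```python
from decimal import Decimal

# Decimal stores the value exactly as written, so the comparison
# the proposal wants already holds:
print(Decimal("1e23") == 10 ** 23)  # True
print(int(Decimal("1e23")))         # 100000000000000000000000
```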
If you disagree, if you think it is so easy, go ahead and prove us wrong by implementing it. Just don’t expect other people to do it for you when they have no interest in this proposal, are sure it will be hard to do right, won’t do what you expect, and there already exists a solution that solves your problem better.
I have not suggested to add a new literal type. Read below.
It’s a float. What is unfamiliar about it? Read below.
You have not understood the proposal. I’m proposing a float that acts as a float, but has infinite precision because its value internally is a PyLong. Read below.
This is true; it can improve them, since then 1e23 == 10**23 and not 1e23 == 99999999999999991611392. Read below.
I think you completely misunderstood my proposal. There’s no context and there’s no new type. I’m proposing an internal structure, not exposed, that will be created for float literals that are big integers. I quote myself, please read carefully:
If so, you should program in C, not in Python.
Well, if so, they could simply ignore the thread, not move it to Help.
Because my ideas are clearer now, but evidently no one has seriously read my post.
Yes, if it remains in Idea section and it’s not moved to Help X-D
A new type, separate from float with that property, sure, decimal, mpmath or anyone’s own new class can do that. But having float matching the behavior of finite precision floating point arithmetic is useful. Two ways that come to mind are for interoperability with other languages, or when using Python for testing or reproducing computations done by software in other languages.
The “surprise” of seeing that 1e23 == 10 ** 23 is False can be solved by simply learning that 1e23 is a float, 10 and 23 are int and that ** on int outputs int.
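A small sketch of that point (note that Python actually compares the exact values of mixed int/float operands; the result is False because the double nearest to 10**23 stores 99999999999999991611392, not because of a lossy conversion):

```python
# The comparison mixes types: the literal 1e23 is a float,
# while 10 ** 23 is exact int arithmetic.
print(type(1e23))        # <class 'float'>
print(type(10 ** 23))    # <class 'int'>
print(1e23 == 10 ** 23)  # False
```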
That’s what I understood. And that is why I mentioned a pair of reasons why having float matching C++'s double is useful. In my opinion, more useful than not being surprised by 1e23 == 10 ** 23 being false.
I know some, not all, of the differences. The objection rests on the ways in which they are the same: the finite precision, the adherence to IEEE 754 to whatever extent they do adhere.
And it doesn’t matter which other language. I mentioned C++'s double and Python float because that is what I am using right now and exactly for the purpose I mentioned.
Having unlimited precision is a fundamental change to the arithmetic. I wouldn’t be able to use float for my purposes unless I had a way to switch it off. Gaining 1e23 == 10 ** 23 doesn’t seem to be worth it.
The rules of arithmetic are completely different. For example, one would get back all sorts of properties like (a + b) + c = a + (b + c) that floats don’t have. Having this property or not is not good or bad on its own, of course. It depends on what you want to do. What would be bad, at least for me, is not having a type that behaves as finite precision does.
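A minimal sketch of that non-associativity with today’s floats (the classic 0.1/0.2/0.3 example):

```python
# Floating point addition is not associative: grouping changes
# which rounding errors accumulate.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False
```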
So, if you allow a very large significand and exponent, you have de facto infinite precision.
Furthermore, Python floats do not adhere to IEEE 754; decimal does.
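As an illustration of choosing a large precision explicitly, here is a small decimal sketch (the 50-digit precision is an arbitrary choice of mine, not anything mandated):

```python
from decimal import Decimal, getcontext

# decimal lets you set the working precision per context;
# here we ask for 50 significant digits.
getcontext().prec = 50
x = Decimal(1) / Decimal(3)
print(x)  # 0. followed by fifty 3s
```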
Franklin, it’s a float, the rules are the same as the other floats. I re-quote myself:
PS: I can’t post links again… my posts are magically moved; I can sometimes post links, sometimes not; sometimes I can’t post more than three posts in a thread, sometimes I can. This is a funny, unpredictable forum X-D