Something as compatibility-breaking as replacing float with decimal requires a demonstration of enough value to make it worthwhile.
A ChatGPT-supported tantrum is no such demonstration.
As the Zen of Python puts it, practicality beats purity here.
Do not worry about ChatGPT, if that's the thing you care about. The real question is: do you care about the future, or do you care about being infinitely stuck in old ways that are no longer relevant and probably should never have been relevant? As hardware progresses, these things should be nothing more than a historical note about trying to squeeze something out of nothing.
The average code should be simple and straightforward. Floating point, most of the time, confuses, deceives, and allows for errors and mistakes. Do we want that? I do not. I do not want the next robot to suddenly kick me in the leg, or some radioactive disaster to suddenly appear next to my door. The fewer specification papers there are to read and the more intuitive things are, the better the quality and the fewer the accidents.
And we do not want accidents, even if it's Python; as far as I know, Python is getting everywhere, and it brings floating point along with it.
Just because some gamer scientist wants to make his game or simulation faster for research doesn't mean we all have to suddenly ignore safety for all the general projects.
I don't get why everyone is so naive about it. It's like every single programmer is still living in their comfortable 1990 and doesn't care about anything but their own thing… until something happens.
And that's how you see headlines in the news, "data breach here, data breach there, malfunction there," all the time, every day. What kind of insanity is this?
I don’t think floating point, the way most people use it, is confusing or unintuitive. Sure, there are some funny cases like 0.1 + 0.2 != 0.3, but in practice those cases have never mattered for the kind of computing I do. If you know you need something different, you’re likely a specialist who knows to avoid floating point and knows what library to use instead.
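For readers following along, the funny case above takes one REPL session to reproduce, and math.isclose is the stock answer for the rare times it does matter:

>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True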
Okay. So, let’s assume you switch all your code over to be decimal.Decimal. Tell me, is this code safe, or do you have something that confuses, deceives, and allows for mistakes?
def find_square_root(n, epsilon):
    # Babylonian / Newton iteration: repeatedly average the guess with n / guess.
    last_guess = 1
    while "not close enough":  # truthy string: loop until one of the returns fires
        next_guess = n / last_guess
        avg = (last_guess + next_guess) / 2
        if avg == last_guess or avg == next_guess: return avg
        if abs(last_guess - next_guess) < epsilon: return avg
        last_guess = avg
Is there any way that this could get stuck in an infinite loop? Mathematically, this should always be safe. The last guess and the next guess MUST surround the true square root, and the average of them MUST be between them. Are those two facts also guaranteed with decimal.Decimal?
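One way to probe that question empirically (the inputs here are only illustrative) is to call the same function with both numeric types and watch whether and where each call terminates:

from decimal import Decimal

print(find_square_root(2.0, 1e-9))                     # binary float inputs
print(find_square_root(Decimal(2), Decimal("1e-9")))   # decimal floating-point inputs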
It’s easy to want magic. But you won’t get true real number calculation with infinite precision; and even asking for arbitrary precision (like with Python’s integers), you’ll quickly run into problems where even the very simplest calculations take insanely long to perform.
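As a sketch of that blow-up (using fractions.Fraction as one way to get exact arbitrary-precision arithmetic), each exact Newton step for sqrt(2) roughly doubles the size of the result:

from fractions import Fraction

x = Fraction(1)
for step in range(1, 9):
    x = (x + 2 / x) / 2   # one exact Newton step toward sqrt(2)
    print(step, len(str(x.numerator)), "digits in the numerator")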
Everyone loves to whine about IEEE binary floating-point. Getting ChatGPT to help you whine about it doesn’t change anything. Do some research into the alternatives, and maybe you’ll see that floats are actually the best choice for most applications.
Of course, you are welcome to use decimal.Decimal in your code everywhere, if you prefer its tradeoffs.
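For anyone weighing those tradeoffs, a small sketch of what Decimal does and does not buy you:

from decimal import Decimal

print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True: decimal values built from strings compare exactly
print(Decimal(1) / Decimal(3) * 3 == 1)                    # False: division still rounds to the context precision
print(Decimal(0.1))   # constructing from a float preserves the binary artifact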
Alright, so we should just trust humans to not make errors.
Please, don’t edit your posts to add material like this. Make a separate post.
Can you explain please how you think that IEEE floating point causes data breaches?
This question sounds like we are waiting for it to happen, and no, I’m not here to talk about this specific case. I’m a human being with limited time and energy, and things like these seem obvious. Since you leave it all to humans to deal with, and humans are not machines, it’s obvious that floating point can contribute to a data breach, or to anything else where it is used without assistance or direct knowledge, which can happen to the best of us. Decimal is clearly more widely used across domains and is more trustworthy than a cheat/hack like floating point that is specific to computer science, which, again, shouldn’t even be here today in 2025, even if it gives you performance at the expense of accuracy.
I’m pretty sure that if future people read this thread, they will have a really hard time believing that we used floating point as a standard for so many years. It will boggle their minds how we are allowing this to continue, and for how long we have been stuck with this niche format, which has so much potential to be misinterpreted and to cause serious damage in the long run.
So what’s your idea? Do you want Python to stop using floating point? What would you replace it with? Give us something more than a rant about floating point.
Rants are important, and most of what I’m saying is already in decimal — Decimal fixed-point and floating-point arithmetic — Python 3.13.2 documentation.
The point is, it is not recognized as part of the language.
It should be made part of the language and become the default, replacing floating point and leaving it as a legacy option for specific projects and operations, so that standard projects become more predictable, accurate, and less error-prone. Yes, everyone talks about legacy projects. But what about actual progress? Do we just sit here forever with broken systems created since roughly 1990, or do we want things to actually be technically good and simple? I think we are reaching a point where legacy systems will become less and less relevant, since they are unmaintainable and, most likely, very poorly done, even the “best” or most popular projects known today, like the Linux kernel, which is surely not going to last forever and was never considered a serious project. I’m pretty sure that in 50 years nobody will even know about the Linux kernel, and you are worrying about some Python legacy projects instead of progress that would profoundly improve things overall.
If we can’t do this (decimal) in Python, that just means this is where Python ends, and the good idea is to move to a language that actually cares about progress, the future, and safety, leaving the legacy to the legacy people. Maybe that’s how languages become irrelevant.
After all, Python is a tool used to implement other domains’ logic, and floating point just doesn’t translate that logic smoothly.
Then why did you say this:
Are these empty words, or do you have a reason for specifically saying “data breach”? Justify your words.
Such that people don’t waste too much time: this type of comment is one of the telltale signs of a crank. “Everyone is wrong”, without any real argumentation.
It also qualifies well-studied algebraic structures, like finite-precision floating point, with mere adjectives, while ignoring that the problems are not only about speed but are more fundamental, like decidability. Opinions, without knowing the theory first.
And then there is the messiah complex.
And there is the non-messiah complex.
Good answer.
The underlying concern is that floating point as the default data type is an outdated practice that has persisted due to historical hardware limitations and that it should no longer be the standard, particularly in languages like Python, where readability and clarity are prioritized.
well - there may be a point here.
While changing the fundamental float as a wrapped IEEE 754 number would break too much (to the point that the first replies here, and even my first gut feeling, were just to dismiss the idea altogether), maybe Python could leverage the fact that floats are wrapped as objects and provide some mechanism to at least diminish the strange rounding side effects.
We have math.isclose - but what if we could have a context for floating points which could, for example, make “0.1 + 0.2 == 0.3” be True out of the box?
That should not be hard to do, and could be done as a “__future__” feature, and of course be turned off. It would require all (or most) native-code float operations to check whether there is an active context before proceeding, but other than that, making the comparison operators use such a context wouldn’t be hard, nor would it have a significant impact on performance, given that float-heavy numeric calculation in pure Python is something that should be delegated to specialized extension code anyway (as is commonly done).
A contextvar-based sys.float_context would be the natural place to host such a context.
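A minimal pure-Python sketch of the idea, purely as an assumption of what the API could look like (the name float_context, the tolerance semantics, and the float_eq helper are hypothetical, not an existing API; real support would need the interpreter to consult the context inside float's own comparison):

import contextvars
import math

# Hypothetical context: None means "exact IEEE 754 comparison"; a number means
# "treat floats within this relative tolerance as equal".
float_context = contextvars.ContextVar("float_context", default=None)

def float_eq(a, b):
    # Stand-in for the hook that float's comparison would need in the interpreter.
    tol = float_context.get()
    if tol is None:
        return a == b
    return math.isclose(a, b, rel_tol=tol)

print(float_eq(0.1 + 0.2, 0.3))        # False: today's behaviour

token = float_context.set(1e-9)        # "opting in" to the tolerant context
try:
    print(float_eq(0.1 + 0.2, 0.3))    # True under the active context
finally:
    float_context.reset(token)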
In my opinion, that is the exact opposite of what should happen. It is a good thing that people face the features of finite-precision floating point and are forced to learn about them.
Features that hide it should not be on by default. They should be opted into, with an understanding of what is being opted into.
You complain about floating-point, but
The decimal module provides […] floating-point
The key issue isn’t that floating-point exists—it’s that it’s the default in general-purpose programming when it often shouldn’t be.
The decimal module does provide a floating-point-like system, but it prioritizes exactness over raw speed. Unlike IEEE 754 binary floating-point, decimal avoids the common decimal-fraction pitfalls: Decimal("0.1") + Decimal("0.2") really does equal Decimal("0.3").
The problem isn’t the concept of floating-point itself, but rather the binary floating-point implementation (IEEE 754) being the standard choice everywhere—even in places where accuracy should matter more than performance.
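To put a rough number on “exactness over raw speed”, here is a quick, machine-dependent comparison, offered only as a sketch of the tradeoff:

import timeit
from decimal import Decimal

# Same tiny expression with binary floats and with Decimal; absolute times vary
# by machine, but the ratio illustrates the speed side of the tradeoff.
print(timeit.timeit("x * y + x", globals={"x": 0.1, "y": 0.2}))
print(timeit.timeit("x * y + x", globals={"x": Decimal("0.1"), "y": Decimal("0.2")}))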
Starting development from an incorrect state is not the right way to begin: the habits of everyday code carry over into the not-so-everyday code. Do you want to start on unstable ground or stable ground when learning or developing? If Python and JavaScript had made decimal the default instead of binary floats, a lot of subtle bugs and unexpected behaviors in everyday code could have been avoided. The fact that you have to explicitly import decimal is proof that the language designers treated correctness as an optional concern, when it should have been the baseline.
That is a good opinion - but I hope you realize it is not what Python is, or at least used to be, about. Having a friendly language and environment that lets a larger audience code, creating abstractions that conform to the way humans express their thinking and ideas, is what made Python what it is today.
Hurting people with floating-point idiosyncrasies because “that’s life” is no better than doing the same with, say, text encoding - which, I may remind you, was so meaningful to Python that it was one of the main reasons the breaking changes in Python 3.0 were allowed to be breaking.
And it is not that people are not hurt by encoding errors nowadays - but life sure got a lot simpler when dealing with text.
Nonetheless, the idea of having a floating-point context does not preclude it being “opt in” - in my post I even mention that it should come as a __future__ import at first.
It’s always a “cheat” when you wish to represent irrationals by some number system.
Precision will be lost. People over time have taken many things into account and found floating point to be useful; arguably more useful than Decimal - which remember cannot represent irrationals either.
There is information in the pics you quote where the person explains that floating point is a tradeoff between speed and precision. Decimal is also a tradeoff between speed and precision, and I think hardware vendors found binary-based floating point to be “better”.
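A concrete illustration of that last point about irrationals, under the default 28-digit Decimal context (the exact digits are beside the point):

from decimal import Decimal

# sqrt(2) is irrational, so Decimal has to round it too; squaring the rounded
# result does not recover exactly 2.
r = Decimal(2).sqrt()
print(r * r == 2)   # False under the default 28-digit context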
First of all, I appreciate that you’re trying to turn a clearly unproductive thread into a productive one.
I think making people deal with float arithmetic is very much like Python 2 encoding issues. That’s an apt comparison.
But also note that to change that, a whole new paradigm of interaction with strings and bytes had to be set up. I don’t feel convinced that a simple future import gives programmers enough clarity about what’s going on.
So I’m more of the mind that the harsh reality should be exposed, at least for now, until or unless someone can come up with a new way of thinking about 0.1 + 0.2 == 0.3 such that it’s always clear what kind of numeric is being used (no context managers or anything else that could have action at a distance).
Maybe some kind of numeric literal designator, like string prefixes? It might not be beautiful, but I’d probably use Decimal more if I could get at it by writing d0.1 + d0.2 == d0.3.
(But of course, this doesn’t work, since d1 is a valid identifier.)
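A tiny demonstration of both halves of that point: the prefix clashes with names that are already legal, and the closest spelling available today is a short alias plus string arguments:

from decimal import Decimal as D

d1 = "already a perfectly valid identifier"   # why a bare d-prefix would be ambiguous
print(D("0.1") + D("0.2") == D("0.3"))        # True, with today's spelling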