For applications that require high numeric precision, having to write every numeric constant as an explicit decimal.Decimal or fractions.Fraction instance, such as Decimal('1.2'), makes the code hard to read.
Since Python already supports the relatively esoteric complex-number literal 1+2j rather than requiring the user to write complex(1, 2), I think it makes sense to add support for decimal and fraction literals such as 1.2d and 1/3f as well.
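For a concrete sense of the trade-off (note that the 0.1d and 1/3f spellings are the hypothetical proposal, not valid Python today), here is the current explicit spelling next to the float behaviour it guards against:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats accumulate representation error:
print(0.1 + 0.2)                        # 0.30000000000000004

# Today's explicit spellings are exact but verbose:
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2

# The proposal would shorten these to 0.1d + 0.2d and 1/3f + 1/6f.
```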
That's "constant folding", and it's the same thing that lets us have tuple constants and frozenset constants too, even though we don't (strictly speaking) have literals for them. But arguably, 2j is already a complex literal - there's no separate imaginary type, so the only distinction is that a complex literal always has a real part of zero. As Stefan said though, that's really just a nitpick and arguing about terminology.
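As an illustration of the folding behaviour described above (a CPython implementation detail, not a language guarantee), a tuple or frozenset of constants is baked into the code object as a single constant:

```python
def f():
    return (1, 2, 3)

# CPython folds the tuple of constant elements into a single code-object
# constant, even though (1, 2, 3) is not, strictly speaking, a literal:
print((1, 2, 3) in f.__code__.co_consts)  # True

def g(x):
    return x in {1, 2, 3}

# Likewise, a constant set used only for membership testing is stored
# as a frozenset constant:
print(frozenset({1, 2, 3}) in g.__code__.co_consts)  # True
```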
So, should we have 1.2d as a literal form? Here's one small problem: Contexts. You can set and get context objects that change precision and other attributes. Should they affect Decimal literals?
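For readers unfamiliar with decimal contexts, a short sketch of how they change arithmetic results (note that getcontext() mutates process-wide settings, while localcontext() scopes them):

```python
import decimal

# The active context controls the precision of arithmetic:
decimal.getcontext().prec = 4
print(decimal.Decimal(1) / decimal.Decimal(7))      # 0.1429

# localcontext() temporarily swaps in different settings:
with decimal.localcontext() as ctx:
    ctx.prec = 10
    print(decimal.Decimal(1) / decimal.Decimal(7))  # 0.1428571429
```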
I think a Fraction literal would be safer, but on the other hand, not nearly as useful. You could do something like this:
import fractions

f1 = fractions.Fraction(1)
# ...
one_seventh = f1/7          # Fraction(1, 7)
two_sevenths = f1 * 2/7     # Fraction(2, 7)
which looks almost like a literal, just with the f on the other end. It's a little less clear for anything that's not an integer's reciprocal, and it won't improve the repr, but it is an option.
Both Decimal and Fraction types are part of the stdlib, but not the core language. That is probably why there is a special literal for imaginary numbers, but not for fractions.
Personally, I would appreciate better support for fractions in the core language, perhaps with some interpreter option to override the odd integer division behaviour (return a Fraction instead of a float). In that case you wouldn't need special literals for fractions.
But right now, with these types just in the stdlib - I don't think that special literals make sense.
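A rough before/after of that idea (the opt-in interpreter mode is hypothetical; only the Fraction calls below are real Python today):

```python
from fractions import Fraction

# Today: true division of ints produces a binary float
print(1 / 3)                    # 0.3333333333333333

# The suggestion amounts to making it produce an exact rational instead:
print(Fraction(1, 3))           # 1/3
print(Fraction(1, 3) * 3)       # 1  (exact; no rounding error)
print(Fraction(1, 3) * 3 == 1)  # True
```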
This is just a little more verbose :) On the other hand, it probably makes code more obvious for novices. More or less the same argument was made by Tim Peters against hexadecimal floating-point literals (vs. the current float.fromhex() class method).
This might be more tedious for interactive work, but you can easily add simple AST transformations to rewrite code on the fly. Here is mpmath example:
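The mpmath example itself is not reproduced here; below is a minimal sketch of the same technique using only ast and decimal (my own illustration, not the original code): float literals in the source are rewritten into exact Decimal constructions before execution.

```python
import ast
import decimal

class DecimalLiterals(ast.NodeTransformer):
    """Rewrite float constants such as 1.1 into decimal.Decimal('1.1') calls."""
    def visit_Constant(self, node):
        if isinstance(node.value, float):
            # repr() of a float is the shortest round-tripping string,
            # which normally matches what was written in the source.
            return ast.Call(
                func=ast.Attribute(value=ast.Name(id='decimal', ctx=ast.Load()),
                                   attr='Decimal', ctx=ast.Load()),
                args=[ast.Constant(value=repr(node.value))],
                keywords=[])
        return node

tree = ast.parse("x = 1.1 + 2.2")
tree = ast.fix_missing_locations(DecimalLiterals().visit(tree))
ns = {'decimal': decimal}
exec(compile(tree, '<rewritten>', 'exec'), ns)
print(ns['x'])  # 3.3 exactly, as a Decimal, not 3.3000000000000003
```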
Very good point. So a decimal literal can't be stored as a constant, and has to be evaluated at runtime at the very spot it's written so that it can be affected by the context just like writing Decimal('1.2') would.
That's definitely a viable workaround, although writing f1 * 2/7 still looks rather noisy to me with an additional multiplication compared to a hypothetical 2/7f.
Compared to 1+2j, which is already a recognized representation of a complex number outside of Python, the 1.2d and 1/3f literal formats are going to have to be learned by Python novices instead.
But compared to your proposed hexadecimal floating-point literals such as +0X1.921F9F01B866EP+1, decimal and fraction literals like 1.2d and 1/3f look a lot less cryptic: most people are already familiar with 1.2 and 1/3, and the only thing to learn is the suffix, much like the f prefix of an f-string. Yes, it's something to have to learn, but it feels intuitive once learned.
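For reference, the float.fromhex() spelling mentioned above works like this today, using the constant from the post (the spelling is case-insensitive, and the exact decimal value is immaterial here):

```python
# The hex form encodes the binary float bit-for-bit, so it round-trips
# exactly, but the mantissa digits are opaque to most readers:
x = float.fromhex('+0X1.921F9F01B866EP+1')
print(x)        # roughly 3.1416, but good luck reading that off the digits
print(x.hex())  # 0x1.921f9f01b866ep+1
```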
Those are cool modules indeed and should satisfy most use cases. Thanks!
The main downside is that AST/source/code object alterations can only be executed as an imported module with an import hook. The main program can't be transformed this way.
Indeed the proposed tag string syntax would be a decent alternative to a dedicated decimal/fraction literal. My only reservation is that the syntax highlighters would likely color them as strings rather than numbers unless the D tag comes directly from the standard library, in which case we might as well have a dedicated decimal literal.
By "out" do you mean "out of the question"? Can you point me to where it says so?
The construction of a decimal like Decimal('1.2') is not affected by the context. Decimals are always constructed exactly. However + and - are context dependent e.g. +Decimal('1.2') and -Decimal('1.2') will trigger context rounding. The question then is whether -1.2d should be treated as a single literal or as unary - applied to the literal 1.2d i.e. Decimal('-1.2') or -Decimal('1.2').
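A quick demonstration of that distinction (exact construction vs. context-rounded unary operators), with a deliberately tiny precision:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 2
    d = Decimal('1.2345')
    print(d)    # 1.2345 (construction ignores the context)
    print(+d)   # 1.2    (unary + applies context rounding)
    print(-d)   # -1.2   (unary - rounds too, hence the -1.2d question)
```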
Ah I see. Thanks for the correction. This makes decimal literals eligible for constant folding then.
Great point. I believe treating -1.2d as a single literal, i.e. as Decimal('-1.2'), would make the most sense in most use cases, but it's definitely something that needs extra clarification in the documentation if the feature is to be implemented.
And "no more 'tag'. Instead, a single t prefix which passes a Template to your function" I understand to mean D"1.23" would now have to be D(t"1.23") in that approach. Maybe I misunderstood.
Thanks. I think you understood it correctly. The revision kills the main draw of the tag string proposal IMHO, and a syntax like D(t"1.23") can already be achieved by aliasing Decimal as D, but without an official decimal literal the syntax highlighters will not treat D("1.23") as a number.
But the proposal at the time included "make Decimal a built-in". The main objection to simply adding a decimal literal is that the core interpreter doesn't know about the decimal type.
I make a habit of never assuming "it won't be too hard" unless I've done the research and know I'd be capable of doing the work myself if needed.
To put it another way, I don't know, but I'd assume it would be significantly harder than you seem to expect. Certainly hard enough that the benefit would have to be a lot more than the difference between D("1.2") and 1.2d.