The upcoming template strings (PEP 750) would almost suit one particular use case of mine. However, these t-strings process exceptions the same way f-strings do. I’d like to start a discussion about whether there are enough other use cases where this is undesirable.
and I want to suppress exceptions occurring during evaluation of possibly invalid data as much as possible. For example, if period is zero or None, I still want to log the temperature. Currently I need one try/except block for each expression (+4 lines per logged value whenever evaluation could fail).
I haven’t tried (no access to Python 3.14), but I think the best I will be able to do (with eval and locals from caller’s stack frame) is:
log(t"frequency = {'1/period'}; temp = {'convert(temp)'}")
# note the expressions are now strings
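Since t-strings need Python 3.14, a pre-3.14 approximation of that idea can use plain expression strings instead of a template. This is a rough sketch; `log_values` is a made-up helper name, and it evaluates each expression string in the caller's frame while swallowing any exception it raises:

```python
import sys

def log_values(*exprs: str) -> str:
    """Evaluate each expression string in the caller's scope and
    return a log line; evaluation errors are caught, not propagated."""
    frame = sys._getframe(1)  # the caller's stack frame
    parts = []
    for expr in exprs:
        try:
            value = eval(expr, frame.f_globals, frame.f_locals)
        except Exception as exc:
            value = f"<{type(exc).__name__}>"
        parts.append(f"{expr} = {value}")
    return "; ".join(parts)

period = 0
temp = 21.5
print(log_values("1/period", "temp"))
# 1/period = <ZeroDivisionError>; temp = 21.5
```

This works, but it is exactly the kind of string-based eval that code-quality tools cannot check, which is why a dedicated template variant would be nicer.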
What I would like to have is a template string variant that:
either does not evaluate anything, it just stores the expression as-is,
or it saves either a value (result) or the exception (error) raised during evaluation.
I think you should write a custom logger, using the %s string syntax. And Python should not introduce such a potentially problematic and even dangerous feature.
In most cases the exceptions are actually desirable. For your use case you should use a function call with a try/except to handle them. Quietly ignoring exceptions at the language level is at best problematic and more often extremely dangerous.
I was not proposing to quietly ignore exceptions. I was proposing storing them for later processing. Just like asyncio.gather with the return_exceptions=True parameter. And it should be the main feature of this new tool, not an obscure detail. The programmer could use it if he or she finds it useful for some minor task.
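For reference, this is how `asyncio.gather` behaves with `return_exceptions=True` (the `divide`/`main` coroutines here are just illustration names): a failing task's exception becomes an ordinary item in the results list instead of being raised.

```python
import asyncio

async def divide(a: float, b: float) -> float:
    return a / b

async def main() -> list:
    # With return_exceptions=True, a failing awaitable contributes
    # its exception object to the results instead of raising it.
    return await asyncio.gather(
        divide(1, 2),
        divide(1, 0),
        return_exceptions=True,
    )

results = asyncio.run(main())
print(results)  # [0.5, ZeroDivisionError('division by zero')]
```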
In my case the difference is this: I can have many repeated try/except blocks spread throughout the whole program, OR I can have a single log function that does the same thing in one place.
You can just log period instead of 1/period, and add a little helper function to safely calculate 1/period.
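Such a helper is only a few lines; this is a minimal sketch (the name `safe_inverse` is made up), covering both the zero and None cases from the original example:

```python
def safe_inverse(period):
    """Return 1/period, or None when period is zero or None."""
    try:
        return 1 / period
    except (TypeError, ZeroDivisionError):
        return None

print(safe_inverse(4))     # 0.25
print(safe_inverse(0))     # None
print(safe_inverse(None))  # None
```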
It’s a huge task to define the interaction of these new strings with the existing exception mechanics, e.g. raise Exception(t"{raise Exception}").
You’re now saying that after using a feature implemented with specific syntax, the programmer must remember to call something like asyncio.gather.
The spirit of “Errors should never pass silently, unless explicitly silenced.” does not mean a new single character should be added to the language’s syntax to do the silencing.
Error suppression with try/except is a huge source of gotchas and footguns in Python. It should be expensive (in terms of LOC), and coders should be forced to treat it properly, and give it thought. It is one of the very last features of the language that should be made easier with syntactic sugar.
I don’t understand some of your points in the context of my post. I don’t recognize some of them as responding to my proposal.
All of the replies received so far have focused on the second alternative of my proposal. However, I wrote: either do not evaluate (no exceptions at all that way), or save the result/error without raising. (I mentioned asyncio.gather as an existing example of the latter approach.)
I will repeat the proposal (the first alternative only) with an example:
Let’s call the new feature “N-string” as a working name. This line:
N"this is an {example} of a template {tid}"
would return an object structured exactly like the Template defined in PEP 750. This template would reference two objects very similar to PEP 750’s Interpolation type. The only difference is that the interpolation.value attribute would be unused (not present, or not filled in). The caller would now be responsible for determining what values example and tid represent in this template. This is how it differs from t-strings.
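To make the proposed structure concrete, here is a rough pre-3.14 mock built on string.Formatter. The names `UnevaluatedInterpolation` and `parse_template` are made up for illustration, not PEP 750’s actual API, and Formatter’s field grammar is narrower than the full t-string expression grammar:

```python
from dataclasses import dataclass
from string import Formatter

@dataclass
class UnevaluatedInterpolation:
    expression: str   # the raw text between the braces
    format_spec: str  # kept, like PEP 750's Interpolation, but no .value

def parse_template(text: str) -> list:
    """Split a template into literal strings and unevaluated fields."""
    parts = []
    for literal, field, spec, conversion in Formatter().parse(text):
        if literal:
            parts.append(literal)
        if field is not None:
            parts.append(UnevaluatedInterpolation(field, spec or ""))
    return parts

parts = parse_template("this is an {example} of a template {tid}")
print(parts)
```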
This would be helpful for my use-case (which is really not important here) and maybe to some other use-cases as well.
If suppressing exceptions isn’t required, and is just a side effect of deferred evaluation, then that’s great. I can see the appeal of lazily evaluating the sub fields of an f-string or t-string on demand.
If they’re going to be stored as strings of the expression text anyway, why not use a normal string? Surely there’s a straightforward way of parsing the fields in balanced pairs of unescaped, undoubled braces, e.g. a regex full match?
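In fact, the standard library already exposes such a parser: `string.Formatter.parse` splits a format string into `(literal_text, field_name, format_spec, conversion)` tuples without evaluating anything. It only supports the format-string mini-language rather than full t-string expressions, but it does demonstrate the parsing side:

```python
from string import Formatter

template = "frequency = {1/period}; temp = {convert(temp):.1f}"
fields = list(Formatter().parse(template))
for literal, field, spec, conversion in fields:
    # parse() does not validate or evaluate the field text,
    # so arbitrary expression strings pass through untouched.
    print((literal, field, spec, conversion))
```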
Writing my own custom parser is no problem; a long time ago I taught an introductory course.
My point is that 99% of the required work will soon be present in Python. I was hoping I’m not the only one who would welcome a built-in, reliable, well-documented template-string parser allowing the {placeholders} to be given an application-specific meaning (within the limits of the t-string grammar).
The missing 1% could be done, as a minimum, by exposing a function that parses t-strings but stops just before the (last?) step called “Interpolation Expression Evaluation” (a term from PEP 750). There are of course alternatives and details to discuss.
The alternative with exceptions saved and not propagated received negative feedback. Please let’s continue the discussion with template strings where {interpolation expressions} are not evaluated at all.
If they’re not being evaluated, is there any benefit to having a template string at all? All you’re getting is the text of it, so you can do all of the parsing yourself with no loss of functionality.
The fields would be subject to syntax parsing and code quality checks by the standard code tools.
More importantly, as they would evaluate fields lazily, similarly to generator expressions, I propose the new string type be prefixed by a g. So for example they’d look like: g"Possible error value: {1/0}".
Then Python would have r-strings, f-strings, t-strings, and …
That can be done with simple string literals too. Lots of tools can validate regular expressions, SQL queries, and other such minilanguages, despite them being string literals with no language-level adornment.
The t-string parser already does everything needed; it is (sorry, it will soon be) built-in and documented. Also, it is not as simple as it might look, or as an ad-hoc parser would probably be; e.g. I’d expect that t"Value: {value:.{precision}f}" will set the .format_spec attribute properly even if the .value is skipped.
I’m trying to figure out if there is any demand for this feature. At least it is not among “rejected ideas” in the PEP.
It sounds to me like it’s something that would require a PEP of its own. So the “easy” way to find out if there’s demand is to write a pre-PEP, and put it up for discussion.
My personal feeling is that there probably isn’t a demand for the feature. But these days, I feel as though people tend to want to continue pushing for their ideas, even in the face of lukewarm, or even mildly negative, feedback. So if you want to continue anyway, I suggest you write up your idea in PEP form, for Python 3.15. I’ll likely be -1 on the proposal, maybe you’ll get some supporters, though.
Early PEP 750 discussion debated whether it should be possible for the expressions to remain unevaluated somehow. That idea was rejected with the conclusion being that lambda could be used for unevaluated expressions if needed:
I suspect that restriction is intended to avoid reader ambiguity (the error shows there is no parser ambiguity) for t'{lambda:format_spec}', and the PEP includes parens:
If a single interpolation is expensive to evaluate, it can be explicitly wrapped in a lambda in the template string literal:
Just to follow-up on the original goal, it seems like you could write a safe() function to wrap either individual lambdas or lambda-containing templates:
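For the single-lambda case this can be done in any current Python (the Template-accepting variant would need 3.14’s t-strings). A minimal sketch, with `safe` as a made-up name:

```python
def safe(thunk, default=None):
    """Call a zero-argument callable and return its result,
    or the default value if the call raises."""
    try:
        return thunk()
    except Exception:
        return default

period = 0
print(safe(lambda: 1 / period, default="n/a"))  # n/a
print(safe(lambda: 2 + 2))                      # 4
```

The deferred expression stays real, checkable code, and the suppression is explicit at every call site rather than being built into the string syntax.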