dedent and emulating f-strings are separate tasks.
dedent(Template) should return a Template.
For example:
# without dedent
cursor.execute(t"""
    SELECT *
    FROM spam
    WHERE id = {id}
""")
# with dedent
cursor.execute(dedent(t"""
    SELECT *
    FROM spam
    WHERE id = {id}
"""))
Maybe, but I think it’s equally reasonable to suggest that putting a template into “text wrapping” turns it into text. One option is a new textwrap class and set of functions: TemplateWrapper (to match TextWrapper), which would produce templates as its output.
Getting into textwrap design could take us off topic, but I think the point that you can implement more sophisticated wrapping with t-strings is relevant.
In addition, though, whatever rule we decide on should initially be implemented either as a function in the textwrap module or as a str.dedent method. That is sufficient for every use case apart from templating, and it’s much easier to change the semantics of a function/method than to redefine the meaning of core syntax. So I’d argue for introducing the functional form first, and using feedback from that to inform the final definition of how d-strings would work. It also allows us to stop before introducing a new string type if the functional/method form turns out to be enough. This does mean that there would be at least one release’s worth of delay before the d-string form is available, but IMO it’s better to have a delay than to implement the wrong semantics.
It is not, and this has been discussed multiple times already. There exist edge cases that cannot be correctly dealt with by functions that act on a string value instead of at the syntax level. You can say these edge cases are irrelevant, but please don’t just say they don’t exist.
Any that don’t involve interpolating a multi-line string into an f-string that you want to dedent? Those were the ones I was referring to when I said “apart from templating”. I’m genuinely sorry if I missed such cases, but my recollection is that the f-string/t-string cases were the only ones that need syntax support.
There’s also the ability to use \ to wrap long strings (I guess you might call it a happy side effect rather than a use case). My favourite example of that would be this one.
def raw_dedent(text):
    lines = text.split('\n')
    # Lines that still have content after dropping a trailing "\" continuation.
    non_blank_lines = [l for l in lines if (l2 := l.removesuffix("\\")) and not l2.isspace()]
    # The lexicographic min and max bracket every line, so their shared run of
    # leading spaces/tabs is the margin common to all non-blank lines.
    l1 = min(non_blank_lines, default='')
    l2 = max(non_blank_lines, default='')
    margin = 0
    for margin, c in enumerate(l1):
        if c != l2[margin] or c not in ' \t':
            break
    # Strip the margin from content lines; blank or continuation-only lines
    # are simply left-stripped.
    text = '\n'.join([l[margin:] if (l2 := l.removesuffix("\\")) and not l2.isspace() else l.lstrip() for l in lines])
    # unicode_escape turns each trailing "\" followed by a newline into a line
    # continuation, joining the wrapped lines back together.
    return text.encode().decode('unicode_escape')
print(raw_dedent(r"""\
    Some explanation that is longer than the ~60 characters of width \
    I normally have left at this point because I'm already a few \
    levels of indentation deep. Blah de blah de blah.\
"""))
Output:
Some explanation that is longer than the ~60 characters of width I normally have left at this point because I'm already a few levels of indentation deep. Blah de blah de blah.
Ah, OK. The link to the Rust discussion suggested to me that we were thinking of using different rules than the ones in textwrap.dedent. My apologies for misunderstanding.
OK. So my point is that any changed semantics that can be implemented in a function should be introduced in textwrap.dedent before we add new syntax. That way, we have a chance to see the real-world impact of a change to the dedent rules before committing to a syntax change.
OTOH, I believe few people use dedent, compared to the number of people who could make use of new direct syntax for it.
dedent has been around for decades, so it is really hard to say that having a “special trial period in Python 3.15” would give us new insights. That is: updating dedent with a few options over the next year should not change the usage figures much.
For one thing, a few months of t-strings working in the wild is something that might give us some new insights.
The proposed syntax is a nice-to-have, and we’d have a year now to figure out the best default semantics for it, which could simply be what dedent currently does. We could also leave some room for future tweaking of the syntax itself, such as a special character sequence following the quotes that changes the dedenting behavior: which behavior exactly can be determined later, but an “escaping, behavior-changing” first character would have to be defined along with the first iteration of the new syntax.
In short: waiting for Python 3.15 to add a few options to dedent, and only then defining the behavior of a d prefix, is too little. We already have dedent as a function, so this is just the same as rejecting the proposal altogether, even though this thread has clearly shown it would be handy for lots of people.
That is not backward-compatible, so it still carries the same problems as the Swift/C# style strings.
The syntax also hurts readability in my opinion, contrary to what the current function is generally used for: with a few levels of indentation, the closing quotes end up quite a distance away.
I don’t know if it was already said (I don’t know how you can search for a word within a single topic): this is the default in Java now. But as @Stanfromireland said, it’s backward-incompatible.
It could be a good addition to a hypothetical d-string:
assert d"""
    hello
    my
    darling
""" == """
hello
my
darling
"""
assert d"""
hello
my
darling
""" == """
hello
my
darling
"""
It’s really convenient as syntactic sugar. My only fear is that it will be too obscure, and thus anti-Pythonic.
“reposting this” to check if we can get this rolling again:
Even just having `d""" … """` be the exact equivalent of `textwrap.dedent("""…""")` as it is today would be a large gain, IMHO.
There are a few other extra ideas here, but those found some resistance; I don’t see anyone against just doing the plain transform.
(maybe it is worth mentioning, again, that the auto-dedenting of docstrings, where it did break some doctests for sure, went into effect smoothly and was already a win)
Speaking as a proponent of str.dedent(), rather than d-strings, I support this moving forward as a PEP.
I’d use either option if it existed, probably a good amount.
In that light, I only see upsides to this proposal getting submitted. If the SC looks at the views expressed and accepts it, that charts one course (I don’t think it’s the best choice, personally, but I still think it’s an improvement). If they defer or reject, I think that strengthens the case for str.dedent() and I’d be interested in helping with that as an alternative/follow up. None of these outcomes seem bad to me.
I’m not against it, but I consider the d-string superior, for a simple reason: IMO Python should have this behavior by default. And there’s more.
My idea is not a new idea. I discovered this post:
I quote some posts, that add really strong points:
Really cool indeed. I love it.
I don’t know if it’s a good idea to dedent before f-string substitutions are applied. What if the substituted values add newlines and indentation?
For the rest, I completely agree.
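To make that ordering question concrete, here is a small illustration using textwrap.dedent on plain strings (the variable names are just for the example, not part of any proposal):

from textwrap import dedent

value = "first\n    second"   # an interpolated value that spans two lines

# Dedenting *after* substitution: the value's own lines take part in the
# margin calculation and get dedented too.
after = dedent(f"""\
    start
    {value}
    end
""")

# Dedenting *before* substitution (as a d-string would presumably do):
# only the literal's margin is removed, the value is left untouched.
before = dedent("""\
    start
    {value}
    end
""").format(value=value)

print(after == before)  # False: only dedenting before substitution preserves the value's indentation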
This is exactly my main point. Having to write d""" will be tedious, since the best option would have been to do it by default. But that ship has sailed.
On the contrary, having to write:
"""
    hello
    my
    friend
""".dedent()
it’s not only more tedious, but absolutely less powerful, as per the quotes above.
At this point, instead of having yet another method similar to textwrap.dedent(), I prefer to have nothing. We could instead improve textwrap.dedent() with an optional kwarg.
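For illustration only, here is one possible shape such an optional keyword could take, reusing the backslash-continuation idea from the raw_dedent example above; the join_continuations name is purely hypothetical:

import textwrap

def dedent(text, *, join_continuations=False):
    # Hypothetical extension: behave exactly like textwrap.dedent by default,
    # and optionally join lines that end with a backslash after dedenting.
    result = textwrap.dedent(text)
    if join_continuations:
        result = result.replace("\\\n", "")
    return result

print(dedent("""\
    a long sentence that \\
    was wrapped in the source
""", join_continuations=True))
# prints: a long sentence that was wrapped in the source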