I don’t recall it either. I think if separate templates and values were the main API, it would be seen as unnecessary complexity for most use cases.
But for libraries that do a lot of work per template (e.g. parsing HTML), it’s good to be able to cache that work. The current architecture makes that less simple, but still quite possible: extracting the values from one template and applying them to the cached, processed version of another template with the same static parts is quite doable.
This way we’re making simple things simple and hard things possible.
Agreed, and I think we could add `UnboundTemplate` and `UnboundInterpolation` types later without having to implement the bound `Template` type in terms of them (providing a way of getting the unbound template corresponding to a bound template, yes; actually implementing it that way, not necessarily).
That design process would be easier given concrete examples of folks rolling their own unbound template equivalents based on the initial bound template design, so it still feels like a “later” problem to me.
Thinking of the “What does template equality mean?” problem that way is still pushing me towards favouring native hashing and equality being identity based, and offering some kind of explicit “cache key” API that extracts the invariant parts of the template instance:
- the `strings` tuple
- the `expr`, `conv`, and `format_spec` parts of the interpolation fields tuple
Such a cache key API would presumably also be usable for the currently equality-based test cases that @dkp mentioned (for cases that care about field values, the string and interpolation field tuples can be compared directly).
It shouldn’t be too complex to move `template.args[*].value` into a dict keyed off `template.args[*].expr`, right? That should be enough for `template` to be static[1] and `context` to vary:
```python
def lower_upper(template: Template, context: dict[str, Any]) -> str:
    """Render static parts lowercased and interpolations uppercased."""
    parts: list[str] = []
    for arg in template.args:
        if isinstance(arg, Interpolation):
            parts.append(str(context[arg.expr]).upper())
            # previously: parts.append(str(arg.value).upper())
        else:
            parts.append(arg.lower())
    return "".join(parts)
```
If we really wanted, we could add a `for arg in template.with_context(context):` iterator that yields args with `.value` set, though given the existing alternating-sequence complexity I don’t think we’re really making things worse.
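For concreteness, such a `with_context` iterator could be sketched as a generator that rebinds each interpolation’s `.value` from the context dict. The `Interpolation` stand-in and the free function form are assumptions for illustration; the real thing would presumably be a `Template` method.

```python
from dataclasses import dataclass, replace
from typing import Any, Iterator


@dataclass(frozen=True)
class Interpolation:
    """Stand-in for the proposed interpolation type."""
    value: Any
    expr: str


def with_context(args: tuple, context: dict[str, Any]) -> Iterator[Any]:
    """Yield template args with each Interpolation's .value looked up
    in the supplied context instead of the originally captured value."""
    for arg in args:
        if isinstance(arg, Interpolation):
            yield replace(arg, value=context[arg.expr])
        else:
            yield arg
```

A processor could then iterate `with_context(template.args, context)` exactly as it iterates `template.args` today, without special-casing where values come from.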
[1] As in, the identical template object is passed to the processor each time. ↩︎
One instinct that guided our current definition of equality: asserts with t-strings should generally run parallel to those with f-strings. But over time, I’ve decided that this instinct is bad/misleading. No matter how `Interpolation.__eq__` is defined, we can concoct assert divergences and then scratch our heads about whether those divergences will be surprising to devs. Also, no single definition feels obviously “natural” and avoids potential for developer confusion (as in the current PEP, where `Template.__hash__` is confusing and not useful as a cache key).
Best perhaps to treat Template and Interpolation as types that are quickly processed into something more semantically meaningful; developers can define equality, etc. on those derived types if they wish. And test code can make its own decisions.
I could see doing this, or just leaving it as a problem for another day: let devs decide whether `strings` is sufficient for caching or whether they need more, and provide a future API if it proves useful.
Ah, very good point. A third parallel tuple is getting too complex for my liking; I’d rather templates just have a `.key` attribute at that point. I feel like treating the entire template object as a key will lead people to assume the substitutions are all the same, and to cache the fully rendered result rather than a partially processed string.