Yes, that’s why I pointed out that the section should specify how this composes with PEP 705, since logically ReadOnly values should still follow the rules outlined by PEP 705.
This does make the mixed case a little more complex, but it should still be possible to come up with a rule. For instance: if every ReadOnly key’s value type is a superclass of the regular keys’ value types, and all the regular keys share a uniform value type VT, then the TypedDict is compatible with dict[str, VT]. The compatibility is not bidirectional, though, just as the uniform PEP 705 case wouldn’t be bidirectionally compatible unless the value types match exactly on both sides.
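A sketch of how such a rule might look in practice (the names Mixed and consume, and the object/str value types, are my own illustration rather than anything from the PEP; ReadOnly comes from typing on Python 3.13+ and from typing_extensions before that):

```python
try:  # ReadOnly is in typing on Python 3.13+
    from typing import ReadOnly, TypedDict
except ImportError:
    from typing_extensions import ReadOnly, TypedDict


class Mixed(TypedDict):
    kind: ReadOnly[object]  # ReadOnly value type is a supertype of str
    name: str               # the regular keys share the uniform value type str
    path: str


def consume(d: dict[str, str]) -> None:
    ...

# Under the sketched rule, a Mixed value could be passed to consume()
# (one-directionally); a plain dict[str, str] would still not be a Mixed.
```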
I think this logic implies the following code should also be legal, since the default type for extra keys is ReadOnly[object]:
class StringKeyedDict(TypedDict):
    pass

def takes_params(params: StringKeyedDict) -> None: ...

params = {'foo': 'bar'}  # inferred type is dict[str, str]
takes_params(params)  # this should work!
This would actually simplify code at my company, where currently we have to union this type with dict[str, Any] to express the same thing.
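For context, the current workaround reads roughly like the following sketch (takes_params and StringKeyedDict as in the example above; the union arm is doing all the work today):

```python
from typing import Any, TypedDict, Union


class StringKeyedDict(TypedDict):
    pass


# Today's workaround: the dict[str, Any] arm accepts plain dicts, since a
# dict[str, str] value is not assignable to the bare TypedDict on its own.
def takes_params(params: Union[StringKeyedDict, dict[str, Any]]) -> None:
    ...


takes_params({"foo": "bar"})  # accepted via the dict[str, Any] arm
```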
(You might ask, why not use Mapping? It’s to prevent users passing in something that’s not a dict, which the underlying C library rejects for performance reasons. Subclasses of dict would also be rejected but we can’t easily exclude those.)
The discussion here has died down for a few weeks. Unless new feedback comes up, I think it makes sense to submit the PEP to the Typing Council soon, which should give us enough time to get it implemented in CPython in time for Python 3.13.
I’m not sure if this ship has sailed, but I wasn’t able to deduce from the thread why the special key name was changed from __extra__ to __extra_items__. The former seems easier to type and there’s no chance of a collision with a regular key called __extra__ because that key would only be recognized as special when opting in to closed=True.
As a separate item, I believe the Reference Implementation section of the spec can now be updated to say that implementations exist in pyright 1.1.352 and pyanalyze 0.12.0. Including these (presumably successful) implementations in the PEP would make it a stronger candidate for acceptance.
The idea behind using __extra_items__ is educational: the longer name is more self-explanatory, and a name collision is already unlikely now that closed=True is part of the proposal. This is mentioned in the “How to Teach This” section but not in the updates reply you linked.
Updated the proposal to mention the reference implementations. Thanks!
Can we update “The author of this PEP thinks that it is slightly less favorable than the current proposal” in rejected alternatives to summarise the later discussion? I propose something like “Several members of the type checking community felt adding another place where types are passed as values instead of annotations would be highly undesirable. It could hamper any potential future effort to fix existing instances.” (Could say “authors of typecheckers” instead to highlight their expertise.) I think it may be useful to later readers to keep that context.
I had trouble understanding the referenced PEP section [1], but if I understand correctly, the spelling __extra_items__ is believed to be easier for new users of the feature to understand than __extra__. (Please correct me if I’m mistaken.)
Aesthetically, I don’t like that this PEP introduces something that has the appearance of a bodge (an __extra_items__ key), even if there is almost no chance of it ever clashing in practice.
As others have mentioned, I didn’t love the reserved key way of spelling this. I also think the fact that __extra_items__ silently becomes a regular key if you don’t specify closed=True feels like something that is easy to mess up (and that type checkers wouldn’t be able to help you with).
One suggestion that could solve this (at least for the class syntax) is to add a dummy __getitem__ to the TypedDict definition:
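The example itself was elided here; presumably it looked something like this sketch (Movie and the int extra-value type are my assumptions, not the original poster’s code):

```python
from typing import TypedDict


class Movie(TypedDict):
    name: str

    # dummy stub, never called at runtime: declares that any key not
    # listed above maps to an int
    def __getitem__(self, key: str) -> int: ...
```

At runtime the stub is not treated as a key, so only name appears among the annotations.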
As desired by the PEP, this continues to keep the extra items type in an annotation context, and the spelling arguably makes it feel like less of a new concept. The __getitem__ method must be stubbed out and is never called at runtime.
A linter could help here though, it occurs to me: you can always explicitly suppress a linter, and the suppression would double as a highlight that the line of code is weird.
To me, that looks like it should apply to all keys, not just as an overload for any key not otherwise specified.
Yes, but I’d still prefer not to specify behaviour that relies on linting and lint suppression for usability, where possible.
To me, that looks like it should apply to all keys, not just as an overload for any key not otherwise specified.
My thinking for the __getitem__ pun was an analogy between explicit attributes and __getattr__, where actual instance attributes take precedence. Besides, applying to all keys doesn’t really make sense, so hopefully users wouldn’t expect that.
Using a dummy __getitem__ declaration is an interesting idea. I kind of like it, but it still has some problems. It reduces the likelihood that someone uses __extra_items__ without marking the TypedDict as closed. It avoids the namespace collision that is created by __extra_items__, but it creates a new namespace collision issue with __getitem__, so it doesn’t really eliminate that problem. It also wouldn’t work with the functional form of TypedDict.
If we stick with __extra_items__ and we’re concerned about it silently being interpreted as a regular key, type checkers could emit a warning if __extra_items__ is used in a non-closed TypedDict. It’s extremely unlikely that this is intentional, and in the rare case where it is intended, a # type: ignore could suppress the warning.
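The failure mode is easy to reproduce today, because without closed=True the name is just an ordinary key (Params is an illustrative name of my own):

```python
from typing import TypedDict


class Params(TypedDict):  # closed=True forgotten
    name: str
    __extra_items__: int  # silently just a regular (required) key

# At runtime nothing distinguishes it from any other key:
print("__extra_items__" in Params.__annotations__)  # True
```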
It doesn’t have to create a namespace collision issue with __getitem__… Even at runtime:
In [1]: class X:
   ...:     __getitem__: str
   ...:     def __getitem__(self, key: str) -> bytes: ...

In [2]: X.__annotations__
Out[2]: {'__getitem__': str}
In fact, the runtime behaviour (for runtime type checkers) with a method is clearer and simpler than if it were a reserved key. (And I think TypedDicts are a place where runtime type checking is disproportionately useful.)
If runtime checking is a concern, we can require an explicit closed=False when __extra_items__ is expected to be a normal key, by using a sentinel value as the default of closed on TypedDicts (currently closed defaults to False).
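A minimal sketch of the sentinel idea (make_typed_dict is a stand-in for the real TypedDict factory, purely for illustration):

```python
class _Unset:
    """Sentinel distinguishing 'closed not passed' from closed=False."""


_UNSET = _Unset()


def make_typed_dict(name, fields, *, closed=_UNSET):
    # If __extra_items__ appears as a key but closed was never given,
    # demand that the user write closed=False (or True) explicitly.
    if "__extra_items__" in fields and closed is _UNSET:
        raise TypeError(
            f"{name}: pass closed=False explicitly to use "
            "'__extra_items__' as a regular key"
        )
    return {"name": name, "closed": closed is True, "fields": fields}
```

With this default, existing TypedDicts keep behaving as open, but the ambiguous spelling becomes a hard error instead of a silent reinterpretation.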
Overriding __getitem__ would map to a ReadOnly __extra_items__; I assume you’d also need to override __setitem__ if you wanted the current behaviour in the PEP for a non-ReadOnly declaration?
That, plus the fact that it behaves more like __getattr__ than __getitem__, makes me dislike this proposal. Maybe we could leverage __missing__ instead? It’s yet again not quite the same, but it’s closer to the original meaning than __getitem__ and it would compose with ReadOnly more naturally.
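For reference, __missing__ already has a runtime meaning on dict subclasses, which is what makes it the closer analogy here:

```python
class Defaulting(dict):
    # dict.__getitem__ calls __missing__ when the key is absent
    def __missing__(self, key):
        return 0


d = Defaulting(a=1)
print(d["a"], d["b"])  # 1 0
```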
Or we just go back to __extra__ or _, but make it a method, to avoid any cognitive dissonance with existing runtime features, although I’m not super happy about leveraging methods, since it doesn’t work with the functional syntax[1].
which you sometimes need to use if one of the keys is a Python keyword ↩︎
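Concretely, the functional form is the only way to spell a keyword-named key (Node is an illustrative name):

```python
from typing import TypedDict

# "class" is a Python keyword, so this key can't be written in class syntax
Node = TypedDict("Node", {"class": str, "id": int})
```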
Unfortunately you can’t cover absolutely every case. You can only use the functional syntax with TypedDict itself, not with any of its subclasses. So you can’t add a keyword-named key after the fact, but you also can’t close a TypedDict that’s open, or change the extra keys’ type, since it defaults to Never. So you can’t have a closed TypedDict with reserved keys unless it’s specifically the case where the extra keys are Never.
Actually, maybe closing an open TypedDict is sound, but I’d have to think about it.
Regardless of soundness, it just feels gross for each syntax to have its own non-overlapping singularities, especially if one of those singularities is almost an entire feature[1]. At that point I’d actually prefer passing a type parameter, even if that comes with its own issues.
save for closed TypedDict with Never as extra items ↩︎