@hwelch I agree! If it were possible to unpack into a Literal (which IS supported at runtime but is not supported by static type checkers as far as I can tell), then that goes a long way towards solving the single-source-of-truth issue and makes it so there are solutions which don’t involve typing the same string literals many times.
We can do the same for Literal though, no?
Like when we unpack an enum into a Literal, we can check if the given values are valid?!
Also, I suppose your idea with unpacking iterables works for any kind of iterable (except those yielding multiple values at once, e.g. mappings like dict with key–value pairs), as long as they are marked as immutable / Final?
(Even if they were to be mutated though, it would be logical to just use the state they were in at time of creation of the Literal, right?)
Technically a mapping can be unpacked into a Literal at runtime:
```python
>>> Literal[*{'a': 1, 'b': 2}]
typing.Literal['a', 'b']
```
Anything that implements `__iter__` can be unpacked, so I don’t see why you shouldn’t be able to unpack those as well:
```python
MethodMap: Final = {
    'GET': 1,
    'POST': 2,
    'PATCH': 3,
    'DELETE': 4,
}

MethodNames = Literal[*MethodMap]
```
If unpacking a Final were to be supported by type checkers, I think the logic should follow the unpacking rules that Literal uses at runtime.
The type checker can’t really do anything about someone modifying values at runtime. The Final would just be a way to enforce your intent that the values shouldn’t change. Or if they do change, those extra runtime values are not properly supported.
Since the Literal will be initialized with the hardcoded values, it could also be used to assert that the runtime values haven’t changed:
```python
def sanity_check():
    assert MethodNames.__args__ == tuple(MethodMap), 'MethodMap changed at runtime!'
```
Yes, but I think the same confusion as with enums might occur, since it is still not intuitive how unpacking works if you are using it for the first time. Apart from that, once we allow dictionaries, people will ask for support of TypedDict, dataclasses and similar, which I could imagine being even more confusing, and harder to implement for checkers.
Yeah, I was just thinking about that idea a bit more and it quickly runs up against the KeyName / KeyType issue that’s currently being discussed. If that is ever worked out, I think something like this could be more reasonable, but without that it’s got too many cases that cause confusion.
For example here’s one that technically works with Literal at runtime, but is nonsense when static type checking:
```python
class A:
    def __hash__(self):
        return 1

nonsense = Literal[*{A(): 'a1', A(): 'a2'}]
```
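For the record, the runtime reason both keys survive: dict deduplicates by equality, not by hash alone, and the default `__eq__` is identity. A quick sketch:

```python
class A:
    def __hash__(self):
        return 1  # force every instance to collide on hash

# Equal hashes but distinct identities: the dict keeps both keys,
# so the resulting Literal would carry two indistinguishable A members.
d = {A(): 'a1', A(): 'a2'}
print(len(d))  # 2
```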
Is the namespacing really such an important feature for people? It’s not that I want to convince you otherwise but I’m just surprised. The original post had this example:
```python
handle("GET")            # accepted
handle(HttpMethod.GET)   # accepted
```
but in my personal experience I’ve only either wanted to use string literals (in which case I’d use Literal) or I wanted to use scoped constants (in which case I’d use StrEnum) but never really both at the same time…
@tmk Great question! I’m eager to hear what others have to say, but just to explain the constraint I am running into:
The core issue for me is that there are real cases where I want both:
- acceptance of the raw literal value (e.g. `"GET"`), and
- a single, discoverable namespace that defines the allowed set.
Namespacing matters to me primarily as a single source of truth:
- refactors and renames are safer
- typos are harder to make
- IDEs can jump to a definition that carries documentation and relevant source code
- OpenAPI schemas and similar can reuse the same documentation and references for many spots
- it’s easier to audit and reason about where a value set comes from
At the same time, I don’t want to reject raw literals or require StrEnum, because that forces the type hints to lie in common situations:
- Serialization boundaries: after JSON (or similar), values come back as raw strings. Requiring `StrEnum` means either a) eagerly casting everywhere, b) pretending the value is still an enum when it isn’t, or c) using union type hints of the `StrEnum` combined with a `Literal` type
- Soundness: if a variable is typed as `StrEnum`, people will naturally do `x.value` or `isinstance(x, StrEnum)`, which breaks immediately if the value is actually just a string.
- Ergonomics: forcing explicit casts everywhere feels like overhead that exists only to satisfy the type system.
So the goal isn’t “namespace instead of literals” or “literals instead of enums”, but a way to say:
“These specific literal values are valid, and this object is the authoritative namespace for them — without requiring the runtime value to be an enum instance.”
That combination is what I currently can’t express in Python typing, even with stubs.
This is important for backward compatibility:
```python
# library version 1.0 (untyped)
def fn(arg):
    """arg can be one of 'foo', 'bar' or 'baz'."""

# library version 1.1 (hinted with literals)
def fn(arg: Literal['foo', 'bar', 'baz']) -> None:
    """arg can be one of 'foo', 'bar' or 'baz'."""

# library version 1.2 (use LiteralEnum for all its advantages)
class FnOption(LiteralEnum):
    foo = "foo"
    bar = "bar"
    baz = "baz"

def fn(arg: FnOption) -> None:
    """arg can be one of 'foo', 'bar' or 'baz'."""
```
Older versions accepted `fn("foo")`; if newer versions only accept `fn(FnOption.foo)`, that breaks compatibility.
Three more aspects to think about, and maybe mention in your proposal (though they may be out of scope):
1. Interaction with dynamic strings
Basically, sometimes I want the following behavior, e.g. if I expect users to feed in deserialized JSON or otherwise dynamically generated data:
```python
def fn(arg: Option | StrNotLiteral) -> None: ...

fn("valid_option")           # ✅️
fn("invalid_option")         # ❌️
fn(json.loads()["fn_arg"])   # ✅️
```
But `StrNotLiteral` does not exist in the type system. Maybe this usage could be supported by LiteralEnum somehow, though possibly the correct solution would be overloads like:
```python
@overload
def fn(arg: Option) -> None: ...
@overload
@typing.error  # decorator that has been suggested in other discussions
def fn(arg: LiteralString) -> None: ...
@overload
def fn(arg: str) -> None: ...
```
2. Interaction with constrained TypeVars.
Sometimes you have a class whose behavior (e.g. what some method returns) depends on an option given at initialization. This case can be modelled by a generic type, constrained to a set of literals:
```python
class A[T: (Literal["foo"], Literal["bar"])]:
    def __init__(self, option: T) -> None: ...

    @overload
    def f(self: A[Literal["foo"]]) -> something: ...
    @overload
    def f(self: A[Literal["bar"]]) -> something_else: ...
```
As is, we could take your LiteralEnum and fill the gaps there. But it would be way nicer if parametrizing by the enum directly + exhaustiveness checks worked (again, possibly out of scope, but in any case worth mentioning as a future extension):
```python
class Option(LiteralEnum):
    foo = "foo"
    bar = "bar"
    baz = "baz"

class A[T: *Option]:  # new syntax for a constrained TypeVar
    @overload
    def f(self: A[Option.foo]) -> something: ...
    @overload
    def f(self: A[Option.bar]) -> something_else: ...
    # exhaustiveness check:
    # ❌️: no matching overload found for A[Option.baz]
```
3. Interaction with TypeVarTuple
Since LiteralEnum allows iteration, and its members are all statically known,
we should be allowed to use typing.Unpack on it, e.g.:
```python
class Option(LiteralEnum):
    foo = "foo"
    bar = "bar"

type as_tuple = tuple[*Option]  # tuple[Literal["foo"], Literal["bar"]]
```
I’m not sure where this could be useful yet, but it seems like natural/expected behavior.