We may need better specification for existing and future refinement types in the type system

Right now, there are a few type forms in the type system that act as refinement types, though that term has not been formally introduced to the Python type system.

Refinement types are types that are constrained by their value in some manner. The examples of these that I’m aware of in the type system at this point are Literal and LiteralString. Neither of these actually specifies a different type, but type checkers currently attempt to handle them using subtyping alone. Both specify properties of the value (that it is a literal, or a specific literal, or in the case of LiteralString that it was constructed only from other string literals). This causes problems when interacting with variance in situations where only the type is what is cared about (see: Definition of `str.split`, `str.rsplit` and `str.splitlines` results in false positive errors · Issue #10887 · python/typeshed · GitHub).

I think it’s worth adding some language to better support this (and other potential future cases, like a potential type u8 = int v where v >= 0 and v <= 255, which it appears certain libraries like pydantic are currently forcing Annotated to do the work of).
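For reference, the closest thing today is runtime-validated metadata; a rough sketch of the pydantic-style approach (the alias name U8 is mine):

from typing import Annotated

from pydantic import Field

# Type checkers see this as plain int; the ge/le bounds are metadata
# that only pydantic enforces, at validation time.
U8 = Annotated[int, Field(ge=0, le=255)]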

While the potential future case definitely requires entirely separate consideration (and is probably only reasonable to allow for basic comparisons against statically knowable values), there is also the case of what is already in the type system, and I’d like to take a moment to improve how type checkers handle that.

The relatively simple language here would be “When something only cares about the type, not the value constraints, type checkers may allow the use of something with the correct type that has its value constrained”

The use of “may” there is intentional.

It’s obvious that

x: list[str] = []
...
x.extend("This that".split())

should be fine even though “This that” here is a LiteralString and not a plain str. Every LiteralString is exactly of type str, just with constraints on how the string was sourced. Similarly, it should be fine to drop the constraint implicitly at a sole assignment:

s: list[str] = "this that".split()  # no longer list[LiteralString]

However, the use of “may” allows denying some more dubious cases that have type safety issues, for example:

def x(vals: list[list[str]]):
    for l in vals:
        l.append(input())

def y(val: list[LiteralString]):
    x([val])  # this should error: type safety issue above; the caller of y got a non-literal added to their list

A type checker could handle this slightly more intelligently by maintaining a directed graph in memory to track type variable use, or by understanding which type forms involve mutability. But with what is currently allowed in Literal and LiteralString, it is trivially safe to drop value-based constraints when the substitution only affects the innermost type. The dubious case above involves not only substituting LiteralString for str, but list[LiteralString] for list[str], in a way that leaves other references still expecting list[LiteralString].
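To condense the two cases into one sketch (the function and parameter names here are hypothetical stand-ins):

from typing import LiteralString

def demo(s: LiteralString, strings: list[LiteralString]) -> None:
    x: str = s              # safe: only the value's own constraint is dropped
    y: list[str] = strings  # unsafe in general: other references still expect
                            # list[LiteralString], while y permits appending any str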


It seems true that

x: list[str] = []
...
x.extend("This that".split())

is perfectly fine, but I’m not sure there’s a way to know that this is so just from the type signatures in the current type system.

If we have

class str:
    ...
    def split(self) -> list[LiteralString]: ...

then there’s nothing in the type signature itself to guarantee that we aren’t creating another reference to the result as a side effect. For example, we could in principle be assigning the value to some global, call it

last_split_string: list[LiteralString]

before we return it from str.split.

And if we did, then allowing us to assign the return value to a list[str] could cause problems, because we’d then be able to inject non-literal strings into last_split_string; in some scenarios that could lead to a security vulnerability, which off the top of my head is one of the main use cases of LiteralString.

If we had a way of marking the result as an “owned” list, meaning we guarantee that the return value is the only reference to the list, then we could treat it as covariant. I’m not actually sure whether str vs LiteralString is an important special case here; I think the same considerations would pop up with any subtype relationship, e.g. an owned list[int] could in principle be assigned to a list[int | str] safely.
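As a sketch of what that could look like, with a hypothetical Owned marker (not an existing typing construct; today the alias below means nothing special to any checker):

from typing import LiteralString

# Hypothetical: a checker could treat Owned[T] as "the callee returns
# the sole reference to T".
type Owned[T] = T

class Str:  # stand-in for builtins.str, to avoid redefining a builtin
    def split(self) -> Owned[list[LiteralString]]:
        return []

# Today's checkers reject this under invariance; an Owned-aware checker
# could accept it, since no other reference to the list can exist:
s: list[str] = Str().split()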

Under the hood I think it’s possible some checkers are already doing something like this for the special case of literals, which are always owned; this is one of the ways that a type checker could know that

x: list[int | str] = [5]

is okay, even though naive type inference might conclude that there’s a variance issue.
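A sketch of the difference as a checker with bidirectional inference typically sees it (exact behavior varies by checker):

x: list[int | str] = [5]  # OK: the list display is typed directly as list[int | str]

y = [5]                   # inferred in isolation as list[int]
z: list[int | str] = y    # error under invariance: list[int] is not list[int | str]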

To do the same for a non-literal value like "this that".split() is trickier because we have to pull the type from a type declaration, and there’s currently no way to specify that the returned value is the only reference to itself.


Hopefully that made some sense. I’ve been thinking a bit about this because Pyre currently deals with literals by treating one level as covariant, but our solution isn’t composable so we still throw errors on nested literals, e.g.

# this is okay, we have a "weaken mutable literals" hook to treat it covariantly...
x: list[int | str] = [5]  
# ... but our hook only goes one level deep, so we choke here
y: list[list[int | str]] = [[5]]

Good point there. Showing it with list.append and a single literal is unambiguous at least (well, at least if a type checker understands that str, int, and enum values, True, False, and Ellipsis are all immutable), but maybe we also need something in the type system to communicate that a function is returning the only reference to an object it creates, to handle parts of this.

In that case, though, I don’t understand why Literal and LiteralString are special. If the type checker knows that .split() always returns a new list, isn’t it also safe to allow x: list[object] = some_string.split()?

Yes, I think that the underlying issue is not related to str vs LiteralString per se; it’s entirely about the fact that in certain contexts it could be possible to upcast otherwise-invariant types.

I’m not proposing we actually add ownership to the type system, just observing that it could enable a lot of common (and safe-at-runtime) patterns where people get invariance complaints today.

Adding ownership would be a big change, so we’d have to see huge benefits. A big drop in false positives would be one benefit, but I doubt it would be enough on its own. Maybe if GIL-less Python leads to a lot more concern about thread safety, this would become more likely to be worthwhile.

The case with list was obvious-ish, but using list.extend made it less clear why constraining on value should be treated differently from constraining on type.

from enum import IntEnum

class Status(IntEnum):
    active = 1
    postponed = 2
    finished = 3
    failed = 4
    

class SomeGeneric[T]:

    def __init__(self, obj: T):
        self._wrapped_obj: T = obj

    def get_original_obj(self) -> T:
        return self._wrapped_obj

x: SomeGeneric[int] = SomeGeneric(Status.active)

The ownership bit would help with mutable containers, but there are places where the value constraint doesn’t matter to what is happening. This would become significantly more apparent if we had more refinement types

in some easily expressible form like:

type u8 = int v where v >= 0 and v <= 255

rather than hacked together with Annotated, and if CFFI APIs were typed accordingly. But it still comes up today with enums, as well as string and int literals, in a few places.

I think your example code block is missing something, like creating an instance of SomeGeneric? Otherwise I don’t see what it’s supposed to point out; it’s just unambiguously a type error, because Status and SomeGeneric are incompatible no matter what the result of this discussion is.

And even if we add the call (i.e. SomeGeneric(Status.active)), isn’t this still clearly OK? SomeGeneric[Status] is assignable to SomeGeneric[int], since SomeGeneric isn’t mutable (within the visibility of the type system, at least).
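For what it’s worth, the assignment can be made explicitly legal today by declaring the type variable covariant; a minimal sketch with the older TypeVar syntax, where keeping the attribute read-only is what makes covariance sound:

from enum import IntEnum
from typing import Final, Generic, TypeVar

class Status(IntEnum):
    active = 1

T_co = TypeVar("T_co", covariant=True)

class SomeGeneric(Generic[T_co]):
    def __init__(self, obj: T_co) -> None:
        self._wrapped_obj: Final = obj  # never reassigned, so covariance stays sound

    def get_original_obj(self) -> T_co:
        return self._wrapped_obj

x: SomeGeneric[int] = SomeGeneric(Status.active)  # OK: SomeGeneric is covariant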

Fixed, sorry.

I’ll take some time later to decouple real-world code impacted by this from information I can’t share; the toy examples aren’t doing this justice, since where this is enforced right now by existing type checkers is slightly inconsistent. There’s nothing currently in the specification that allows:

some_list_of_str.append(some_LiteralString), and the special casing being done here to allow this in an unspecified manner is hiding some of the problems with this not being specified.

There’s nothing currently in the specification that allows some_list_of_str.append(some_LiteralString), and the special casing being done here to allow this in an unspecified manner is hiding some of the problems with this not being specified.

There’s definitely no special-casing being done in pyright for this particular case, and I don’t see any reason why it shouldn’t be allowed. This case is no different than appending an int value to a list[float], which is perfectly valid.

def func1(x: list[float], y: int):
    x.append(y)

def func2(x: list[str], y: LiteralString):
    x.append(y)

If it’s not special casing, then it’s still not following the spec as currently written. The current behavior of allowing that is desired and useful, but it isn’t justified by subtyping behavior alone when appending to a list (hence wanting to improve the language around it). As shown above, improving this specification would not help in some cases where we need a concept of a temporary assignment or some other “no references retained” mechanism, but there are still some things underspecified here, possibly falling through the cracks due to other things falling through in similar spaces, like:

This case is no different than appending an int value to a list[float], which is perfectly valid.

That actually shouldn’t be valid, and it is different:

>>> def f(x: list[float]):
...     print([i.hex() for i in x])
...
>>> f([1.0, 2.0])
['0x1.0000000000000p+0', '0x1.0000000000000p+1']
>>> f([1.0, 1])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
AttributeError: 'int' object has no attribute 'hex'

Appending an int to a list of floats is type-unsafe.

That’s due to the special case where the type system considers int a subtype of float. Whether that implicit promotion is a good idea is debatable, but I don’t think that’s what this thread is about.

In general it’s safe to append() an instance of a subtype to a list of a base type.

Perhaps I am missing something here. I’m not in disagreement about this, but I do disagree that this is what is currently specified, at least as I understand it and from the references I could find.

While I’d like any effort on this to be capable of improving some of the thornier cases (comments above showed we need more than that here), I would like to clear this up independently of that, for the sake of a clear specification, since I think there’s a chance that not doing so will make future additions some people are working on much more complex to reason about.

I think I need some time to work out specific examples and reference what specifications we currently have on this. My apologies in advance if this is better specified than my understanding suggests and I’ve just missed something; I’ll follow up later.

Can you elaborate? Are you referring to the typing spec, or some other specification (like the official documentation for the list class)?

I’m not aware of anything in the typing spec that would preclude this. Consider the following example:

class Parent: ...
class Child(Parent): ...

def func(x: list[Parent], y: Child):
    x.append(y) # No type error

The method append is defined as follows in the list class:

class list(MutableSequence[_T]):
    ...
    def append(self, __object: _T) -> None: ...

When a type checker encounters the expression x.append, it binds the method to list[Parent] and partially specializes its type to (__object: Parent) -> None.

Passing a Child object to this call is not a type violation because Child is a subtype of Parent.
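You can observe that binding directly with reveal_type (reusing Parent and Child from above; the exact output wording varies by checker):

from typing import reveal_type  # Python 3.11+; checkers also accept it without the import

p: list[Parent] = []
reveal_type(p.append)  # roughly: (__object: Parent) -> None
p.append(Child())      # OK: Child is assignable to Parent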

PEP 484 specified:

By default generic types are considered invariant in all type variables, which means that values for variables annotated with types like List[Employee] must exactly match the type annotation – no subclasses or superclasses of the type parameter (in this example Employee) are allowed.

PEP 695 did not amend this, but did elaborate further.

The current living specification claims:

By default generic types declared using the old TypeVar syntax are considered invariant in all type variables, which means that values for variables annotated with types like list[Employee] must exactly match the type annotation – no subclasses or superclasses of the type parameter (in this example Employee) are allowed. See below for the behavior when using the built-in generic syntax in Python 3.12 and higher.

The specialization that happens there isn’t following what is specified, as I’m reading it, even if we can reasonably say it is safe to allow.

This might be best split into multiple topics; there are actually four issues in this thread now. Based on how things were presented, we’ve picked up some tangents that are related but are making the discussion slightly unfocused.

  1. The original bit on better specifying the rules around refinement types. This might be the least important one, or if not, the second least important.

  2. The ability to put into the type system “I’m returning the only reference to this”, which could greatly improve developer ergonomics in a handful of specific cases.

  3. The numeric tower is a bit flawed for subtyping behavior.

  4. The specification does not appear to me to match the current behavior for list invariance. The behavior appears more correct than the specification, and most people seem to agree the current behavior is fine, so this could just be a clarity-of-wording situation. I am not advocating that we change the behavior to match my reading of the specification’s language.


I think the fact that append accepts a subtype of _T matches what is specified. (Invariance refers to how list relates to _T, not to _T itself. If _T is str, str is not the invariant type.)

What you’re seeing in this specification is a lack of clarity about what “invariant” means. But the Python specification is not the only source on what “invariant” means; there are other sources for understanding invariance, like Wikipedia or books on the Liskov Substitution Principle.

x: List[Employee] = y
What invariance means is that the type checker must see y as exactly List[Employee], not List[SubtypeOfEmployee].
That doesn’t mean that y can’t contain SubtypeOfEmployee instances.

If List is not invariant (if it is covariant), that means y can be something that the type checker sees as List[SubtypeOfEmployee].
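Concretely, with hypothetical Employee and Manager classes:

class Employee: ...
class Manager(Employee): ...

def takes_employees(y: list[Employee]) -> None: ...

managers: list[Manager] = [Manager()]
takes_employees(managers)  # error: list is invariant in its element type

employees: list[Employee] = [Manager()]  # OK: a list[Employee] may contain subtype instances
takes_employees(employees)               # OK: the declared element type matches exactly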

If the Python specification changed to make this more clear, it wouldn’t be changing what is specified. It would only be communicating a better understanding of what is specified.


I’ve had people repeatedly tell me not to use the academic definitions and to stick to the Python ones. I agree that the academic definitions allow it. As I’m reading the ones specified, though:

By default generic types declared using the old TypeVar syntax are considered invariant in all type variables, *which means that* values for variables annotated with types like list[Employee] must exactly match the type annotation – no subclasses or superclasses of the type parameter (in this example Employee) are allowed.

Emphasis mine; the “which means that […]” clause appears to define invariance.

The example specifically points to the inner T (Employee) and says that for all associated uses of it (which to me includes where it occurs in bound methods), no subclasses or superclasses are allowed.

I’ve come into a few of the older issues and have been cross-checking them against the specification while working on a type checker to explore a few ideas.

We should just clarify the spec here. That language is meant to say that list[Child] is not compatible with list[Parent]. It should be allowed to do list[Parent].append(Child()), as Eric explained above and as all type checkers should allow.


I agree that this wording does kind of make it look like (in my example) y shouldn’t contain any SubtypeOfEmployee.

The wording could be improved.

If we want to rephrase the spec, I think one thing that might be making the wording confusing is the emphasis on “values” in

… which means that *values* for variables annotated with types like list[Employee] must exactly match the type annotation …

(emphasis mine)

I think the confusion can happen because static types aren’t actually properties of values, but of terms in a program. It’s true that types constrain what values the runtime should produce, but the two concepts are not 1:1 and generic values usually have ambiguous static types.

For example, the value [Child()] is perfectly consistent with the static types list[Child] or list[Parent], which is why a type checker with good inference will allow that literal to be assigned to either.

But a function call returning list[Child], which will return a value that could have been typed as list[Parent] if it were a literal, is not compatible with list[Parent]. That’s a property of the term (the fact that it’s a function call means we can’t prove there aren’t other references to the same value) rather than the value being returned itself.
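A minimal illustration of that distinction (make_children is a hypothetical function):

class Parent: ...
class Child(Parent): ...

def make_children() -> list[Child]:
    return [Child()]

x: list[Parent] = [Child()]        # OK: the display itself can be typed as list[Parent]
y: list[Parent] = make_children()  # error: the call's type is fixed at list[Child],
                                   # and other references to its result could exist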

I’m not sure how to best word that, since talking about types as representing classes of runtime values is a great way to teach beginners about types. It just turns out that treating that heuristic as the definition of types is actually not quite correct once you get into generics.