PEP 767: Annotating Read-Only Attributes

100% agree :slight_smile: I did say that I agreed with you in the first line of my comment.

The only reason I brought this up was to guess where @Eneg was coming from. That kind of purism is, to me, a wish for a simpler Python world. Unfortunately that world doesn’t exist. Just trying to help explain the gap to interested readers.

1 Like

I sourced this restriction from C#’s implementation of readonly.

I don’t know much about C#, but one thing I have just checked is that in C# a class can have multiple different constructors, including private ones. In Python you cannot have multiple different __init__ or __new__ methods in a single class, and both of those methods are always public. That means that limiting initialisation to these methods is incompatible with any way of implementing a private constructor in Python.
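For readers less familiar with the contrast: the usual Python stand-in for C#’s multiple constructors is alternative classmethod constructors, which are still public. A small sketch (the Point class and from_polar name are invented for illustration):

```python
import math

class Point:
    def __init__(self, x: float, y: float) -> None:
        self.x = x
        self.y = y

    @classmethod
    def from_polar(cls, r: float, theta: float) -> "Point":
        # An "alternative constructor"; unlike a C# private constructor,
        # nothing stops external code from calling __init__ directly.
        return cls(r * math.cos(theta), r * math.sin(theta))

p = Point.from_polar(2.0, 0.0)
print(p.x, p.y)
```

Privacy here is only the leading-underscore naming convention; there is no enforced private constructor to which initialisation could be restricted.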

1 Like

I’ve updated the PR to remove the restrictions on internal mutation of read-only attributes.

re: internal mutation

What should a type-checker say about this?

from typing import ReadOnly, reveal_type  # ReadOnly needs Python 3.13+

class B:
    x: ReadOnly[int] = 4

    @classmethod
    def foo(cls, b: "B", x: int) -> None:
        b.x = x


class C(B):
    x: ReadOnly[bool] = True


c = C()
c.foo(c, 4)
reveal_type(c.x)
assert isinstance(c.x, bool)

Actually, that’s over-complicated.
Simpler:

from typing import ReadOnly, reveal_type  # ReadOnly needs Python 3.13+

class B:
    x: ReadOnly[int] = 4

    def foo(self, x: int) -> None:
        self.x = x


class C(B):
    x: ReadOnly[bool] = True


c: C = C()
c.foo(4)
reveal_type(c.x)
assert isinstance(c.x, bool)

It’s not something inherent to ReadOnly, is it?

You can remove ReadOnly from those examples and the same problem surfaces.

For the record, I believe I’ve pointed this out in the initial PR (comment), and you even responded to it (comment)

1 Like

The difference is that pyright currently would reject that code without ReadOnly:

$ mypy q.py
q.py:16: note: Revealed type is "builtins.bool"
Success: no issues found in 1 source file
$ pyright q.py

~/q.py
  ~/q.py:11:5 - error: "x" overrides symbol of same name in class "B"
    Variable is mutable so its type is invariant
      Override type "bool" is not the same as base type "int" (reportIncompatibleVariableOverride)
  ~/q.py:16:13 - information: Type of "c.x" is "bool"
1 error, 0 warnings, 1 information 

With a little modification both mypy and pyright will accept it though:

from typing import reveal_type

class B:
    def __init__(self, x: int) -> None:
        self.x = x

    def foo(self, x: int) -> None:
        self.x = x

class C(B):
    def __init__(self, x: bool = True) -> None:
        self.x = x

c: C = C()
c.foo(4)
reveal_type(c.x)
assert isinstance(c.x, bool)

With pyright, attributes assigned in __init__ are typed differently from attributes declared explicitly in the class body. Personally I find this quite annoying because I want the type-checker behaviour seen here with __init__, but I want to declare the attributes in the class body. In general I prefer annotating attributes in the class body, and it is also necessary when using __new__ rather than __init__.
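A minimal side-by-side sketch of the two declaration styles being contrasted; the comments reflect the mypy/pyright outputs quoted in this thread, not a general specification of either tool:

```python
class DeclaredInBody:
    x: int = 4  # declared in the class body: pyright treats it as invariant

class NarrowedInBody(DeclaredInBody):
    x: bool = True  # pyright: reportIncompatibleVariableOverride

class AssignedInInit:
    def __init__(self, x: int) -> None:
        self.x = x  # type inferred from the __init__ assignment

class NarrowedInInit(AssignedInInit):
    def __init__(self, x: bool = True) -> None:
        self.x = x  # both mypy and pyright accept this narrowing

print(NarrowedInInit().x)
```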

It seems clear that pyright makes this __init__ exception because of how many false positives it would otherwise generate, but I see these false positives all the time. The primary reason I want ReadOnly is precisely to tell pyright to silence the error shown above. Of course it is useful to be able to signal to other code not to mutate the attribute, but in practice I don’t find that other code mutating attributes that are not supposed to be mutated is a real problem.

Additionally, I want to be able to mix class attributes, instance attributes and properties when using ReadOnly, and this is something that mypy and pyright both reject:

class A:
    is_thing: bool = False

class B(A):
    # mypy and pyright don't like this:
    @property
    def is_thing(self) -> bool:
        return True

class C(A):
    def __init__(self, is_thing: bool = False):
        self.is_thing = is_thing

a: A = B()

if a.is_thing:
    print("a is a thing")

Would the PEP allow that to work with is_thing: ReadOnly[bool]?
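As a sketch of what that annotated version could look like (runtime-safe via deferred annotations, since ReadOnly for attributes is not implemented at runtime yet; whether type checkers accept the property override is exactly the question being asked):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # PEP 767 proposes ReadOnly for attributes; with deferred annotations
    # nothing here needs to exist at runtime.
    from typing import ReadOnly

class A:
    is_thing: ReadOnly[bool] = False

class B(A):
    @property
    def is_thing(self) -> bool:
        # Overriding a read-only attribute with a property.
        return True

class C(A):
    def __init__(self, is_thing: bool = False):
        self.is_thing = is_thing

a: A = B()
print(a.is_thing)
```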

There are all kinds of cases like this where it is possible for a subclass to do something weird but worrying about that is misplaced. Right now it is not possible to put correct type annotations on correct code and have the type checkers agree that the code is correct. That is the primary thing that needs addressing.

Type checkers are most useful at checking things at the boundaries like function signatures and the class interface rather than checking the internals of the class. There should be some way to say “instances of A have a readable attribute is_thing” and the need for that is far more important than having type checkers try (inevitably in vain) to guarantee that no (often hypothetical) subclass could ever do something strange with the attribute.

1 Like

Ah yeah, overlooked variance here.

I think these examples show a general need for type checkers to track the usage of an attribute, and error on incomplete overrides (like @beauxq’s C class not overriding .foo).

Or perhaps we need better tools to indicate the relationship between an attribute’s type and any methods that work on that specific type.

I think a definition of the classes in the examples that better reflects their usage would look more like this:

class B[T: int = int]:

(Um, I can’t insert a newline? Hope you can see where this is going)
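A hedged guess at where that truncated definition was going (the `= int` default would need PEP 696 syntax on Python 3.13+, so an explicit TypeVar is used here): tie the attribute’s type to the same type variable as foo’s parameter, so a subclass that narrows x narrows foo along with it.

```python
from typing import Generic, TypeVar

T = TypeVar("T", bound=int)  # "= int" default omitted for portability

class B(Generic[T]):
    x: T  # would be ReadOnly[T] under PEP 767

    def __init__(self, x: T) -> None:
        self.x = x

    def foo(self, x: T) -> None:
        self.x = x

class C(B[bool]):
    def __init__(self, x: bool = True) -> None:
        super().__init__(x)

c = C()
c.foo(True)  # foo's parameter is now bool, so the unsound c.foo(4) is rejected
print(type(c.x) is bool)
```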

Yes, the Subtyping section states that a read-only attribute can be overridden by a descriptor or a normal attribute.

But in my example here, the override is complete.
That’s the difference from the linked initial PR.

It looks like this problem comes up a lot more easily with internal mutation allowed.
I don’t think ReadOnly should allow the internal mutation.
I also think allowing internal mutation will be unintuitive for a lot of people.

1 Like

basedpyright doesn’t accept it, because of this unsafety.

Imagine class B is in a library that the author of C doesn’t control.
Why would the author of B see any need to change it to your

class B[T: int = int]:

?
Do you expect that type checkers would tell someone who has implemented this

class B:
    x: ReadOnly[int] = 4

    def foo(self, x: int) -> None:
        self.x = x

that they should implement it as the generic instead?
If so, that should be in the PEP.
(I don’t think that’s a good direction to go. Better to not allow internal mutation.)

2 Likes

I don’t see what you mean by saying that the override is complete. I would say that your subclass is broken because it should have overridden foo as well:

class B:
    x: ReadOnly[int] = 4

    def foo(self, x: int) -> None:
        self.x = x

class C(B):
    x: ReadOnly[bool] = True

    def foo(self, x: int) -> None:
        self.x = bool(x)

This is the point about external vs internal though. A class encapsulates its data through the combination of all of its methods. It is unreasonable to expect that subclasses can override arbitrary combinations of methods without breaking things.

I don’t think that’s unreasonable. It’s very common (in every language with inheritance I think) to override some things without overriding other things.

Note that I referred to overriding arbitrary combinations of methods. If you are going to override the superclass and change the type of a data member (not even possible in many languages) then you are going to need to override other methods that access the attribute as well. It can be valid to have a subclass like this, but only when you are in control of both classes, or when the superclass was carefully designed to allow this kind of inheritance and the subclass follows the specified inheritance contract (if you override A then you must also override B, etc.).

Not every way of subclassing a class is valid, but it will never be possible for type checkers in Python to control that fully. What is more important is having a type system that supports writing reasonable code even if there will always be some holes. This is what I meant above.

I acknowledge the value of something that would say an attribute is read-only externally, while still being mutable internally.

But there is also a need to have covariant overrides of attributes.

These 2 cannot be the same thing safely, and should not be the same thing.

You’re asking people to check manually something that can be checked automatically. That’s what type checkers are for.

They can be the same thing if the classes are designed to work together. Generally I would say that all inheritance is dangerous if the classes were not designed for it, and I don’t see type checkers ever being able to check that it is done correctly. There is much more to it than just the types, because the code in a subclass fundamentally breaks the internal/external boundary of the superclass.

Even if an object is “immutable” there are still good reasons to initialise it in a method that is not __init__ or __new__. The problem with restricting internal assignments is that, like many typing features in Python, it is a design that disallows much existing valid code. The end result is not really type safety but just a proliferation of type: ignore.
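One common pattern of this kind (Record and from_dict are invented names for illustration): an object that is conceptually immutable after construction, yet is populated in a classmethod rather than in __init__ or __new__. Restricting assignments to those two methods would flag this code.

```python
class Record:
    name: str   # conceptually read-only once construction finishes
    size: int

    @classmethod
    def from_dict(cls, data: dict) -> "Record":
        obj = cls.__new__(cls)        # bypass __init__ entirely
        obj.name = data["name"]       # internal assignment during setup
        obj.size = int(data["size"])
        return obj

r = Record.from_dict({"name": "a.txt", "size": "10"})
print(r.name, r.size)
```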

Too often, people try to use the excuse that “type checkers can’t check everything” (or can’t check something more general), when type checkers CAN check pretty well the specific thing we’re talking about here.

To me, type: ignore is a signal that I should be suspicious of that code.
And I do think people should be suspicious of a method that changes a ReadOnly attribute.
Maybe the method was intended to be used only for initialization. But later someone changes the code to call the method somewhere else.
When someone is investigating this bug, the type: ignore can help bring attention to that.

1 Like