A more useful and less divisive future for typing?

Formalizing the parts of the type system that aren’t yet formalized. That means explicitly choosing the set-theoretic model of subtyping (notably, this is entirely compatible with the current subtyping behavior that type checkers use but is not specified; other definitions could be chosen that would have different outcomes), clarifying the definition of “Any” in Python typing, since the ability to subclass it that was added recently isn’t well defined, and possibly either adding an explicit auto or Unknown if we don’t want the behavior of untyped code to change for existing type checkers, or accepting that type checkers may rely on inference for more than they currently do. If auto/Unknown is wanted, that would make the kind of inference I’m describing opt-in, but still relatively easy.

I’m optimistic that it’s possible and can describe a path that could get us there, but I don’t think it’s an easy path, and it would require buy-in from existing type checkers to avoid fragmenting the typing ecosystem. If more is “just detected” and some type checkers don’t buy into this, it will lead to conflict between libraries that would prefer a type checker that does this and libraries that would stick with the status quo. Maybe that’s okay long term and the ecosystem would self-resolve toward whatever is more ergonomic and useful, but in the short term I don’t think it would be good for end users of libraries sitting on differing type checkers, unless this is explicitly intended to be supported by the type system.

Both of these points also feed into the desire for formalization that’s been expressed here, and into the desire for end-user stability in typing: even if this would be a large improvement for most users, we can’t just up and do it either.

Alternatively, you could say that a proof-of-concept implementation that assumed the needed changes to the typing system would make an excellent demonstration of the power of this alternative approach. I imagine it would be much easier to get buy-in from the typing community and the existing type checkers if there were working code that showed the benefits, as opposed to simply trying to sell the community on a potentially useful but somewhat theoretical-sounding proposal.

Of course, I don’t have the first clue how much effort would be involved in writing such a proof of concept, so maybe this is an impractical suggestion.


I would struggle to give an accurate estimate here, but my best guess is that this would be 16 forty-hour weeks for me specifically as a single developer, and that the work would not divide easily enough to decrease linearly with more developers. Anyone else working on this would need a prior understanding of type theory, or would need to add in time to study what they are missing.

It’s not impossible for someone to do it more quickly if they had a good enough understanding of an existing type checker (for Python or otherwise) to know where to tweak it to get the desired behavior, but I don’t have that level of deep understanding of the codebases of existing type checkers.

Eric has mentioned elsewhere that pyright already constructs a virtual intersection internally in some cases, so if someone were to go this route, it seems pyright might already have some of the mechanisms required; it would then be a matter of leveraging them in more places, plus handling the creation of inferable protocols. But I’m not sure.

All of that is to say, it wouldn’t be a small undertaking and I don’t know if I’d be able to put that much of my own free time into this.

This wouldn’t be the only potential benefit of formalization and of intentionally choosing self-consistency in the type system, but it is an example of what self-consistency as a goal could enable longer-term. I think we can appropriately move more of the complexity of Python’s typing into tooling and away from most library authors, application developers, and end users. I’ve seen many people argue that making typing easier and making it more formally correct are opposing goals, but I think that prediction rests on assuming that a more formal system requires users to be more formal themselves, rather than allowing tooling to use the consistency of the system to know more on users’ behalf.

I’ll try to think about this more later, to see whether there’s an easier path to show that these are incorrectly assumed to be opposing goals, or whether there’s a way I could work on this that would require less time.


But what form would that take? Are you envisioning changes to what is provided in the typing module? Or just documentation?

Perhaps I’m demonstrating my ignorance here, but my understanding is that typing as provided by Python (not mypy or other third-party tools) is essentially nothing but those two things: the typing stdlib module and documentation (in which I include standards such as those promulgated as PEPs). What I’m not understanding is where what you describe fits into that. Are you envisioning a PEP that says something like “we hereby adopt the set-theoretic model of subtyping”?

I’m not sure I see how that’s avoidable anyway. Even if some declaration were made, it can’t change what people are already doing with types. And it seems unlikely that existing tools would suddenly switch to a new way of doing things. So there will always be some period of coexistence of new and old ways.

This isn’t to say that what you’re describing is impossible, but just that I don’t see how the way to bring it about would be different from the way similar changes happen in other parts of the Python stdlib: someone makes a library called better_typing.py that defines some new set of types and/or type analysis tools, and someone writes a better typechecker that uses that system, and then eventually everyone realizes it’s so good it gets brought into the stdlib (or the existing typing tools are adapted to mimic it). To me, what you’re saying sounds more like you want some kind of official declaration that everyone should henceforth do typing a certain way. Am I misreading you?

So would the same amount of effort be needed if your desiderata above were realized? Would the 640 person-hours of work you describe be in effect a “down payment” on the work required to bring the system to full fruition (as opposed to being “wasted” on a proof of concept that can’t be fleshed out into a full-featured tool)? That sounds like a good deal of work! And if that amount of work is required just for a proof of concept, it seems like the benefits of the changes would need to be quite large in order to make it worth it to implement the full system.

Plus, personally, I think I’d be a little leery about a proposed change to Python itself at a stage where things are still 640 person-hours away from even being able to demonstrate the benefits. That sounds to me, again, like a case where the normal “put something on PyPI first” approach is more warranted — which is slow, but thereby allows things to proceed on a parallel track without impacting core Python functionality (or docs) until the gains are clearly established. Are there any existing Python typing tools that give us a glimpse of what you’re saying would be this better future?


+1 on that.

I’d specifically disagree with Michael’s assertion about “Some people …, and people who …”.
I clearly belong to the 3rd or 4th class of Python users: I use type hints as hints. Python typing is whatever the Python object model allows, and folks, myself included, traditionally follow duck typing in their code. I’m happy to add type hints to my code, but that doesn’t mean I want things to be stricter. ¯\_(ツ)_/¯


The specification (at the language level, currently just a collection of PEPs) describes the type system; type checkers (of which there are several) then choose what to do with it. There’s more to say about how the standards haven’t actually spelled out everything that’s important or relevant to a type system, and about the good reasons not to have diverging behavior here, or at least not largely diverging behavior.

No, anything I wrote here wouldn’t necessarily translate into the codebases of existing type checkers.

Well, the changes I’d need are small and non-breaking, and they only describe the type system, not runtime behavior. There’s no change to CPython needed, only to the specification of typing behavior. The changes just make one behavior better defined and explicitly allow more from the type system.

  1. Define subtyping for Python using the set-theoretic model. This model is 1:1 compatible with how type checkers currently treat subtyping but also covers cases the type system does not yet consider, such as Intersections. (Non-breaking, but gives a stronger foundation for future work)

  2. State that type checkers are allowed (not required) to make complex type inferences based on consistency of usage even in untyped code and narrow Any based on observed use.

  3. Type checkers may (not must) detect incompatible use of Any due to inconsistent use. (A sketch of what this could look like follows this list.)
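To make this concrete: under the set-theoretic model in item 1, A is a subtype of B exactly when every value of A is also a value of B, which matches how current type checkers already behave in practice. For items 2 and 3, here is a minimal sketch of the kind of optional inference being described; nothing in it would be required of any type checker, and the function and calls below are hypothetical illustrations rather than proposal text:

def handler(event):              # untyped: today "event" is effectively Any
    name = event.name            # observed use: event has a .name attribute
    count = event.count + 1      # observed use: event.count supports + with an int
    return f"{name}: {count}"

# Under item 2, a checker may infer an anonymous protocol for "event"
# ("has .name; has a .count that supports + int") instead of leaving it as Any.
# Under item 3, it may then flag a call that contradicts that observed use:
# handler(42)                    # int has no .name or .count; this passes silently today

Both behaviors stay optional, so a type checker that does none of this remains conformant.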

For people not using a type checker, nothing changes. People using a type checker may see more errors reported if their type checker can suddenly detect more issues. It would be on type checkers to keep striking the balance between useful errors and false positives spoken about above :smile:

Those are the only changes needed here for this, and I think having these specified is a good thing even if this never fully materialized. It’s one possible benefit of actually having stronger definitions that enable more.

Having an agreed-upon definition of subtyping is pretty important for a type system (we don’t currently have one).

The second opens the door for type checkers to do more with untyped code for users without violating, if not the letter of the specification, then at least its original spirit, which did not seem to envision there being a good case for this.

I think people overestimate the changes required for this, and it’s not even my primary goal in this discussion. That goal is to point out the potential benefits we can gain over time from having better definitions and consistency among those definitions, and that stronger definitions do not inherently conflict with the other end of the spectrum of typing usage.


I’m sorry, I was on vacation last week and have had to skip some discussions. In this thread there’s a lot of reference to a “set-theoretic” type system for Python (IIUC it is Michael H’s proposal?) but from the one example given (about a function that uses x = x+1; print(x)) I cannot understand the proposal (other than that it might be going back to duck typing, and proposes a technique that AFAIK is used by pytype). Maybe someone can link me to a thread or blog post where this is discussed in more detail?

The example was mine, and it was meant to counter the idea that making the type system more “set theoretic” would mean people would have less need to deal with complex types. My intention was precisely to make the point that type inference isn’t going to be able to handle duck typing, so there will always be a need for people to explicitly write types, and therefore being able to express “reasonable”[1] type constraints concisely and understandably is important.

Sorry - my example wasn’t particularly good, and my follow-up didn’t help much as I rapidly got out of my depth with the talk of set-theoretic type inference…


  1. whatever that means! ↩︎

I would say: Go for it. Why not? Put up a description of the system or, better, an alternative, improved implementation of the typing module (fixing some of the inconsistencies you have pointed to on GitHub). Then everyone can determine whether this actually works in practice and improves the current system.

The current discussion seems too abstract (too futuristic?) to me. (It’s getting close to the level of abstraction in David Beazley’s PyCon talk - 2021 - “No, not typing. Types.”, subtitled “a tone poem” - you can find it on YouTube. Scroll to about 37:20 for the TL;DR: “What was that?!” indeed :slight_smile: It’s a very interesting and amusing talk, definitely relevant to “typing”, I think, though I cannot really say I know what it was about, even after watching the whole thing.)
(Beazley also makes some interesting comments about exceptions, when asked about a comparison with exception handling in Rust: there is this strange thing that exceptions in Python seem to fall completely outside the typing stuff – at the very end of the talk: 46:40.)


I’d like to go back to the very title of this post and dig into something that was said in an earlier exchange. Paul called out the distinction between app developers and library developers – and he was under the impression that library development had been omitted as an important use case. It seems that was a misunderstanding, and I’m sure it can be cleared up further by diving deeper at a technical level. But. That exchange was born out of a very frequent situation, in which advocates of a feature or change in typing speak primarily from the perspective of application developers and ignore or do not prioritize the needs of library developers.

I recall this in particular happening with Unpack[TypedDict] for kwargs, and again happening in a small way with StrictTypeGuard/TypeNarrower.

In pursuit of a “less divisive” future, I would like to see PEPs more directly address library author use cases. That will also have the beneficial effect of socializing the norm that “library code is also relevant”.
That could be as simple as a statement that a feature does not have obvious utility for libraries, but we are hopeful for future usages, etc. Discussing the usages for typing behaviors in these two different contexts – application vs library – is, IMO, sure to be beneficial in almost all cases.


I was talking specifically about @mikeshardmind’s “set-theoretic” suggestions (as I understood them).

In terms of general typing, I think library development is covered fine. Sometimes I think people forget that library developers might want to define a permissive (in a typing sense) API, and doing so involves fairly “advanced” typing features. I say this because I’ve heard people state that “you shouldn’t need complex typing features in typical code” (or words to that effect). But the intention to cover library usage is clearly there, and I don’t want to complain just because people have a different perspective than I do.

In terms of @mikeshardmind’s proposal, he seemed to be suggesting that type systems should be able to infer most things without the developer’s intervention. In saying that, I feel that he had missed the library use case, and my “duck typing” example was an attempt to point that out. As a better example (I hope!) I don’t see how it’s even theoretically possible for a type checker to infer a useful type for

def low_bit(x):
    return x%2

The library author simply has to state their intent if the function is to be assigned any sort of usable type. After all, def low_bit(x:str) -> str is technically valid, but clearly not the author’s intent, here… And that brings us back to the type system having to make it easy for developers to express their intended types.
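For instance, both of the following are perfectly valid at runtime, which is exactly why inference alone can’t recover the intent (a small illustrative sketch, not from the original example):

def low_bit(x):
    return x % 2

print(low_bit(5))       # 1 -- the intended integer use
print(low_bit("%d"))    # 2 -- actually the string "2" via %-formatting; valid, but not the intent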


In isolation that seems definitely possible to me - and @mikeshardmind also suggested a way to do so - kind of reminiscent of the way Traits work in Rust (though Traits are not inferred). You would not infer one given native type, but a set of conformant types or traits, which itself would be part of the type algebra. But… if the type inferencer has to do this in the context of a complete program, where the input x also needs to be traced back, wouldn’t the general problem quickly lead to an explosion of exponentially many possible code paths? So, could we have a program that is both correct and fast enough? I doubt that very much. But that’s a challenge for Mike :slight_smile: (As far as I understood it, the “intersection” problem would come into play here, but I have a gut feeling that it maps to 3SAT or some other NP-complete problem…)

Despite this - having a type inferencer of some sort, even if not complete, or fast, might make tools like mypy more useful. But to see that, someone needs to write some prototype…


To be clear, I said “useful”. I would intend low_bit("%d") to fail to type check, not to infer a return type of str. Making that distinction is what I consider “not even theoretically possible” as there’s no way a type checker could know that I don’t want to include string-formatting meanings of the % operator.

Maybe I’m not being clear enough. My point is that the developer has to be the one to state distinctions like this, and to do so without over-constraining requires complex types. The type checker can’t infer stuff like this. Maybe intersection types would make expressing things like this easier, but I’m not yet convinced - I want to express “integer-like type with a % operator which will take an actual int on the RHS”, not “Any minus str”…

As for why I want to express this: the fundamental use case I’m thinking of is low_bit(x: int) -> int, but I don’t want to prevent people from using my library in ways I hadn’t expected. So I don’t want to over-constrain, because I don’t know what my users might want to do.

We’re very close to duck typing here, and honestly I’d be fine just leaving an API like this untyped so that duck typing can do its job. But people then ask for type annotations, so we come full circle. And at this point I’m very much repeating myself, so if you’re still not clear what my point is, we’ll probably just have to agree to differ.


It would not need to do so. The type checker would see that the modulo operator is applied, so for this function “any type that supports modulo” is the inferred “type” (in Rust this would be a non-inferred Trait). The type checker then tries to resolve those “types” in the context of the current program (however “program” is defined). For instance, if somewhere else the program calls low_bit(x) where x is a list, and the checker knows that lists are not part of that “any type that supports modulo”, then it can decide there is a conflict.
If lists supported a modulo operator, they would not be excluded.
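A hedged sketch of how that call-site resolution could look on fully untyped code (no current checker does this; the constraint in the comments is the hypothetical inferred one):

def low_bit(x):
    return x % 2          # inferred constraint: x must support % with an int

low_bit(7)                # consistent: int supports %
# low_bit([1, 2, 3])      # conflict: list has no __mod__, so a checker doing this
                          # inference could flag the call here (it also fails at runtime)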

The thing is also, if you don’t want to restrict usage, then you just don’t add type-hints. No one is arguing for no longer supporting duck-typing. But you cannot have it both ways here (you can of course in Python, I’m just being emphatic). It doesn’t really make sense to me to say: “I’ll restrict this to ints”, and also “I don’t want to exclude strings”. (Already if you do this now, you’ll run into trouble with mypy.)

So I regard it as an unfortunate effect of typing that a Python programmer can give type hints for a particular type while also wanting to give people the freedom to ignore them. I know this works, due to the dichotomy between static typing and runtime, and I know it’s baked into the current system. But when I see that a big external library (pandas, numpy) uses particular types, I do see them as a contract I have to follow and can only ignore at my own peril…

But I have never seen a concrete example of a widely used library, like the ones I mentioned, where the addition of type hints was not intended to constrain the supported types of function arguments.

I’m not sure, but the phrasing of your response suggests you may be missing the fact that strings do support a modulo operator. "%d" % 2 is valid and evaluates to the string '2'.

How could I forget that :scream: I guess the heat of discussion… :slight_smile:
(Also the operators are actually different, of course, it’s just that they share the same representation.)
No matter, certain other examples could be given… I’ve updated my post to use lists.

I agree with this, and it’s kind of where I was heading with my earlier comments. @mikeshardmind said:

If that is the case, I don’t see why anything needs to be changed to support this new type theory. It could be implemented now. And if it is, and is more useful than the existing typecheckers, then great. But if it’s not, then the changes to the standard will just be retroactively perceived as misguided. Either way, I don’t see how adding it to the standard before seeing how it’s useful would help matters.

I think part of what leads people to ask for type hints here would also be solved in a future where this is inferred, because on the user end their tools would no longer yell at them when they are using it in a way that works according to the detected duck typing. (Ideally, tools also wouldn’t show a generated protocol directly; they’d show “objects that support: %, +, /” (etc). I think this would be reasonably possible for tooling to do for protocols that only describe a builtin dunder, or intersections of them, in the future.)

I think this has been an interesting line of discussion and I need to type up some more thoughts into an ideas post, but it highlights how, while typing has mechanisms to support duck typing, they aren’t ergonomic and require more from users.

In your case, not actually wanting to support the string-formatting case but overall wanting duck typing, there isn’t currently an answer to this at all; you’d need Intersections and type negation to do it properly:

SupportsMod & ~str, where SupportsMod is a protocol for % support

This is a solution that composes naturally from what you described wanting, but isn’t expressible currently, nor is it necessarily obvious to those who haven’t been in the depths of typing already.
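The protocol half is expressible today; it’s only the “& ~str” part that isn’t. A rough sketch of what SupportsMod could look like, under the assumption that we only care about an int on the right-hand side (the exact signature is illustrative):

from typing import Protocol, TypeVar

R_co = TypeVar("R_co", covariant=True)

class SupportsMod(Protocol[R_co]):
    # structural type: "supports % with an int on the right-hand side"
    def __mod__(self, other: int, /) -> R_co: ...

R = TypeVar("R")

def low_bit(x: SupportsMod[R]) -> R:
    return x % 2

With just this, low_bit("%d") still type checks (str has a compatible __mod__, so the return type comes out as str); excluding it is precisely the part that would need the intersection with ~str.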

Quite frankly, nobody should put in that much work if there’s a chance that a different definition gets adopted while they’re working on it and undermines that amount of work. There are benefits to having good definitions outside of such an endeavor, and it is to everyone’s benefit if we build things up in order. That means the foundational definitions should be addressed first.

If you set up a plan for that, or break down the task, I imagine there might be people here who’d volunteer to take on parts of it - even though it’s not coding :slight_smile: Trying to come up with just a formal description, doing it within an established larger theoretical framework, and considering both internal consistency and how it might be used already seems like a pretty big undertaking to me.