Two polls on how to revise PEP 649

Not necessarily. Whether there’s a dedicated C API or Cython has to set a function attribute is up to you, IMHO. I don’t think Cython would mind either way (though I’m not a Cython developer, so someone else would have to confirm).

1 Like

“Just” is a bad word, sorry for using that.
IMO, the strings should be stored by the compiler, and at runtime they should be retrieved, rather than constructed.
As to where exactly to get them – the current __future__ annotations mechanism seems like an obvious choice. Maybe it can be improved; I don’t know enough about that area.

As for additional disk space – it’s a concern, but I don’t think it’s a deal-breaker.

4 Likes

Okie-doke, Petr, I’ll put you down as +1 for “Store The Strings And Write a Lazy Loader”. Anybody else who wants to vote for that, post a reply and say so.

2 Likes

When I initially proposed the idea of using a custom globals dict to evaluate the PEP 649 thunk in an “unusual” way, “stringization” wasn’t the main idea; it was an afterthought. The main idea was to help a use case like that of dataclasses, where, ideally, code like this should work:

@dataclass
class Node:
    next: Node

Could dataclasses just always use the string form of annotations to make this work? Yes, but it has some downsides:

  1. The dataclass Field objects will always have strings instead of objects for the field type (which numerous users have already reported as a dataclasses “bug” when it occurs with PEP 563, so it will certainly distress some people.)
  2. Dataclasses’ detection of typing.ClassVar and typing.InitVar wrappers will be less reliable, as it will have to rely on string matching (easily fooled by name aliasing, e.g. from typing import ClassVar as CV) instead of object identity.

So I think that the ideal solution for dataclasses is neither normal evaluation of PEP 649 (which would raise NameError on the Node example), nor stringized annotations. The ideal solution is a hybrid one: evaluate the PEP 649 thunk with a globals dict whose __missing__ returns some type of ForwardReference object, but which otherwise returns the real object normally. (The placeholder could just be a string, but a ForwardReference object could carry along more metadata, like the module globals dict itself, making it easier to later reify the ForwardReference into the real object if needed.)
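
To make that concrete, here is a minimal sketch of the idea. It leans on CPython honoring __missing__ on dict subclasses used as namespaces during evaluation, and the names HybridGlobals and SimpleForwardRef are hypothetical, not anything defined by PEP 649:

class SimpleForwardRef:
    # Hypothetical placeholder type; PEP 649 itself doesn't define this.
    def __init__(self, name, owner_globals):
        self.name = name
        self.owner_globals = owner_globals  # context for reifying later

    def __repr__(self):
        return f"SimpleForwardRef({self.name!r})"

class HybridGlobals(dict):
    def __missing__(self, key):
        # Undefined name: hand back a placeholder instead of raising NameError.
        return SimpleForwardRef(key, self)

ns = HybridGlobals(int=int)
print(eval("int", ns))   # <class 'int'> (defined, so the real object)
print(eval("Node", ns))  # SimpleForwardRef('Node') (undefined, so a placeholder)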

I also don’t think dataclasses is peculiar here; any library that uses annotations at runtime (e.g. something like pydantic, or other similar libraries I’ve seen in closed-source code) could have this same use case, if it wants to support self-referential annotations. So IMO this “hybrid” behavior is at least as important/valuable to support as the “stringized” use case; arguably more important. (Full stringization is useful I guess for documentation tools? But those tools could also just go back to the original source code instead.)

So I don’t have strong feelings about how (or even whether) we provide fully stringized annotations, but I do think the hybrid mode should be supported. Whether it’s supported as a built-in option for typing.get_type_hints or inspect.get_annotations isn’t critical; I think it probably should be, but if it isn’t, then IMO at least the existence and signature of __co_annotations__ should be documented and supported, so that the hybrid mode can be provided by other libraries without using undocumented internals.

3 Likes

Yes, this is already part of the proposal. Please see the last bullet point in the “Final random notes” section of my original post, in which I describe what I call “mixed” mode. “Hybrid” mode is probably a better name.

2 Likes

Oof! My apologies, I don’t know how I missed that last bullet point 🙁 What you describe in the bullet point sounds great!

In that case I’m quite happy with the proposal, regardless of which approach is chosen for strings 🙂

Carl

I voted for hard coding the Stringizer assumptions about iteration, but there will be a few other points for the PEP to consider in going down that path:

  • caching a whitespace-normalised-and-comments-stripped version of the string on disk can go in a “possible future optimisations” bucket rather than being part of the initial proposal
  • will calling bool() on Stringizer instances raise a runtime exception, or return True (like any non-empty string, and like most type expressions)?
  • will control flow expressions that Stringizer instances don’t override generate SyntaxWarning messages from the compiler (or even be deprecated outright)?

(Personally I lean towards “no runtime error, compile-time SyntaxWarning” for the last two points, but I think the default state of “no runtime error, no compile-time warnings” would also be fine.)

1 Like

I don’t think Stringizer instances are special enough to “break the rules”, as the Zen says. So, yeah, bool on a Stringizer should behave like bool on the string.
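
As a tiny sketch of that behavior (Stringizer here stands in for the prototype class under discussion, not any public API):

class Stringizer:
    def __init__(self, source: str):
        self.source = source  # the reconstructed expression text

    def __bool__(self):
        # Delegate truthiness to the string: non-empty means True.
        return bool(self.source)

assert bool(Stringizer("undefined_module.symbol")) is True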

In my most recent Stringizer prototype, I made it a subclass of ForwardRef. I think that’s a good way to go. In fact, that’ll probably influence the API. Do you want “real values for everything”, “ForwardRefs for everything”, or “real values when they’re available, and ForwardRefs when they’re not”?

Let me confirm we’re having the same conversation. When you say Stringizer, you mean the class with all the dunder methods, right? In that case, I’d say it’s not that it doesn’t override them so much as that it can’t override them. It’d have no insight into these control flow expressions. They’d be compiled by the compiler and executed by the Python runtime, and the Stringizer would have no ability to intercede. How could it work otherwise?

And yes, your instinct is correct here. If there was an annotation

def foo(a: str if undefined_module.symbol else int):
    ...

then when using the Stringizer this would evaluate to Stringizer('str') in pure-Stringizer mode and str in mixed mode: the undefined condition becomes a non-empty (and therefore truthy) Stringizer, so the conditional expression takes its first branch.

If you’re asking “what would the custom bytecode interpreter do about these control flow expressions”, naturally, it would either run them correctly or throw an exception.

1 Like

My gut feeling is that using a string as the annotation object is not desirable. The proper and clean thing to do is to return a nested data structure, like what PEP 649 does. Using a string was only done as a way to solve performance issues and to handle forward references. PEP 649 mostly addresses the performance. If we can solve the forward reference issue, perhaps we can phase out the “Stringizer” and the code that consumes stringified annotations.

A possible idea for forward refs, something like:

Node = ForwardTypeRef()

@dataclass
class Node:
    next: Node

The ForwardTypeRef could look up Node from globals when the annotation is used, so you have the defined Node. For more complex cases, ForwardTypeRef could have some kind of hook that, when called, would return the type object to use. I think that would be powerful enough to handle circular refs.
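
A rough sketch of how that could work, assuming the name and namespace have to be passed in explicitly (ForwardTypeRef as spelled here is hypothetical):

class ForwardTypeRef:
    def __init__(self, name, namespace):
        self.name = name
        self.namespace = namespace  # e.g. the defining module's globals()

    def resolve(self):
        # Deferred lookup: by the time the annotation is consumed,
        # the real class has replaced the placeholder in the namespace.
        return self.namespace[self.name]

# Usage sketch:
#     Node = ForwardTypeRef('Node', globals())
#     @dataclass
#     class Node:
#         next: Node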

3 Likes

The use cases for stringized annotations have grown since the original PEP. One use case I need to support is automated documentation generation, where they literally want strings, and they literally want the original expression (or as close as makes no difference). Even when all the names are defined and we’re able to evaluate the annotation, the repr of the value may be far less readable than the stringized original expression. As an example, 0.75*math.pi is far more readable than 2.356194490192345. (You presumably wouldn’t use this value as an annotation, but you understand the principle involved.) There are some type hints for which the repr expands into a shaggy long thing peppered with [T]s, and where the original expression was much more concise and readable. If we can support this use case, we should, and I think we can.

Note also the proposed “mixed” mode, which isn’t a nested data structure so much as it is a computed value with placeholders for undefined values. So for example in “mixed” mode the annotation dataclasses.InitVar[UndefinedType] would resolve to a real InitVar object wrapping a Stringizer object whose name was UndefinedType. (This happy example happens to be a form of nested data structure, too.) This solves problems for some other consumers, e.g. dataclass can answer the question “is this an InitVar?” with an isinstance check, rather than parsing the stringized annotation. I get the impression this is what you had in mind when you talked about strings being an undesirable format. So maybe “mixed” mode works for you here?

One nice feature of my proposed approach is that the user asking for the annotations dict can specify the format they want. dataclass can ask for “mixed” mode, automated documentation tools can ask for “strings only please”, and runtime consumers of annotations can ask for “real values only please”. So far I’m not aware of any use case that isn’t adequately covered by one of these three options.
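
Sketched as an API, that might look something like the following; the enum and parameter names are hypothetical, just to illustrate the shape:

import enum

class AnnotationFormat(enum.Enum):
    VALUES = 1   # real values only; undefined names raise NameError
    HYBRID = 2   # real values where possible, placeholders where not
    STRINGS = 3  # the original expression text

# Hypothetical call; not the current inspect API:
#     hints = inspect.get_annotations(Node, format=AnnotationFormat.HYBRID)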

I don’t understand the mechanics of how this would work in Python, as you’ve sketched it here. I assume you’d have to at least pass in 'Node' to the ForwardTypeRef declaration so it knew its own name. (Or maybe you are thinking ForwardTypeRef would be in the language, and under the covers it would pass in the name?)

This would still fail in some use cases. For example, the ForwardTypeRef object we labeled Node and the class Node would be different objects. So e.g.

    inspect.get_annotations(Node)['next'] is Node

would return False, and you’d presumably want that to be True. Similarly, they also wouldn’t hash the same; even if they had the same hash value, dict and set use is (or, really, == in C) to test if two objects are equivalent.

I made a similar proposal for a forward declaration last year. In my proposal the forward declaration was a real type object. It started out life uninitialized, only knowing its name and base class(es), but not having called any[1] dunder methods yet. You could then later “re-declare” the type using a standard class statement, at which point all the usual initialization would happen. I assert this proposal is technologically viable, and would solve a lot of problems with circular references and undefined symbols a la if TYPE_CHECKING. But, as Eric V. Smith puts it, the proposal “didn’t go over well”. So I assume any proposal involving explicit declaration of forward references isn’t politically viable.

[1] I don’t recall the specifics, there may be some class object initialization function that had to be called at that time. Given the fate of the proposal I don’t think the details are important.

2 Likes

(snipped the first part about bool() where Larry and I agreed on how it should work)

It wasn’t the annotation evaluation part I had questions about: it was the compilation part. What should the main compiler do about annotations with control flow constructs where forward references won’t work? There are a few options:

  • Do nothing (default option). If it works it works, if it fails it only fails (or produces a surprising result) when evaluated at runtime. Static type checkers may still complain about it being an unsupported annotation.
  • Emit a SyntaxWarning for such constructs if they involve an unbound reference
  • Emit a SyntaxWarning for such constructs in all cases
  • Actively deprecate such constructs

After laying out the options like that, I changed my preferred option. I now think the “do nothing” default is the clear winner, since type checkers will already complain about annotations that they don’t understand, and that includes the cases where the lazy annotation machinery can’t override control flow expressions.

1 Like

Oh! Your question is really “should Python make these bothersome control-flow constructs illegal in annotation expressions?”. Answering that, I agree, it shouldn’t. They’re only a problem for the full-stringized and hybrid-stringized modes, and if we change the compiler to store the original strings in the .pyc, it wouldn’t even be a problem for full-stringized anymore. Plus, if we’d previously made flow control constructs illegal in annotations, we might have made tuple unpacking illegal, and then we wouldn’t have gotten the variadic generics PEP. I say, let a dozen flowers bloom.

2 Likes

I don’t understand the mechanics of how this would work in Python, as you’ve sketched it here. I assume you’d have to at least pass in 'Node' to the ForwardTypeRef declaration so it knew its own name. (Or maybe you are thinking ForwardTypeRef would be in the language, and under the covers it would pass in the name?)

Yes, it would either have to be “magical” somehow or have different syntax so that it knows the global name is Node. I think it is justified to add new syntax to the language so we can solve this in a clean way. Using strings as we do now is expedient and fairly effective, but quite ugly, IMHO. With strings, tools like type checkers have to do some guesswork to figure out what is actually meant, e.g. how to resolve the name. It should be well defined how to resolve the type objects, and we should not pass that responsibility along to the tools using the annotations, even though that’s the easier thing to do.

This would still fail in some use cases. For example, the ForwardTypeRef object we labeled Node and the class Node would be different objects. So e.g.

    inspect.get_annotations(Node)['next'] is Node

would return False, and you’d presumably want that to be True.

My half-baked thinking was that get_annotations() would resolve the forward references so that the is Node check would return True.

For the documentation use case, e.g. 0.75*math.pi, could we do something like what is done for tracebacks? I.e., have the annotation object store information about its source lines and offsets? It would be kind of like a lnotab for annotations. Maybe that makes the parser and compiler too complex. We already have that information in the AST, though, so it should just be a matter of transferring it to the annotation object that gets created.
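
The position information is indeed already there; for example, the stdlib ast module can recover the original text of an annotation from its recorded source span:

import ast

src = "def f(a: 0.75 * math.pi): ...\n"
tree = ast.parse(src)
annotation = tree.body[0].args.args[0].annotation
# The node carries lineno/col_offset, so the raw text can be sliced out:
print(ast.get_source_segment(src, annotation))  # 0.75 * math.pi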

1 Like

@nas, to use your words, your proposal is still “half-baked”. To strike at the root of the matter: what problem are you solving? What’s the use case for your proposed representation, that isn’t covered by the three representations being discussed here (real values, strings, and “hybrid”)? So far you haven’t cited any, merely a “gut feeling” that the representation isn’t “desirable”.

I’m happy to talk about it further. But I think you need to start from first principles. Not a vague dissatisfaction with the solution at hand, but a concrete problem that the proposed solutions don’t adequately solve.


For what it’s worth, the whole “what about a nested data structure” idea isn’t a new one. Mark Shannon has been needling me with that on and off for eighteen months. So, back when I was prototyping all this, I went down that road. The first step: what should the data structure look like? Then I realized, we already had a nested data structure in Python that would work great: the AST. So I made a prototype that produced lovely ASTs from __co_annotations__ functions. The ASTs could then be compiled, and when you executed the compiled code it produced identical results. (I didn’t bother annotating them with lnotab information, but it seemed doable, if a PITA.)

However! In my test suite for the prototype, I would compare the output to what I was expecting, which meant writing lots of ASTs by hand and comparing them. I quickly found that writing and examining ASTs was cumbersome; they get deeply nested pretty quickly. Now, it’s possible that the AST is particularly bad in this regard, and a bespoke data structure could be nicer. But… not that much nicer. Python is a rich, expressive language, so naturally any structure that can represent Python expressions will have to be rich and expressive too, which means it’ll be complex.
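
For a sense of how quickly the tree form gets verbose, even a modest type hint dumps to something like this (output trimmed; real ast.dump output also includes ctx=Load() fields):

import ast

print(ast.dump(ast.parse("dict[str, list[int]]", mode="eval").body))
# Subscript(value=Name(id='dict'),
#           slice=Tuple(elts=[Name(id='str'),
#                             Subscript(value=Name(id='list'),
#                                       slice=Name(id='int'))]))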

So I returned to first principles. What problem am I trying to solve? It seemed like there were three use cases for examining annotations at runtime:

  • People who want to use the real values for something. That’s the runtime introspection folks, like Pydantic and FastAPI. Those folks need the actual values, and for them everything needs to be defined in advance, which means lazy evaluation is only marginally helpful for them anyway.
  • People who want a simple identity check, that works even when some symbols are undefined. That’s folks doing super simple “is this the correct type” checks at runtime. Those folks are well served by a consistent string representation.
  • People who want to do more sophisticated analysis, to answer some arbitrary question(s) about the annotation. But this has to work even when some symbols are undefined. The classic example here is dataclass, which wants to know whether a particular annotation represents an InitVar or ClassVar type hint. With real values, it could do an isinstance check; with stringized values it had to parse the string.

It’s this third use case that leads people to propose “some sort of nested data structure”. So I thought about that: what would be the nicest nested data structure for people in this third use case? I finally realized, the nicest possible thing would be–drumroll–real Python values. A real InitVar instance is much nicer than some clumsy abstract object that represents InitVar. And looking up a real attribute is nicer than a node that represents an attribute lookup. So if I could use real Python values when possible, and placeholders when real values weren’t available, that seemed best. And that’s how I arrived at “hybrid” mode.

This annotation in the source code:

dataclasses.InitVar(undefinedmodule.MyType)

would be represented in “hybrid” mode by this:

InitVar(type='undefinedmodule.MyType')

which is so much nicer to deal with than something like this:

Call(func=Attribute(value=Name(id='dataclasses'), attr='InitVar'), args=[Attribute(value=Name(id='undefinedmodule'), attr='MyType')])

One final observation for you. It turns out that a function object really is an excellent representation for Python code. That sounds tautological, but what I mean is, I found in my experiments that I could reverse the Python compilation process to any previous point. I could de-compile a (simple) function object back into its AST, or into its source code. Or I could build these crazy “hybrid” values. I felt like I could produce any output I wanted.

So, even if we later decide that the three representations for annotations currently being proposed (“real values”, “strings” and “hybrid”) are insufficient, I’m confident that we can start with the __co_annotations__ function and produce any representation we need at runtime.

2 Likes

Hmm, if the primary use for string-form annotations is documentation and the like, wouldn’t it be straightforward to simply store the original strings, and then have a way to throw them away with an optimization flag (perhaps -OO, like for docstrings)?

That would be a) easy, and b) most useful, as the original text is preserved – a mismatch between the code and docs would be weird / confusing.

So you’re only interested in supporting the primary use case? Stringized annotations are a bad solution for some use cases, e.g. dataclass. My best idea on how to support those is what I called “mixed” mode (later “hybrid” mode, which is probably a better name). Since “mixed” mode depends on a Stringizer or custom bytecode engine, and we can also use that to produce the stringized annotations, it seems more straightforward to use that for both forms of output, rather than storing the strings for stringized annotations and also using a Stringizer or bytecode engine for “mixed” mode.

Regardless, I’ll put you down as +1 for “Store The Strings And Write a Lazy Loader”.

As for your specific question, I recommend reading the messages earlier in this thread where Petr and I discuss storing the strings, and the relative straightforwardness of this approach.

There’s already a hypothetical mismatch. PEP 563 stringized annotations are produced by reconstructing the source code from the AST, which can theoretically diverge from the original text; comments and whitespace aren’t preserved. But I don’t recall anyone complaining about this, presumably because the reconstructed text was fine. Folks don’t tend to put comments or funny whitespace in their type hints.
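
For example, the reconstruction is the same flavor of thing ast.unparse does, and comments and extra whitespace never make it into the AST in the first place:

import ast

tree = ast.parse("x:   int    # a comment")
annotation = tree.body[0].annotation
# The whitespace and the comment are gone; only the expression survives.
print(ast.unparse(annotation))  # int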

Reconstructing from bytecode could theoretically diverge even further, as the bytecode could involve additional transformations, e.g. optimizations like strength reduction or constant folding. But this seems unlikely. Folks just don’t write type hints that would be affected by these transformations.
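
Constant folding is easy to see in practice; any string rebuilt from the bytecode would show the folded value rather than the original expression:

import dis

# The compiler folds 2**16 to the constant 65536 before any bytecode exists,
# so the disassembly loads 65536 and the expression 2**16 is gone.
dis.dis(compile("x: 2**16", "<demo>", "exec"))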

1 Like

I do think at minimum there is some additional value for the “hybrid” representation if it isn’t just a string, but rather “a string and the (real) globals dict I failed to resolve it in.” This would make it more reliable for someone introspecting annotations as real objects to (at some later point, when maybe forward-referenced things are now present) try again to resolve the placeholder to the real object. Doing this with only a string is unreliable (as we’ve already seen with PEP 563), given that even some parts of the stdlib (TypedDict, for example) have a habit of sometimes copying annotation values from an object in one module to another module, losing their implicit module globals context.

Since all of a given annotation always comes from a single module, I think this use case is adequately covered if a Stringizer also holds a reference to that single module globals dict; then reification (even for more complex stringized annotations) should just be a matter of evaling the string in those globals.
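
In code, the idea is roughly this (the class name is hypothetical):

class ModuleAwareForwardRef:
    def __init__(self, source, module_globals):
        self.source = source                  # e.g. "list[Node]"
        self.module_globals = module_globals  # the defining module's globals()

    def reify(self):
        # Evaluate in the original module's namespace, not the consumer's,
        # so copying the annotation to another module doesn't break it.
        return eval(self.source, self.module_globals)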

Of course one can imagine a deeply structured representation that one wouldn’t have to eval, but I also have a hard time seeing the concrete benefit there, given that then you’d just have to invent a custom mini-interpreter to resolve the structured representation back to the referenced object instead.

1 Like

I’m not sure this is a real use case. The folks who want to examine annotations as real objects don’t seem to have the forward reference problems. After all, they’re using the annotations at runtime, rather than just as static type hints. This suggests that they need the code behind those annotations. So I’m pretty sure they don’t use undefined annotations. (I dimly recall that the runtime type guys suggest their users leave stringized annotations off, though a quick google doesn’t find any evidence of that.)

On the other hand, if this is a real use case, at the point the forward-referenced things are now present, they could simply re-request the annotations, which would now be correctly rendered. I think that’s better than inventing some complicated new format that permits the user to do the evaluation themselves.

Because this would necessitate some abstract nested data structure, e.g. the AST. Take my example of dataclasses.InitVar(undefinedmodule.MyType). Let’s say “hybrid” mode produced a real InitVar instance that contained a reference to a ForwardRef, as I described above. But now you’ve imported undefinedmodule, and you want to update those ForwardRefs. At that point someone–presumably the user–would have to write bespoke code that pulled out the ForwardRef from the type attribute, calculated the real value, and updated the type attribute. Maybe you could write code that recursively brute-force examined all attributes (and sequences? and mappings?) and replaced ForwardRefs with real instances–but I wouldn’t want to add that to the Python library.

Also, simply re-evaluating the annotations from scratch seems safer. What if InitVar.__init__ reacts to the type passed in, changing something about how it configures the object? You’d either have to know that about the object and fix it by hand too, or suffer a mis-configured object.

1 Like

Recursive models in a library like pydantic are a good example of this use case; I have come across it a fair amount. The usual way to handle it is to avoid evaluating annotations until later. In pydantic’s case there is no need to evaluate annotations when the class is defined; you can evaluate them later, when you use the model for parsing/validation. Similarly, I have a JSON type-reflection library. For handling recursive types, the approach is: don’t evaluate annotations until the first time serialize/deserialize is called, after all the relevant classes are defined. At the moment this logic relies on saving the globals/frame at the time the class is defined as a model; when serialization/deserialization is eventually used, it calls get_type_hints to evaluate the annotations with the saved globals.
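
Condensed, that deferral pattern looks something like this (just a sketch; the real pydantic machinery is more involved):

import typing

_resolved_hints = {}

def hints_for(cls):
    # Defer resolution until first use: by then, all the classes that the
    # (string) annotations forward-reference should already be defined.
    if cls not in _resolved_hints:
        _resolved_hints[cls] = typing.get_type_hints(cls)
    return _resolved_hints[cls]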

There’s a section on this use case in the pydantic documentation.

Maybe the earlier paragraph about simply re-requesting the annotations covers this approach; I was confused following that.

This seems like a good idea to me. People look at the source code and think “I understand all of the annotations”, but that’s because in their heads they’ve already “executed” all of the things that were forward-referenced, often including the class itself for nested data structures.

At first blush this addresses the problem where dataclasses reifies the annotations (because that’s good enough for what it needs), but someone else (pydantic, maybe?) needs to see the annotations after a module/class was more completely defined.

Would the re-evaluation be automatic, or need to be explicitly requested? I think automatic would of course be nicer, but I’m not sure how you could cache the results then, in order to avoid re-evaluating every time annotations were examined. Or maybe annotations aren’t examined often enough to care, and they get re-evaluated every time.

1 Like