Thanks for writing this PEP! 
I haven’t double-checked whether someone else has already pointed this out; apologies if I’m repeating something that’s already been discussed. I’m commenting on the PEP as written, having done a cursory read of the discussion here.
I want to push back on this statement, albeit because of nuances.
pyproject.toml as a whole wasn’t “yet another standard”. It came from a place of needing to determine build-time dependencies generically, without code execution; a problem for which we had no existing solution.
PEP 518 has an extensive rationale for why a novel approach and a new file were proposed for it. PEP 517 has a similarly long rationale for enabling an ecosystem of build backends, rather than forcing everyone to use setuptools, which had proven difficult to evolve.
(I’m likely biased, since packaging changes tend toward really long rationales for design choices, but I was surprised at how short the rationale section is in this PEP.)
The piece within this that I agree is “one more way to do things” is the [tool] table within pyproject.toml. It originated as “let’s keep build configuration and allow build tool-specific configuration too”, and it evolved into “let’s allow all tool-specific configuration”, since “what is a build tool” is an arbitrary and somewhat useless line of argument to have. I don’t think a similar line of argument applies here.
(And yes, with the power of hindsight, I do wish a better job had been done of setting community expectations around this new file and the tool table.)
To use the parallel drawn: even though no one outlawed tool-specific configuration files, or non-TOML configuration files, nearly every popular development tool has had its users request, argue for, or contribute support for pyproject.toml’s [tool]-based configuration. The expectation that having this standard would not mean library/tooling authors get pushed to adopt it is at odds with that experience. It’s not a 1:1 situation, obviously, but I think some of the learnings transfer.
PS: The above is all based on memory, so I might be wrong about some detail. It’s been 4-7 years, and I was still a curious teen back when the initial discussions were taking place. Please correct me if I got something wrong here!
I’ll echo this sentiment.
This was my first reaction while reading this PEP, as well as the initial discussion here. I think introducing one-more-way, especially as standard syntax/library code, ought to cover more thoroughly why improving existing solutions isn’t viable or, at least, offer stronger reasoning for that choice than what’s provided. For me, I’d set the bar at “we tried and it’s not feasible because (list of socio-technical reasons)”, but I’m also cognisant that not everyone might think it needs to be that “high”.
To draw (again!) on the referenced parallel of pyproject.toml: the rationales for both PEP 517 and PEP 518 extensively discuss why those options were chosen over the existing solutions. Alternative approaches were considered and tried over the course of years before we got to the point of discussing a new file for conveying that information.
At the cost of being reductive, the main selling points of this proposal (to me, anyway) seem to be programmatic access to the docstrings and the ability to reuse the arguments.
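To make those two points concrete, here’s a minimal sketch of the proposed usage (the function names are made up; I’m assuming the Doc marker from the PEP’s typing_extensions preview):

```python
from typing import Annotated, get_type_hints
from typing_extensions import Doc  # the PEP's proposed marker, previewed in typing_extensions

# Reuse: one documented alias, shared across signatures.
Username = Annotated[str, Doc("The username to operate on.")]

def create_user(name: Username) -> None: ...
def delete_user(name: Username) -> None: ...

# Programmatic access: the documentation travels with the annotation.
hints = get_type_hints(create_user, include_extras=True)
print(hints["name"].__metadata__[0].documentation)  # -> The username to operate on.
```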
However, to actually extract accurate type alias information, you need to either execute the module or pseudo-execute it (i.e. implement all the type system “magic” of handling certain ifs, try-excepts, etc.; or, at least, keep logical track of aliases while respecting namespacing). Compare that to using docstrings, where all the information can be fetched directly from an AST, without needing a full execution of the module or any handling of execution semantics; see the sketch below.
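For contrast, a minimal sketch of the docstring route (the example function is made up); nothing gets imported or run:

```python
import ast

source = '''
def create_user(name: str) -> None:
    """Create a user.

    Args:
        name: The username to operate on.
    """
'''

# No import, no execution: parse the source text and read the
# documentation straight off the AST.
module = ast.parse(source)
function = module.body[0]
print(ast.get_docstring(function))
```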
Both of these, of course, share a restriction: dynamic logic doesn’t work with either static-analysis approach. Arguably, the type-system-based model permits a tiny amount of dynamism, in exchange for significant complexity.
While Sphinx’s built-in autodoc executes code, and would manage to adapt to this with some tractable reworking, the AST approach is how sphinx-autoapi works. (IIUC, mkdocstrings does this too; don’t quote me on that!) As currently written, this PEP would require AST-based API documentation generation tools to take on the complexity of extracting documentation information from types that might be aliased, where those aliases might themselves be conditional, since the type system supports that. A sketch of what that looks like:
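(This is a made-up illustration, again assuming the typing_extensions preview of Doc:)

```python
import sys
from typing import Annotated
from typing_extensions import Doc

# The type system allows conditional aliases, so the Doc attached to a name
# can depend on a branch that only execution (or a model of it) resolves.
if sys.platform == "win32":
    PathStr = Annotated[str, Doc("A Windows-style path, e.g. C:\\Users\\me.")]
else:
    PathStr = Annotated[str, Doc("A POSIX-style path, e.g. /home/me.")]

def read_file(path: PathStr) -> bytes: ...
```

An AST-only tool sees two bindings for PathStr and has to model the sys.platform branch (or execute the module) to know which Doc applies to read_file’s parameter.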
Realistically, IMO this means that AST-based documentation generation tools won’t be supporting this PEP’s proposed model (not fully, anyway).
OTOH, removing support for it (type aliases and/or all the conditional magic that comes with the type system) would mean that the only thing this provides is an alternative syntax, one that could live outside the standard library and is strictly equivalent to standardising on a docstring format ecosystem-wide. It’d drastically weaken the argument for adopting this model as the standard model.
This could, of course, be reduced by adding restrictions on how the TypeAlias is assigned and managed, but needing those restrictions is a symptom that we’re not using the right abstraction model here, IMO.
While this isn’t a showstopper issue, it is certainly an argument against the proposed model IMO.
This actually leads me nicely into…
… The open issue about mixing documentation and typing. From the PEP:
> It could be argued that this proposal extends the type of information that type annotations carry, the same way as PEP 702 extends them to include deprecation information.
Yes, but it’s not a strong argument IMO. 
To use PEP 702 as the comparison: it discusses why the type system is a good vehicle to provide that information, type checkers actually use the information, their use of it results in useful behaviours, and the PEP includes references to prior art in other ecosystems showing that deprecation information leveraging the typing mechanisms isn’t a novel concept. A sketch of that mechanism:
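(The function names here are made up:)

```python
from typing_extensions import deprecated  # PEP 702; also warnings.deprecated on 3.13+

@deprecated("Use create_user() instead.")
def make_user(name: str) -> None: ...

make_user("alice")  # type checkers such as mypy and pyright flag this call site
```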
None of those are true for this PEP, IMO. To be clear: I’m not saying this PEP needs to do the same things or make the same arguments, but that it needs to provide a stronger rationale for its design choices (especially the socio-technical choice of not trying to settle/tackle the lack-of-standardisation problem).
Given that one of the motivating goals of this PEP is to provide richer errors/IDE experiences, with the motivation explicitly calling out the inability to syntax-check docstrings in IDEs, it’s interesting that it doesn’t bless a markup format. By not picking a markup format, we’d be punting the problem to a different layer: there’s no way to flag invalid markup in the strings you put inside Annotated. While I agree that “it’s unclear what to pick”, I think the decision to not pick a markup format is a consequential one.
IMO, it’d be worth calling this out in a more visible manner in the rationale (or rejected ideas, or wherever the authors deem appropriate). Even a position like “we don’t think it’s as important to have syntax checking for the markup in those strings” would be fine here (it’s a judgement call, and I think the PEP makes the right one), but it seems like an important-enough detail for implementors and users to call out more visibly.
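To make the consequence concrete, a hedged sketch (both strings are made-up examples):

```python
from typing import Annotated
from typing_extensions import Doc

# Both of these are plausible; without a blessed markup format, an IDE can't
# validate either string, and different renderers will disagree on the output.
name_md = Annotated[str, Doc("The *primary* username; see [the docs](https://example.com).")]
name_rst = Annotated[str, Doc("The *primary* username; see `the docs <https://example.com>`_.")]
```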