Add an else clause to with statements to detect early termination

I think that’s also really the core of it for me. If I saw that some external lib was doing this (exposing a public API with context managers that suppress exceptions), that would be a huge red flag.

Anyway - I’ll have a look at your earlier links…


You seem to be changing the scenario here. I thought you were talking about a case (the code you quoted) where there was suppression that you claim should have happened, but the problem was that you didn’t know it had happened. But now you’re saying suppression shouldn’t have happened.

In your example code, what exception is getting suppressed to cause the NameError that you’re seeing? Or is this all just a theoretical issue because pyright told you that you might see a NameError? If so, then I suggest that rather than immediately leaping to asking for a change to Python, you might be better off identifying what the actual problem is here that pyright thinks might occur - i.e., what exception is getting raised and suppressed, and what conditions cause this to happen. With that information, you can work out better what options exist for solving the issue.

And if you can’t work this stuff out, either because the library isn’t documented in enough detail, or because you don’t have the time, or for any other reason, why not assume that this is a false positive from pyright and that in fact nothing will go wrong in normal use cases? After all, your tests are working. I assume you do have tests, and aren’t just relying on “if it type checks, it’s right” - Python isn’t Rust or Haskell, after all :slightly_smiling_face:

I’m not saying that there might not be a real problem here to solve, just that it’s not clear what that problem is, and “the type checker told me there might be an issue” isn’t really enough to go on.


Ok, so I partially agree with you. Just by inspecting the code it’s very unclear if out_avals and built will always be bound variables at runtime. It seems that extend_axis_env will not suppress exceptions, so they should always be bound, but it’s difficult to tell for sure, without really digging deep…

But… the question whether they are bound or not also does not seem very relevant. If I disregard other possible exceptions, then:

  • If core.extend_axis_env suppresses exceptions, then out_avals will be bound.
  • If it doesn’t, and raises an exception, then you will get an exception at runtime, before the code that uses out_avals is reached.
  • If it doesn’t suppress them and there are no exceptions, then of course there is also no problem.

So, in this case, it also might not make much sense to add any assertions immediately after the with ExitStack().
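The second and third cases can be seen concretely with a toy non-suppressing context manager (a sketch with made-up names, not the jax code in question):

```python
class NonSuppressing:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False  # falsy: any exception propagates out of the with block


def f(fail):
    if fail:
        raise ValueError("boom")
    return 42


def demo(fail):
    with NonSuppressing():
        out = f(fail)
    # Reached only when f() returned normally, so out is always bound here.
    return out
```

Either `f()` succeeds and `out` is bound, or the exception escapes the `with` block before `out` is ever used.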


Because normal Python code doesn’t assert on everything, or try to statically guarantee that no bugs can occur. As I said, Python isn’t Rust or Haskell, and testing is our primary tool for confirming code correctness. Type checkers and linters are useful to support that basic approach, but they are not (and never can be) 100% foolproof in a language as dynamic as Python.


A comment: if we take the type checker warning religiously seriously, then I think the result is that we are never allowed to define variables in a with... block or in a try... block unless the variable is also defined in the except... block, because in these cases it’s always possible that an exception exits the block early. I’ve personally refactored code to avoid defining variables inside try... except... blocks, and it usually results in tighter-scoped control flow, which I would say is a code improvement. I’m not sure if that makes sense in the specific case here, or in general.

Is my understanding pretty much correct here?


Nor can they in a statically typed language, for anything nontrivial. I write a lot of code where the types are clear but I am much more concerned about the values. Confirming that a function returns an int is nice, but out of all of the infinitely many integers I usually have a strong preference about which particular int is returned.


I’ve been arguing for this as an assertion to guarantee that the suppression that shouldn’t have happened, didn’t happen.

As I already said, the point is that it’s very hard to determine which exceptions may be raised and which may be suppressed. We just want to assert that there’s no suppression without spending hours investigating what the code actually does. Moreover, code can change and our investigation conclusions may be reversed. Having assertions is therefore better because:

  • it continues to guarantee assumptions even when code changes, and
it takes almost no time to add, compared with investigating which exceptions may be raised and suppressed.
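One way to write that assertion is with a sentinel (a sketch; `run`, `make_cm`, and `compute` are placeholder names, not from any library):

```python
_UNSET = object()  # sentinel: detects whether the with body ran to completion


def run(make_cm, compute):
    result = _UNSET
    with make_cm():
        result = compute()
    # If the context manager suppressed an exception from compute(),
    # result is still the sentinel: fail loudly here instead of with a
    # cryptic NameError somewhere downstream.
    assert result is not _UNSET, "an exception was silently suppressed"
    return result
```

The assertion also satisfies the type checker, since `result` is assigned on every path that reaches the `return`.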

Because I think it’s better to check assumptions with assertions than to assume that errors won’t happen.

I think this is hyperbolic. Static checks improve code correctness. I realize that you don’t use type checking by your own admission in many threads, but arguing that “Python isn’t Haskell” is not a good reason to turn your nose up at static checkers.

The problem is that we want to check assumptions about code statically (to benefit our type checkers) and dynamically (to discover errors as soon as possible, and with clear messages). Without this assertion, the type checker errors have to be suppressed or disabled, and the runtime behavior will lead to cryptic name errors rather than obvious runtime errors.

It’s a bit broader than that. If the except block raises unconditionally, that would be okay too.

Yes, totally fair point. On the other hand, Python static type checking is verifying more and more, and that is putting pressure on code to guarantee more preconditions. I understand wanting to push back against this, but there’s pressure on both sides, which is I think the heart of our disagreement.

Perhaps we can leave it here, then? I haven’t been able to convince you, and that’s fine. I imagine this will come up again over the next couple of years as more people get these errors and reach the same conclusions I did.


That’s a bit too much of a shorthand IMO. Standard rule of programming: Bugs happen. Our job as programmers is to discover those bugs before they become a problem, and editor integrations, static checks, etc, etc, etc, are all tools that can help us to find and fix bugs.

Static checks are just a tool. They don’t inherently improve code correctness. The only way they help is if they show you something that you wouldn’t otherwise have seen until later. And that benefit has to be counterbalanced by whatever costs you have from warping your code around silencing the false warnings, thus weakening the value of other tools like code review.

That’s why static checkers ALWAYS need a way of quickly and easily saying “shut up, checker, I know what I’m doing”. And in this particular instance, I agree with those who have already said that this would be the best solution here. The static checker is simply not helping with this exact case, and it’s best to silence it and move on.


I agree with this description of tradeoffs, but I don’t agree that this is a “false warning”. You don’t know that there are no suppressed exceptions. Therefore, it would be better in my opinion to assert that. Doing that assertion will both shut the type checker up as well as verify your assumption.

Just because you’re used to making an assumption and you think that assumption is almost always reasonable (I agree that it is), it doesn’t mean that code isn’t improved by checking that assumption.

But I can understand the feeling that such a check “pollutes” code if that check is extremely unlikely to catch anything—and I admit that it is.

So this is a tradeoff: more checks or fewer checks.


I’m hesitant to add to such a heated thread, but I feel nerd-sniped. I’m probably just going to summarize the problem for posterity and for myself.

The core behavior here that is causing some problems is that with gives __exit__ control over whether the with statement as a whole “succeeds” even though an exception was raised in its block. In particular, if __exit__ is handed an exception, normally the with statement re-raises the exception after __exit__ returns. However, if __exit__ returns a truthy value, the exception is not re-raised. (See The with statement in the language reference, point 7, second paragraph.) Thus, we have this problem:

with cm():
    x = f()
print(x)

This will raise NameError at the print(x) call if f() raises an exception and cm’s __exit__ returns True: the exception is suppressed, execution continues past the with block, but x was never assigned.
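A minimal runnable version of this failure mode (`Suppressor` and `f` here are made-up stand-ins for `cm` and `f` above):

```python
class Suppressor:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return True  # truthy return value: the exception is swallowed


def f():
    raise ValueError("boom")


def demo():
    with Suppressor():
        x = f()  # raises; x is never assigned, and the error is swallowed
    return x  # fails at runtime: x is unbound


try:
    demo()
except NameError as e:  # UnboundLocalError is a subclass of NameError
    print("NameError:", e)
```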

Type checkers that are interested in reporting whether print(x) can be reached while x is undefined will pay attention to the type of the __exit__ method of the context manager: If it is declared using -> None they assume x is always defined, since if f() raises, the exception will be re-raised after __exit__ returns. However, if __exit__ is declared using -> bool (or -> bool|None), they see this as a signal that __exit__ may return True and suppress the exception when the with statement is complete, so print(x) may potentially try to print an undefined variable.

This is why ExitStack is being picked on: its __exit__ is declared as -> bool (in typeshed) because, indeed, under certain circumstances it may pass along the True returned by one of the context managers that were added to its stack. (The code is complicated so I didn’t try to understand what happens if there are multiple context managers on its stack that may or may not return True; but this doesn’t matter for the problem caused by ExitStack.)
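This pass-along behavior can be demonstrated directly:

```python
from contextlib import ExitStack, suppress

with ExitStack() as stack:
    stack.enter_context(suppress(ValueError))
    raise ValueError("boom")
# suppress() swallowed the ValueError, ExitStack returned that True from
# its own __exit__, and the outer with statement suppressed the exception.
print("reached")
```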

The unfortunate thing is that some users may want to use ExitStack but only pass it context managers whose __exit__ always returns None/False. In this case the user knows that this particular ExitStack instance will never return True from its __exit__, thus never suppressing exceptions, thus never causing print(x) to be reached without x being defined. But the type checker doesn’t know all this about ExitStack – it only knows that its __exit__ is declared using -> bool so it will warn that print(x) may find x undefined.

One possible solution (which I haven’t seen yet, but maybe I missed it) would be to have a variant of ExitStack, e.g. NonSuppressingExitStack, whose __exit__ never returns True. Its __exit__ is declared using -> None, and if you pass it a context manager whose __exit__ is declared using -> bool that’s a type error. Moreover, if at runtime it encounters a context manager whose __exit__ returns True, it does something to tell the user about the problem – maybe it logs a warning, or maybe it raises a different exception (I haven’t thought about this enough to have a solid opinion).
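A rough runnable sketch of such a variant (hypothetical, not a stdlib class; here the design choice is to raise RuntimeError on attempted suppression, and note that a type checker may object to narrowing the inherited -> bool return to -> None):

```python
from contextlib import ExitStack


class NonSuppressingExitStack(ExitStack):
    """Hypothetical ExitStack variant that never suppresses exceptions."""

    def __exit__(self, exc_type, exc, tb) -> None:
        suppressed = super().__exit__(exc_type, exc, tb)
        if suppressed and exc_type is not None:
            # An inner context manager tried to suppress the exception;
            # surface that loudly instead of silently continuing.
            raise RuntimeError(
                "a context manager on the stack suppressed an exception"
            ) from exc
        # Returning None (falsy): any active exception propagates normally.
```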

Independently, maybe we could have an else clause on with statements. But what would it mean? IIUC @NeilGirdhar’s proposal is to have it execute only in the situation where __exit__ is passed an exception and returns True in order to suppress it. IMO this would be pretty confusing, since the other compound statement that is most similar to with, i.e. try, already has an else clause that pretty much means the opposite (it is executed only if there are no exceptions). So I am not in favor of that. (And that means maybe I’m barking up the wrong thread?)


What’s really needed here is for ExitStack to somehow say “my return type is the union of all my underlying return types”. But I have absolutely zero idea of how that could be implemented :smiley:


Just for the curious, I think that would probably require this:

…that’s about where my type theory knowledge ends, but I do find it interesting to learn about. :slightly_smiling_face:

Yeah, with every call to enter_context() updating the effective type of the context manager. But I suspect that this is getting beyond the realm of “type checking” and edging towards more complex static analysis systems (see eg Coverity, which traces different code paths in detail and can determine if there’s one specific path that can lead to problems). For example, should this trip a type error?

with ExitStack() as ctx:
    if random.randrange(2):
        x = 1
print(x)  # is x possibly unbound here?

Okay, this is a faintly ridiculous example, but it highlights the inherent problem. Personally, I would be fine with the “type” of this being “yes, this is capable of suppressing exceptions” - anything else is asking too much.


I think the problem with that is that the enter_context() calls are dynamic. If the underlying context managers were present in the ExitStack constructor, you could probably do this with typevars. But here we’re calling cm.enter_context(subcm), and we’d want those calls to affect the type of cm. That makes me think of how (in mypy, at least) we have a special case for

xs = []
for ...:
    x = ...
    xs.append(x)

where the inferred type for x will affect the type inferred for xs – this is hard, and except for the very simplest cases we require you to provide the answer by starting with

xs: list[SomeType] = []

so that the type of x is checked against SomeType instead of changing it.

If the answer was dependent types, since we don’t have those, we’d end up needing my proposed solution anyway.

Also, the enter_context() calls might occur inside some other thing that is called, e.g.

with ExitStack() as cm:
    foo(cm)

where foo() is something like this:

def foo(cm: ExitStack):
    cm.enter_context(...)

Well, maybe this could be solved by making ExitStack generic:

class ExitStack[T: bool|None]:
    def __exit__(self, ...) -> T: ...

If this could work we’d be able to say

type NonSuppressingExitStack = ExitStack[None]

Does anyone have examples of a context manager suppressing exceptions? When would that be a good idea?


I think it’s one of the more questionable ideas from PEP 343. Maybe the PEP has a good use case though?

See e.g. pytest’s raises (or the stdlib’s contextlib.suppress):
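contextlib.suppress is the canonical example; it is exactly the try/except-pass idiom in context-manager form (the filename here is just an illustration):

```python
from contextlib import suppress
import os

# Idiomatic use: ignore a file that may already be gone.
with suppress(FileNotFoundError):
    os.remove("definitely-missing-file.tmp")

# ...which is shorthand for:
try:
    os.remove("definitely-missing-file.tmp")
except FileNotFoundError:
    pass
```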


Lots of popular third party libraries seem to be using contextlib.suppress which is doing exactly this, right?

Looks like it’s also very handy for debugging, too.

The difference there is that the programmer is declaring they don’t care about any exceptions – sort of like a try: ... / except: pass.


Wondering what @NeilGirdhar thinks of my proposal.