Extend type hints to cover exceptions

I dream of the things we could do in a world where this were the base level of digital literacy. Certainly this is one of the great benefits of the open source community, capably outlined by E. S. Raymond.

But the reality is that most users of most software are not developers, and many wouldn’t know what to do with a stack trace. Nor does it matter to them beyond “every time I try to X, the program crashes”. Even if they report it and it gets fixed, at best they face a delay in achieving their goal. Depending on the application, any downtime could be severe.

This “it just crashes” is what I think @Numerlor meant by:

Though the same problem exists if the exception is caught at the very top level in order to avoid an outright crash and, say, hold onto a port allocation.

Only, if they do wrap it in a generic error, they now have to annotate the function as raising this new error instead; is it not easier for them to simply annotate it as raising the original error?

I don’t dispute that this is true for possible exceptions, but it has been my experience that the ones that cause problems are explicit exceptions covering obscure edge cases.

Then perhaps you can consider that what I’m hoping to reach is not the traditional static approach to error checking but a more Pythonic and flexible subset of how other languages have implemented it, just as type hinting is a more flexible and Pythonic solution to being more confident about type handling.

Yes, I’m suggesting that when an exception is let pass, it is let pass explicitly, though still without interfering with the “successful” path of logic. Explicit is better than implicit.

It certainly could be many exceptions, but type-aliases already exist to allow functions to accept or return a value that could be of many different types.

Documentation and testing absolutely have their part to play, as they do for any quality codebase, but so does static analysis. We don’t try to cover calling a function with every type that exists in unit tests; we use mypy to check that it’s only called consistently with its annotations.

One example has been given of a language that rejected it, but I have also given two examples of equally modern languages with much stricter error-handling systems, both of which are functional and well liked.

I’m not saying this is the be-all and end-all solution or claiming to have some higher level of understanding. I’m observing that a problem exists; I think addressing it would make Python a better language overall, and I wanted to start a conversation about how it could be improved. I offered the approach that seemed to me the most Pythonic, using existing Python features and consistency with the Zen.

If the Python community doesn’t see this as the solution, I’d love to see other suggestions presented and to participate in their refinement, as I’ve already prompted. Or perhaps the community doesn’t see this as an issue at all, rather than simply disagreeing over how to address it?

2 Likes

Applications that don’t crash are the ideal, yes. But bugs do happen, and when they do, it’s better for everyone (users as well as developers) if they are easy to fix.

If an unexpected problem arises, a stack trace may not be great for the user, but it’s essential for the developer to provide a quick fix. A well written application absolutely should capture full stack traces and make them available on a crash. The discussion here is summarising that idea by saying “have a top-level catch-all”. That’s not “let the exception just happen”, nor is it “just spew the traceback out to the user”. It’s writing appropriate error handling code, which might be a dialog box saying “The application hit an unexpected error. Full technical details are in the file xxx, please provide this to the developers when reporting the problem.” You could also suggest possible immediate fixes (such as rerunning, in case it was a transient problem) depending on what you think is appropriate for your users. Or it might be writing the traceback to a server log, and restarting the application, for code that provides a background service. Or something else.
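
As a rough sketch of that kind of top-level handler (the log file name, the message wording and the report_crash name are placeholders; a GUI application would show a dialog instead of printing):

import sys
import traceback

CRASH_LOG = "crash_report.txt"  # placeholder path for the technical details

def report_crash(exc_type, exc_value, exc_tb):
    # Keep the full traceback for the developer...
    with open(CRASH_LOG, "w") as f:
        traceback.print_exception(exc_type, exc_value, exc_tb, file=f)
    # ...and give the user something actionable instead of raw output.
    print(f"The application hit an unexpected error. Full technical details "
          f"are in {CRASH_LOG}; please provide that file when reporting the problem.")

sys.excepthook = report_crash

A service would write to its log and restart instead, but the shape of the handler is the same.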

But you need that code in any case. It’s not feasible to check every exception, at every point in the code where it could happen. At a minimum, every function would need to trap MemoryError and KeyboardInterrupt, and anything that did IO (including your code to report errors!) would need to anticipate IOError. If you did that, your application logic would be totally obscured behind error handling code, and you’d drastically increase the risk of logic errors - which is far worse for your users. Even Rust has panic and crates like anyhow.

I don’t deny that having programmed in Rust, I miss knowing “what might go wrong” when coding in Python. But I don’t want it built into the language, because Python is a different language. Having it be common for libraries to document expected exceptions would be a much better fit in Python. And would fit Python’s model of “catch what you know you can handle, and let the rest bubble up for a more generic handler to deal with”.
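
For instance, a library can already spell out its expected exceptions in a docstring “Raises” section today, with no language changes. A minimal sketch (load_config is a made-up function):

import json

def load_config(path: str) -> dict:
    """Load a JSON configuration file.

    Raises:
        FileNotFoundError: if *path* does not exist.
        json.JSONDecodeError: if the file is not valid JSON.
    """
    with open(path) as f:
        return json.load(f)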

1 Like

Indeed. Or for that matter, ANY TECHNICALLY ADEPT PERSON can use this log file. On multiple occasions, I have (as a third party who happens to know how to read log files) assisted people with OBS Studio issues, simply because OBS retains high-quality log files. Sure, a good few OBS users won’t have any idea what all that text means; but most people are capable of following instructions like “go to Help, Log Files, and select Upload Current Log File, then paste me the link”. This is an excellent middle ground between “only the original dev can use this” and “make sure the end user can understand it”.

2 Likes

Wait, the best examples you can give of languages with checked exceptions are Go and Rust, two languages which don’t have exceptions at all (checked or unchecked)?

Java was released in 1995. Look at the set of languages which have received any functional upgrades since 1995. There are probably dozens of them. How many of them have added checked exceptions? How many languages invented post 1995 have been designed with checked exceptions? Precious few, and none of them are exactly popular.

The fundamental issue here is that checked exceptions are a very unpopular approach. Whenever it has been tried, the result has been somewhere between “difficult to use and problematic for developers” and “a disastrous anti-feature”, or worse, depending on how polite we are being.

“We must do something; this is something, so we must do it!” is not a helpful approach.

Mypy’s static typing is an implementation of gradual typing, which is not just a hand-wavy “let’s make something more flexible!” but a carefully thought-out and deeply analysed system. Gradual typing was invented in 2006; it is a mature technology used in many languages such as TypeScript, Raku (Perl 6) and Dart. Is there anything equivalent to that for checked exceptions? I haven’t seen any evidence of that.

If you have some hard evidence for how checked exceptions can be made to suck less, such as

  • languages which have successfully implemented them (and no, languages that got rid of exceptions altogether do not count);

  • an approach to checked exceptions equivalent to gradual typing;

  • or any hard evidence stronger than wishful thinking that this has got to be better than nothing;

then you should tell us.

Remember the Zen:

Now is better than never.

Although never is often better than *right* now.

We’re still at the “never is better than right now” stage of checked exceptions. If somebody comes up with a less awful system, then that might change. But we need something better than just handwaving.

I think that, short of some new invention or breakthrough, the majority view in computer science and programming is that the problem of dealing with exceptions (if your language has them) is not well suited to static analysis with a type-checker, but better suited to testing.

If you test all the corners of your code, you should experience all the exceptions that can occur. Coverage tools can help you determine which lines of code aren’t tested. Good domain knowledge can help you recognise what corner cases haven’t been tested.

A sufficiently advanced type-checker could recurse through your entire dependency stack to determine the possible raised exceptions, without having any exceptions in annotations except to provide concrete details when inference is blocked by “runtime shenanigans”.

A problem here is built-in (binary extension) functions, which is where the exception-in-annotation makes sense.


I don’t know what a good outcome of this analysis would be: any handler for an exception that cannot actually be raised just ends up as a never-run path, and any uncaught exceptions are still handled at the top level (excepthook).


One thing it would give us, however, is that IDEs could have an easier time telling devs which exceptions are possible, and provide details on those exceptions: something very difficult (and unlikely) with docstrings.

3 Likes

I’m not arguing in favor of the OP, but yes, I use that exact pattern at work. We catch any exception, record it - going as far as walking up the stack trace to find where in our code the exception originates, as opposed to a third-party lib - and move on to processing the next chunk of data.

We do this as a means of debugging. We group all the caught exceptions by exception type, message and where they originate in our code to do large-scale failure groupings and ensure all problems are resolved.
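
A minimal sketch of that kind of bookkeeping, assuming a made-up project root and a placeholder per-chunk process() function:

import traceback
from collections import Counter

OUR_CODE = "/src/our_project/"   # made-up root used to separate our frames from third-party ones
failure_groups = Counter()

def record_failure(exc: BaseException) -> None:
    # Walk the traceback and remember the deepest frame that lives in our code.
    origin = None
    for frame in traceback.extract_tb(exc.__traceback__):
        if OUR_CODE in frame.filename:
            origin = (frame.filename, frame.lineno)
    failure_groups[(type(exc).__name__, str(exc), origin)] += 1

def process(chunk):      # placeholder for the real per-chunk work
    return 1 / chunk

for chunk in [1, 2, 0, 3]:   # placeholder work items
    try:
        process(chunk)
    except Exception as exc:
        record_failure(exc)   # record it and move on to the next chunk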

We process a few million Monte Carlo trials that accumulate to around 10 terabytes of data from Thursday afternoon through Monday morning every week.

So yes, this pattern is actually incredibly useful. I get the point of the article, but there are times and places for this stuff. We can’t easily test around this because our simulation is how we drive our engineering design, and yes, sometimes we eff it up, which causes crashes on certain Monte Carlo runs due to the randomness.

It’s your job to eff it up.

Embrace that! Tracebacks help you to solve problems, checked exceptions don’t have much of a track record of that.

Then the implementation changes so CoffeePotOutOfCoffeeError is no longer raised, and that makes the viral declaration of the exception type just noise.

For purity, every function would have to declare that it can have exceptions for out of memory, overflow, division by zero, etc., but that wouldn’t be practical.

A common programming pattern is to attempt a retry on any exception, and then reraise after enough attempts. The exception raised should be the original one, although in Python it makes sense to raise MyEx('failed') from e.
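
A sketch of that retry-then-reraise pattern, keeping the original exception as the chained cause (MyEx and the attempt count are just illustrative):

class MyEx(Exception):
    pass

def with_retry(operation, attempts: int = 3):
    """Call *operation*, retrying on any exception; reraise after enough attempts."""
    for _ in range(attempts):
        try:
            return operation()
        except Exception as e:
            last_exc = e
    # Give up: chain to the original exception so the real cause stays visible.
    raise MyEx('failed') from last_exc

Calling with_retry(lambda: do_the_thing()) then surfaces MyEx('failed') with the last underlying exception attached as __cause__, so the original traceback is still printed.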

Shouldn’t it be a statically-observable error to declare that your function raises an exception that it does not raise?

Shouldn’t it be a statically-observable error to declare that your function raises an exception that it does not raise?

Quite correct.

Rust’s use of Result is basically the same thing as checked exceptions in practice. The fact that PyO3 can map transparently from a Rust Result to a Python Exception in both directions when calling from Python to Rust or Rust to Python shows that they are essentially the same. The Rust syntax for doing this is almost just a (vastly more ergonomic!) version of the way that C code handles Python exceptions inside CPython.

What is different, though, is that Rust also has panicking, an unchecked failure mechanism that is not represented in the type system and is only described in a function’s docs. Panicking is not recommended for anything that might be deemed a recoverable error, but really just for the case where the best you could do is bomb out with a traceback. When a function returns a checked exception but you think the function should not be able to fail, or that the failure case cannot reasonably be handled, you can .unwrap() to turn the checked exception into an unchecked one.

I think that this distinction between checked ‘recoverable’ errors and unchecked ‘unrecoverable’ errors is what makes Rust’s ‘checked exceptions’ manageable in comparison to Java’s. The way I would map this to Python is as ‘expected’ and ‘unexpected’ exceptions. An expected exception is one that you realistically should always consider like FileNotFoundError but an unexpected exception would be something like mylist[0] raising an IndexError. Analogously the Rust typechecker will allow compiling code that does myvec[0] from an empty list (and will then panic at runtime rather than returning a checked exception) but would force you to handle a checked exception if you tried to open a file.

Somehow a distinction like this between different kinds of exceptions that should or should not be annotated or analysed by the typechecker would be needed to make checked exceptions useful in Python. That means the type checker should not look at every possible error that every operation might raise but instead consider only the ‘expected’ exceptions. Probably that means something like looking at explicit raise statements and exception annotations only. Static analysis could not be expected to prove that some function definitely does not emit other kinds of exceptions but it could verify what expected exceptions might be raised or whether all expected exceptions have been handled.
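
A purely illustrative Python sketch of that split (nothing enforces it today): the missing-file case is the kind of ‘expected’ failure a checker could insist on, while the empty-file indexing error stays ‘unexpected’ and just propagates.

def first_line(path: str) -> str:
    try:
        with open(path) as f:
            lines = f.readlines()
    except FileNotFoundError:
        # 'Expected' error, analogous to a checked Result in Rust:
        # a realistic condition the caller is pushed to think about.
        return ""
    # 'Unexpected' error: an empty file makes lines[0] raise IndexError,
    # analogous to myvec[0] panicking; it stays unchecked and simply
    # propagates as a traceback if it ever happens.
    return lines[0]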

3 Likes

I guess you could use this lib if you want to use Rust-style “exceptions” (never tried it myself, just found it).

This still ignores the fact that the entire benefit of exceptions is that you DON’T have to catch them the moment you could cause them.

Can we please stop trying to turn exceptions into a bad-performance version of return values?

1 Like

I think we have it in this thread already, and I don’t think it’s especially hard to elucidate.

I can’t believe I’m going to be the one advocating this, because I hate the idea of typing in general already (even more broadly, of trying to fight the type system of whatever language you’re using). But I think this thread is grossly unfair to the proposal. I don’t think it’s very much like Java’s checked exceptions at all.

Yes, an exception annotation that is actually used by a third-party type checker is in some sense a “checked exception”. But going from there to all the bad stuff associated with checked exceptions is a heinous fallacy. Having “checked exceptions” does not entail that the language dictates which exception types must be checked, and does not entail the absence of a compile-time workaround for the exception check - or even a compile-time means to un-check the exception.

(Rather like how “static typing” doesn’t entail manifest typing, nor an absence of implicit casts, nor the implementation of a notion of either covariance or contravariance, nor even basic type safety.)


I’m not surprised that multiple people seized on the same complaint, as it seems pretty central to what people don’t like about existing implementations of checked exceptions.

Fortunately, the fact that the typing system is optional and bolted-on means that there is not any actual issue here.

For example, add to typing.py:

def ignore_exceptions(expr, *excs):
    """Ignore exceptions that were raised in evaluating an expression.
    This returns the value unchanged. To the type checker this signals
    that exceptions declared raisable by the expression are
    unimportant, but at runtime we intentionally don't check anything
    (we want this to be as fast as possible).
    """
    return expr

By convention, the type checker could ignore all exceptions if the excs list is empty (since it would be useless otherwise). Exceptions could be allowed to propagate, without propagating annotations. This is a mature-adult decision: the choice to say “I acknowledge that I depend on code that claims it can raise FooError; I consider that the risk is minimal and that advertising it to my own callers is not reflective of the normal workings of the API I’m providing.”
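
A sketch of what calling it might look like, with a local stand-in for the proposed function and made-up FooError/process_response names:

def ignore_exceptions(expr, *excs):   # stand-in for the proposed typing.ignore_exceptions
    return expr

class FooError(Exception):            # made-up exception advertised by a dependency
    pass

def process_response(response) -> dict:   # made-up library call, declared as raising FooError
    return {"data": response}

def handle(response) -> dict:
    # To the type checker: process_response is declared as possibly raising FooError,
    # but I choose not to advertise that to my own callers. At runtime the value
    # just passes through unchanged, and the exception (if any) still propagates.
    return ignore_exceptions(process_response(response), FooError)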

Aside from that: static type checkers in Python aren’t part of the compiler, but they may very well be part of the IDE. In which case, rather than simply complaining about exception annotations not being properly propagated, they could just give you an option to fix the code to propagate them.


But it wouldn’t. It would also consider exception declarations attached to function calls.

You don’t always know what to test. And if the undocumented exceptions are in the piece that you’d normally mock out for testing, good luck.

This is the part where I actually get to an example. A few weeks ago, someone asked on Stack Overflow what exceptions “should” be considered when calling httpx.response.json. As noted, the documentation doesn’t give a proper indication. After reading the source, I concluded that it makes sense to check for JSONDecodeError and UnicodeDecodeError.
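
In code, that defensive call would look something like this sketch (the URL is a placeholder, and whether these two exception types are really the right set is exactly the judgement call under discussion):

import json
import httpx

def fetch_payload(url: str) -> dict | None:
    resp = httpx.get(url)
    try:
        return resp.json()
    except (json.JSONDecodeError, UnicodeDecodeError):
        # The body came back malformed; treat it as "no usable payload"
        # rather than letting the traceback reach the caller.
        return None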

Do I need to be told that kind of thing? Arguably, yes. My conclusion from that investigation happened to line up with my common-sense assessment. But on a below-average day, I could easily have overlooked the possibility of UnicodeDecodeError, even though I’m normally the one playing the role of the old curmudgeon grumbling at people to Think about encoding and Stop Pretending that bytes are text because 2.7 is as old as Windows 7 and Unicode is older than either ASCII or EBCDIC were when Unicode was born.

Could I criticize the documentation for failing to cite those exception types? Yes, but. An annotation would equally well document the code, and it would potentially show up in someone’s IDE. Documentation could also be generated from it. It’s a little strange to see people wondering what the value is in having a form of documentation that’s also executable, in the same context where we’re talking about the importance of testing, which the XP folks have always marketed as being exactly that.

Speaking of which: could I determine those types through testing? Maybe. Kind of. If I’m lucky. I’d have to know how to mock just enough of the response to where the library hasn’t attempted either JSON or text decoding yet. Then I’d have to configure the mock in a way that caused those problems to occur. But if I knew how to do these things, I would almost certainly have already guessed the corresponding exception types just by the Feynman algorithm (not the one on Wikipedia).

Could it actually realistically raise either of those exceptions? Sure, pretty decent likelihood. Data retrieved from the Internet is invalid all the time.

Could it also raise TypeError or ValueError? Sure, but this is much more likely to indicate either that I gave it bad arguments, or that there’s an error in its own logic. Similarly, httpx.ResponseNotRead would seem to indicate an error in my own logic.

Could it also raise KeyboardInterrupt or MemoryError? Sure (notwithstanding that a KeyboardInterrupt would be more something that happens during the JSON parsing rather than because of it). But these are problems that I either care about regardless, or don’t care about regardless. They don’t meaningfully have anything to do with my decision to attempt to decode JSON from a web response.

Could it also raise any of countless other things? I’m sure it could, but. I maintain that JSONDecodeError and UnicodeDecodeError are qualitatively different from everything else in this context. They’re the ones that fundamentally are caused by something bad in the request data, rather than in the pre-existing program state.

Are those explicitly raised? JSONDecodeError is, but only indirectly. UnicodeDecodeError is not, unless you count the C source code for the bytes type.

As someone who is writing a function that calls httpx.response.json, is it likely that there’s something meaningful I can do about JSONDecodeError and/or UnicodeDecodeError? My claim is, yes, it’s much more likely for these two than for anything else that might be raised - because of the nature of the circumstances that are liable to cause them.

So this is something where I would think it makes sense for the library function - which is already using type annotations for its parameters and return value (which I personally would just ignore, as I am not using mypy etc.) - to be able to declare that it notably raises those two things, in a way that mypy can be aware of. And then, in my own code, I have a choice:

  • I can just not use mypy (the choice I already committed to ahead of time).

  • I can add the appropriate try/except blocks around the call.

  • I can declare that my own function also raises those exception types, notably.

  • I can, perhaps, ask my IDE to automate either of those.

  • I can mix and match the above two.

  • I can politely ask mypy to shut up about it - perhaps with the typing.ignore_exceptions proposed above, or perhaps with some kind of pragma recognized by mypy (but I guess the latter is not favoured by type checker maintainers, or else typing.cast wouldn’t exist in the first place?).

And if I explicitly raise something in the same function, I can ask mypy to hold me accountable for that, and document it - I probably wrote it for a good reason. On the other hand, if I decode bytes to string, maybe I don’t want mypy to nag me about that, because maybe I control the data and am confident about its encoding. If that’s not the case, I would still be able to volunteer such a declaration.

This is not at all like Java’s approach. We aren’t, for example, arbitrarily deciding that RuntimeError, TypeError, ValueError and SyntaxError (or some other set) are “free”, and everything else has to be checked. We aren’t preventing compilation if the checks aren’t satisfied, unless we give a static type-checker that authority explicitly. We’re allowing ourselves to ignore the red flags, with varying levels of control. We’re admitting the possibility that the library code itself ignored some red flag for a good reason (say, it knows about the ResponseNotRead possibility, but chooses not to propagate that to us from this part of the API).


Yes, you would lose the ability to advertise the database connection error through len. You would still have more information signalling than otherwise - in particular, the ability to advertise the possibility of TypeError from a non-iterable. (As an aside, I would not be very happy to find out that a library type was depending on a network connection for __len__. That’s normally something that’s supposed to be either cached or trivially calculable.)

Alternatively: the subtype can perfectly well have an additional notation, and if the type can be statically proven (or cast), then the additional possibility is surfaced.


In principle, I’m sure. Sounds like it would be slow, and overkill. You do raise a good point about the C-Python boundary. But I don’t think I’d want the Python interfaces to, say, builtin types to report those exceptions automatically. Maybe I’m wrong. I wouldn’t use the feature anyway (I just think it’s an entirely reasonable idea now that the typing worm-can is open), so that’s for others to vote on.


I don’t see why. Will mypy complain about

from typing import Optional

def example() -> Optional[str]:
    pass

? That declares that it could return a string instead of None, but it won’t. It seems bad for a type checker to complain about that. What if example is implementing a pre-existing interface?

7 Likes

FUNDAMENTAL RULE: Your job is not to stop exceptions from happening. If you start out by thinking “every exception that could happen has to be caught, and if I don’t catch it, I am a bad programmer”, then of course you are going to want to be told exactly which exceptions can happen. But that is simply the wrong attitude to exceptions. It won’t ever help you.

Write your code. Then see what problems you can actually handle, and handle those. ANYTHING else - whether it’s TypeError, UnicodeDecodeError, or a segfault caused by the OOM killer - will fall into one of these categories:

  1. Stuff you actually can handle, and need to. Cool! Now that you know about it, add an exception handler.
  2. Stuff you can’t handle, but you need generic handling of all exceptions (usually “log this and go back to the main loop”). Cool! Easy. Add generic handling.
  3. Irrelevant stuff like “the computer might have a CPU with a backdoor installed by Martians to try to steal Earthling technology”. Sure it could happen, and it might break your code, but there is literally nothing you will ever be able to do about it, so take no notice.

And, importantly: Assume everything is in category 3 unless you have evidence otherwise.

So, no: we do not have it in this thread already. Your example is yet another case from the same justification of “I got an exception and I wish I’d known in advance to catch it”. Which isn’t actually helpful. And this right here is precisely the problem of checked exceptions:

That’s exactly what we most certainly do NOT want to end up with. Exceptions exist to save us from having to check all return codes everywhere. A system like the one you’re describing here turns exceptions back into return codes.

It’s functionally equivalent. You’re getting rid of the RuntimeError loophole but otherwise it’s the exact same system. I don’t see that this is an advantage.

I’d agree with you; it shouldn’t be an error to have a potential return (including an exception) that can never eventuate. It could be a stub that hasn’t yet been written, it could be implementing a standardized interface, it could be any number of things, and “always returns None” is definitely valid here.

(It MIGHT be worth having a warning, though I personally think not. It definitely shouldn’t be an error.)

2 Likes

There is a fifth possibility not mentioned, which is to allow the type checker to infer what is raised by one function from what other functions it calls. It is not necessary for code to either handle exceptions or declare that it raises them, because the checker can infer that. The checker should only check when a raises annotation is used explicitly. You don’t need to handle all error codes if you do not make any claim about what is or is not raised.

This could look like:

# This is a function that might raise an
# exception. There is no explicit raise statement
# but we annotate the possibility of the exception
# explicitly so the checker knows that this
# possibility should be checked when it is asked.
def func_c(x: int) -> float:
    # raises ZeroDivisionError
    return 1/x

# This is a placeholder for many intermediate
# functions because our func_a1 etc functions do
# not call func_c directly. These functions do not
# need to have any explicit raises annotation
# because the checker can infer what is possibly
# raised from seeing what is called.
def func_b(x: int) -> float:
    return func_c(x)

# Checker accepts this because it correctly
# reports that ZeroDivisionError might be raised.
def func_a1(x: int) -> float:
    # raises ZeroDivisionError
    return func_b(x)

# Checker accepts this because ZeroDivisionError
# is handled.
def func_a2(x: int) -> float:
    # raises None
    try:
        return func_b(x)
    except ZeroDivisionError:
        return 0.0

# Checker accepts this but can infer that func_a3
# might raise ZeroDivisionError even if that is
# not explicitly stated.
def func_a3(x: int) -> float:
    return func_b(x)

# Checker rejects this. It claims not to raise
# anything but calls a function that raises and
# the exception is not handled anywhere
def func_a4(x: int) -> float:
    # raises None
    return func_b(x)

# Checker accepts this because we told it to
# ignore the possible exception. We believe that
# we know better than the checker that the
# exception would never be raised and so we
# explain that to the checker. It is possible that
# we might be wrong about that but if we are then
# that means there is a bug somewhere and it is
# acceptable to fallback on printing/logging the
# traceback at runtime. Something better than
# type: ignore could be used for this.
def func_a5(x: int) -> float:
    # raises None  : type: ignore
    if x == 0:
        return 0.0
    else:
        return func_b(x)

Importantly, note that func_a3 is accepted but func_a4 is rejected by the checker. This is because the raises annotation is not required, so func_a3 can happily just not mention the possibility of raising. The checker would only reject a function when a raises annotation is actually given and is inconsistent. If no annotation is given, the checker just understands that the function possibly raises, without requiring the programmer to state that explicitly.

In practice, what this would mean in a large codebase is that there would be certain places where you know relevant exceptions can be raised and you annotate those explicitly. Then there would be certain other places where you want the exceptions to be both documented and checked (e.g. the public API of a library). In between, most functions would not need to declare that they do or do not raise anything, although there might be a few places that do the analogue of func_a5 or func_a2, which wrap a function that raises to make a function that does not raise.

Of course in the event of bugs there might be other cases where exceptions are raised that differ from what is claimed by the raises annotation. Those can clearly be understood as bugs though and in those cases the best course of action much of the time is just to allow the exception to propagate to top-level and be handled by the logger or print the traceback etc.

1 Like

That’s the point: the purpose of this system is to obtain that evidence from the library author.

In the case of the httpx example, the author of that code would know that JSONDecodeError and UnicodeDecodeError are likely to be relevant, because again they are qualitatively different: there is a clear reason one can point at for their cause, and that cause is related to data that was just now introduced to the program. That is evidence that the author has, that those exceptions are likely to fall into either case 1 or case 2 for the client.

No, not even remotely. I listed multiple options, you trimmed off the ones that are “literally ignore the information that was offered” (silencing the type checker or not using it at all), and are now talking about what the user would “have to” do. And even then, the only thing it has in common with return codes is the completely voluntary, opt-in system for propagating the type information, not the actual data (return value or exception object). It doesn’t, in any way, cause people to have to propagate something through their system that was advertised by a third-party library. They can choose to ignore the type information in the code that interfaces to the library, without suppressing the exception. And then callers in their own code require no alteration, and still potentially receive those exceptions.

I think I very clearly explained many ways that it’s different. In this system, nothing is enforced by the compiler at all. Even the type checker is not offering any opinion that certain exceptions should be “checked” and others “unchecked”. It is taking its cues solely from a) other type annotations and b) explicit uses of raise. And then the caller is still free to either use or not use the information. Exactly because this isn’t a built-in feature of the type system, it’s not limited to the options “handle the exception” or “propagate the exception type information”.

But it is more useful than a comment that says “hey, this can raise a FooError by the way” because tools can check it if you want them to (and it’s in a standard format that is itself syntax-checked). In particular, IDEs can directly inform you about this specific, salient bit from the would-otherwise-be-documentation, rather than you needing to consider that the documentation might have this important thing to say and thus looking it up.

That is simply not true. For clarity, the other two options are:

  1. Stuff you actually can handle, and need to. Cool! Now that you know about it, add an exception handler.
  2. Stuff you can’t handle, but you need generic handling of all exceptions (usually “log this and go back to the main loop”). Cool! Easy. Add generic handling.

The library author DOES NOT KNOW what you are capable of handling. Only you do. There is absolutely no way for the library author to be able to tell you this in any way. The author cannot possibly know what you would do if you are attempting to read something and you get a UnicodeDecodeError. Maybe it’s something you can handle (because you know that you have a server that sends back “sometimes UTF-8, sometimes ISO-8859-1”, and you have to cope with that). Maybe it’s something you can generically handle, but even then, the library CANNOT KNOW how to go back to your main loop - only you do. The library author MUST assume that these are category 3 exceptions: stuff you don’t expect to happen and can’t handle. That is why they are being raised as exceptions in the first place (with a very very few exceptions like dunder functions that could return any object whatsoever, and thus must raise an exception to signal the non-return case).

Do you not understand this? I don’t think there’s much point continuing the discussion here if you don’t.

I wrote out a reply, but I think this will get too heated following the current thread of discussion. I think it will be easier if I just write out my proposal, concretely, in a new thread.

2 Likes

Great summary and helpful links. Thanks.