Introduce Result class to allow a modern representation of failure cases

A quick check would show that even some of the examples given in this thread are incompatible with each other in interface.

requests, which was an example provided in favor of this, uses a mix of exceptions and objects which hold failure, but can turn them into exceptions.

The only place it holds failures is HTTP response codes that indicate an HTTP error, but not an error within requests. These are promoted to full exceptions upon user request (Response.raise_for_status()) which can be called unconditionally for those that want to treat HTTP error codes as application errors.
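
For reference, that pattern looks like this in practice (the URL is just an illustrative placeholder):

import requests

resp = requests.get("https://example.com/missing")
resp.status_code         # e.g. 404 -- held on the response object, not raised
resp.raise_for_status()  # promoted to requests.HTTPError on demand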

Requests appears to do this not to use result-like objects, but to differentiate between places in which an exception could occur internally and allow the two to be handled differently despite originating from the same user-exposed call-site.

5 Likes

I still think requests is a very bad example, and I don’t want fixation on it to dominate this thread. The boundary between protocol errors (4xx, 5xx) and non-protocol errors (timeouts, etc) is very clear and intentional.


I’m curious about the possibility of treating the .result(), .exception() protocol as something which could be better promoted and promulgated via a sugar.

e.g.,

future = some_complex_operation()
result = future?.data.foo

which raises if future.exception() is populated.
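
Roughly, the sugar would be shorthand for something like this against the existing Future protocol (some_complex_operation is just a placeholder that returns a Future):

future = some_complex_operation()

exc = future.exception()   # blocks until the future is done
if exc is not None:
    raise exc
result = future.result().data.foo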

To be clear, I know that this isn’t well-enough formed to propose yet.

1 Like

The thing about result types is that they are supposed to act like wrappers. Right now, if you want to promote an error to an exception from one of these libraries, you need to know how to consume the error in your application, since each one returns its own custom class. There is nothing in common to tell you whether each one is in an error state.

This means you need to know how to read that error, or find the API like raise_for_status that does this for you. That is extra code that maybe isn't needed (either for you or the library).

If your code simply wants to know whether the transaction was successful or not, a result object lets you know this without caring about the details. Then, once you know it failed, you reach in and read the details from the specific object tasked with containing them (for requests this object would have attributes specific to HTTP codes).

It's like with Futures: a future will let you know if an async task failed or not without caring about the details. However, it's essentially a wrapper; libraries can still represent how they failed by passing their own exceptions.
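
For example, with concurrent.futures (flaky standing in for some library call):

from concurrent.futures import ThreadPoolExecutor

def flaky():
    raise ValueError("boom")

with ThreadPoolExecutor() as pool:
    fut = pool.submit(flaky)
    if fut.exception() is None:            # did it succeed? no details needed
        print("ok:", fut.result())
    else:
        print("failed:", fut.exception())  # reach in for the details only now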

The other reason they do this is that, as I showed above, it's easier to collect the responses in an array (failed or not) if you return a result object than it is with throwing an exception. This has become much easier now that libraries are moving towards async APIs thanks to Futures, and results would unlock this for sync APIs without the need to reinvent anything.

I don’t really see the benefit to this syntax in python

In the status quo, where you want errors to just propagate for someone else to handle, exceptions already give you that behavior.

value = some_complex_operation().data.foo

Without it being a language design choice where everything that has ever existed and will ever exist uses this, you're always going to need to do this. Result objects aren't magic, and they only seem seamless in the languages that use them consistently because of that consistent use and the additional syntax/compiler features that enhance them.

“I don’t want to read the documentation for libraries I use” is not a very compelling reason.

Most async apis don’t expose futures directly to end users, and IMO that’s a good thing. futures are a low level tool, and the reason they work the way they do is the underlying eventloop as a runtime within a runtime. asyncio explicitly guides end users towards the higher level APIs for a reason, the lower level ones have some differences that only people writing particularly low-level asyncio library code should usually worry about.

You can also collect failures with exception groups already.
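
e.g. something along these lines (process and items being placeholders):

failures = []
for item in items:
    try:
        process(item)
    except Exception as exc:
        exc.add_note(f"while processing {item!r}")  # attach contextual information
        failures.append(exc)
if failures:
    raise ExceptionGroup("some items failed", failures)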

There are even much saner async event loops like trio that don’t have futures at all. (If we had time travel I’d much rather asyncio with its tasks and futures was never added to the stdlib cementing its horrible APIs forever as the default implementation)

1 Like

Exception groups aren't a replacement for this if you want to build an array of results and then log the ones which failed with contextual information the low-level Exception might lack, while still having your code try to process as many as possible.

This is FAR easier in async code in python, let me show you

import asyncio

items = ["jack", "bob", "jeff"]
processed_items = [asyncio.ensure_future(async_operations_that_might_fail(i)) for i in items]

await asyncio.wait(processed_items)
failed_items = [t for t in processed_items if t.exception() is not None]
raise ExceptionGroup("processing failed",
                     [MySpecialExceptionWithMoreDetails(t.exception()) for t in failed_items])

You cannot do this in sync code without basically recreating the concept of a future/result:

class Result:
    def __init__(self, value=None, exception=None):
        self.value = value
        self.exception = exception

process_items = []

for item in items:
    try:
        process_items.append(Result(value=operation_that_might_fail(item)))
    except Exception as e:
        process_items.append(Result(exception=e))

Exception groups of course allow you to throw multiple, but if you want to preprocess these errors to maybe add contextual information that the exception from operation_that_might_fail may lack, then you essentially need to keep track of these errors somewhere, either in an array or a context manager which gathers them all up. This adds noise to the code that doesn't need to exist, and complexity, since now you need to know what that extra wrapper does.

There is no reason why we couldn’t just delete the “wait” from the async code and have it work the same way if we had result types:

items = ["jack", "bob", "jeff"]
# hypothetical: each call returns a result object instead of raising
processed_items = [sync_operations_that_might_fail(i) for i in items]

failed_items = [r for r in processed_items if r.exception is not None]
raise ExceptionGroup("processing failed",
                     [MySpecialExceptionWithMoreDetails(r.exception) for r in failed_items])

As the raise shows, it would even work with Exception Groups, demonstrating it's a complementary feature, not a replacement.

Aside from this, if we want to introduce a null-safe operator ("object?.with?.no?.name"), a standard Result type is a must. If we tried to add that today we would have two choices:

  1. We return None and then developer has to work out which key is the cause
  2. We throw an exception therefore defeating the purpose of having a “?” in the first place

The third option would be, of course, to return a Result object with either the requested value or an Error describing which part of the chain is missing the key.

But if we start debating these null operations we will be here forever, so I hope the first bit of this comment will at least show it would be useful outside of that specific thing

You’ve significantly overcomplicated this, and the majority of the python code I write nowadays is async aware, I’m quite familiar with the low level and high level options available here.

However, I don’t think synchronous code should continue in scope with unhandled errors. The loop with try/except is preferable here. It’s synchronous code, you should be handling the error before continuing. (Handling can be determining it’s not an error that impacts the remaining work and deferring handling, but you should actually determine that before continuing)

Yes, but here is the thing: my async code could also have the exact same issue, could it not? I could have fired off 100 API requests and be waiting for them all to complete, even though they are all guaranteed to fail.

You are arguing I should be handling them as they come back to see if I should cancel the other requests. Which to me sounds a lot like arguing we should remove the ability to wait for all the futures to complete. Functionally there would be no difference between this async code running with one thread and my sync code example.

And if you accept that it's sometimes acceptable to do some work before processing the results based on this, then why is it suddenly bad that we want the ability to do this with sync code?

It's not like the errors would be unhandled. They cannot use the result itself since it's just a container; if they tried to reach in and grab the value of the Result, it can throw the underlying Exception, just like it does with Future, to stop them from not handling it.

It would quite simply be functionally the same as using a Future with ThreadPoolExecutor(max_workers=1), but without needing to pull in an entire concurrency library.
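
Roughly this, with the same placeholder names as above:

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as pool:
    futures = [pool.submit(operation_that_might_fail, item) for item in items]

# each Future already behaves like a sync Result: inspect it, or let .result() raise
failed = [f for f in futures if f.exception() is not None]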

Somewhat actually. Async code like you are describing is a great way to get IP banned for not handling 429s (The way of handling this is slightly out of scope for this thread, but firing off 100 requests simultaneously without any handling to the same remote is generally not a good idea to begin with)

Good error handling is a lot more complex than what most people do. I see this as a step away from the right direction on this.

To be clear, neither result types nor exceptions are inherently superior from a language design standpoint, but intentionally introducing a pattern with the intent to use it to not handle errors at the earliest point they can be handled is where I see this moving further from good error handling.

I do also think that having result types and exceptions side by side increases the barrier to proper error handling being seen as anything other than a chore. There's already a lot of code that only properly handles the happy path.

3 Likes

I think there are some misunderstandings here, so let me summarise why I think a Result type is actually a way of ensuring the unhappy path is handled.

In your opinion, a "try now, catch now" approach is more desirable because it may prevent an application from becoming unstable, by handling errors (such as timeout errors) at the immediate opportunity.

Therefore in order to make sure this happens, your preference is to always use and recommend others use Exceptions.

However, there are many situations where a "try now, catch later" approach makes more sense, which for clarity is not the same as a "try now, ignore forever" approach that silences errors.

There have been cases, for example with database migrations, where I need the process to migrate what it can and then tell me what it couldn't migrate at the end. It's a better experience if I see the failures in the logs at the end, so because python is "try now, catch now" by default I have to code in the logic to monitor what failed and report it.

To be clear, these migration errors are just recoverable errors, such as not being able to find a database entry, rather than something like corrupted data.

These migrations are isolated per database entry, so when a specific entry fails to be migrated we handle it by halting any further logic that may be unstable for that entry.

But we do not prevent any further processing of otherwise unaffected entries.

As we said before, we do not have a time machine, and unfortunately it's for this kind of reason that many libraries feel some results from their API do not need to be handled straight away and instead fit a "try now, catch later" approach.

I'm not saying this is right or wrong, all I'm saying is it exists. We can't undo it nor prevent libraries from continuing to do it.

What we can do is create a framework around it to make it a more productive experience

For example you may think this code looks correct

try:
    api = requests.get("https://example.com/api/user")
    message = api.json()["message"]
except APIError:
    ...  # do stuff

But unfortunately you would be wrong. Firstly, this API returns a result object which can represent an unhappy path, since it subscribes to "try now, catch later".

The HTTP body has a field "message" containing an error message, but so does the API response when it's successful, so this doesn't fail when it should.

The only way to handle this error is to understand this bespoke Result type's specific API, either by understanding what it means for it to be in an error state and analysing those fields, or by using whatever APIs it offers to convert this into an exception.

This means developers need to learn, for every library they use that doesn't use exceptions, how to make sure its unhappy paths are handled immediately.

Futures and Result types don't have this problem; the API hands you a container object.

What you're interested in is contained within it:

try:
    api = requests.get("https://example.com/api/user")
    message = api.value.json()["message"]
except APIError:
    ...  # do stuff

If we do this to grab the value of its result, then just like with futures, if there is an error it should throw an exception indicating we've tried to access an unhandled error.

This protects us from the code above. These semantics allow these APIs to keep their "try now, catch later" mentality while ensuring you catch the error the moment you attempt to consume the result of a failed operation.

Just like with futures, you can also reach in to grab the exception:

api = requests.get("https://example.com/api/user")

if api.error:
    message = "error loading"

This error would be specific to the needs of the particular API, and can be handled the usual way with try/except, or by reading the error property, like with futures.
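
A minimal sketch of the semantics I mean (names illustrative, not a concrete proposal):

class Result:
    def __init__(self, value=None, *, error=None):
        self._value = value
        self.error = error

    @property
    def value(self):
        if self.error is not None:
            raise self.error   # consuming a failed result surfaces the exception
        return self._value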

Now this isn’t a guarantee that a bloody minded person wouldn’t intentionally ignore the unhappy path and find ways to silence these exceptions

But that’s the same as now with futures and exceptions and hard to fix without becoming unpleasant and having to annotate your code with all the exceptions it could ever throw

What this unlocks is a proper framework around try now, catch later for sync code and not just async code

It will still force you to consider if you are letting errors slip through, via providing a consistent interface between libraries that are already providing their own result types.

This prevents the cases where forgetting to use their API correctly provides your application with invalid data.

And it unlocks paradigms such as the null operator where the developer doesn’t want to handle the error for accessing nested keys immediately but only when they attempt to consume it

name = object?.name
print(name.value)  # "crash!"

In some ways you can think of it like an exception generator: the computation has happened, but the exception throwing hasn't yet.

Please don't add a result type to the standard library as an alternative to exceptions. We don't need another way to do things here.

If things should error, they should error. If things don’t know if they should error and have a returnable value, they should return that value or state and let the caller decide.

If things shouldn’t error, they shouldn’t error.

requests’s api is not something people should mimic. Response.raise_for_status() is horrible. requests successfully made the request, it has valid response data for you. If you decide to error, you should be doing that, not requests, and you can do it based on the value, you don’t need a result type for this.
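
For example (MyAppError being whatever your application decides is appropriate):

resp = requests.get("https://example.com/api/items")
if resp.status_code >= 400:
    raise MyAppError(resp.status_code, resp.text)
data = resp.json()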

That’s not correct. Futures and Result types without syntax support are just another return type that people have to learn.

@mikeshardmind brought this up already, and it’s hard to read your response as being in good faith.

This also appears to misrepresent someone else and directly state that someone has an opinion that hasn’t been expressed here.

Or an attempt to side-step reasons this was a bad idea in Introducing a Safe Navigation Operator in Python - #157 by Liz but frame it differently so that people might not see that it still has the same problems, just expressed differently

3 Likes

That’s not an accurate summary of my opinion. Errors should be handled at the earliest point in time at which there is enough information to handle them. That’s different from immediately.

That’s not inherently an inaccurate description of my preference in a vacuum, but you go on to use this as if it means something to me which it does not, so… I’m going to say it’s inaccurate.

Use exceptions when appropriate, return data when you have it. Let each layer of the code decide what is and isn’t valid program state, but never represent invalid program state with the same objects you use for valid program state.

Intentionally providing a gotcha with a constructed example without the context for it to be evaluated isn’t useful to discussion…

Yes? If you’re writing a wrapper for an API, you get to decide what is and isn’t valid data. If an API wrapper is exposing this to you, then it might be a leaky abstraction that would have been better served by not giving you back the bespoke result type, but always giving you back valid data or throwing an exception.

This isn't true, as I've brought up and as has been reiterated by others both in this thread and in the thread it split off of. If you return a Future, users still need to know it's a future and use it appropriately. The same holds for a Response, or even a (Value | None, Exception | None) tuple. You always have the responsibility of knowing the behavior of functions you call.

The point is, as much as people do not want requests to do this pattern, I can point to about fifty other libraries that do this; AWS's boto SDK, for example, also does this custom result type pattern.

Meaning in my code I have had to implement boilerplate for handling error 500s for all my requests from the requests library, boto3, and many other libraries.

Of course people need to read the documentation to understand the return type, but this is about providing a consistent way of handling errors across APIs so that they don't need to think as much when reading or writing code ("oh, it's a result type, I already know the API" versus "what's an HttpResponse, and when writing code for a BotoResponse, essentially a wrapper for the AWS SDK, will they both have a status_code?"). This means there have been errors I've seen in production that were caused by valid-program-state errors slipping through, because the only way to make sure these are handled is exceptions; that's the only tool python has to enforce error handling.

And if an API doesn’t use that for valid program state you have to hope your team didn’t miss anything.

This currently requires consumers to write a bunch of custom code to then expose those errors as exceptions. You end up with a lot of boilerplate, and you lose the ability to store the results in a meaningful way that doesn't end up with a lot of complicated code, even using exception groups.

For boto, for example, I've had to write code like this:

try:
    response = lambda_client.invoke(FunctionName="my-function")

    if response["StatusCode"] == 500:
        handleError()

    print(response["Payload"])
except BotoException as e:
    handleException()

In both cases these are unrecoverable errors for this code path in my application, and it seems crazy that I either need to duplicate a lot of my error handling code, or somehow wrap it up in a custom exception just to say it failed and hope everywhere in my app which uses lambda_client.invoke does the same. Instead it would make more sense to be able to do this:


try:
    response = lambda_client.invoke(FunctionName="my-function")
    print(response.value["Payload"])
except (BotoError, BotoException) as e:  # in my case both are exceptions from the POV of my app
    handleError()

Futures already solve these problems, they aren’t unique to python. Javascript has promises too. Futures are basically just Results but for async. Once APIs adopted these constructs it made error handling easier IMHO.

If you decide to handle what the SDK thinks might be an error, you can still do so; it's very easy to inspect a Future or Result and decide "I don't think that's an error". One of the reasons these APIs return these result types is exactly so you can store the result in an array in a way that's harder with exceptions without losing the context (you need to wrap the request, capture the exception, then add the exception alongside the details in an array).

This is exactly what Futures and Results let you do with a lot less boilerplate than exceptions.

This is the other reason other languages have adopted Results: it allows them to represent something that is valid program state but not a success, i.e. "this isn't an exceptional situation, BUT that doesn't mean you shouldn't have implemented an unhappy path", even if the unhappy path is to do nothing.

There is a reason all of these systems have implemented this AND have a way of raising exceptions.

It was considered a bad idea because the "?" operator would have returned None, and you would have had no idea which key it was:

object?.name?.here

Was name None, object None, or here None? Throwing an exception defeats the whole purpose of a "?.", so instead a "?." could return a result type where one possible value is an errored Result containing an exception describing which key it was.

Meaning you could do this

name = object?.user?.name

try:
    print(name.value)  # throws KeyError if the Result is in an error state, otherwise you get the value
except KeyError as e:
    print(f"Could not decode username: {e}")

And adding another way to do it would fix this for you? Leaving off for a moment that I don’t agree with your assessment of the situation, would adding an additional method of doing this actually change existing code you are frustrated with that doesn’t need this to be less frustrating, or would it just add your preferred way, and all of the existing code would continue to exist, making other people need yet one more way they then need to know how to handle?

I’m aware of what futures are, they also don’t do what you claim. There’s actually a current proposal in JS to wrap exceptions in results because futures and promises aren’t that.

I don't think they solve this any differently. They have two possible states, success and failure. This is the same as hitting or not hitting an exception, and the exception can hold data. In fact, if you do

value = await some_asyncio_future

you might have an exception raised

The fact that the exception is currently in a suspended state waiting to be fetched doesn’t make them not an exception.
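
Concretely, with a bare asyncio future (purely for illustration):

import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.set_exception(ValueError("boom"))
    try:
        value = await fut          # the stored exception is raised right here
    except ValueError as exc:
        print("handled:", exc)

asyncio.run(main())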

2 Likes

pedantically, to avoid any future argument on technicalities: they have more internal states than this, and it's clear that you are referring to their state when finished and available to a consumer.

Beyond that, what you said about just ensuring you only ever return valid program state or raise an exception rings correct to me, and it seems like the issue here is libraries that don’t.

(edit: view whole context, sorry, I described the part below the quote in the specific)

This fails to be better than just handling it where it happened, the same as the invalid value case in the other thread. You shouldn’t ever be in that program state as that requires having attached invalid data (either an invalid name to a user, or an invalid user, to that object) and passed it on unhandled.

Designing a synchronous equivalent to concurrent.futures — Launching parallel tasks — Python 3.13.0 documentation feels like it would be a good anchor for a result interface design.

For example:

results = list(itertools.try_all(f(x) for x in iterable))
results2 = list(itertools.try_all(map(g, iterable2)))
results3 = list(itertools.try_all(f() for f in iterable3))

Internally, implemented as something like:

class TryResult:
    def __init__(self, value=None, *, exception=None):
        self.value = value
        self.exception = exception

    def __bool__(self):
        return self.exception is None

    @classmethod
    def try_call(cls, callable, /, *args, **kwds):
        try:
            value = callable(*args, **kwds)
        except Exception as e:
            return cls(exception=e)
        else:
            return cls(value)

    @staticmethod
    def split_results(results):
        successes = []
        failures = []
        for result in results:
            if result.exception is None:
                successes.append(result.value)
            else:
                failures.append(result.exception)
        return successes, failures

    # other methods as appropriate to improve
    # developer ergonomics (potentially even
    # proxying some key operations like
    # attribute lookups and indexing, propagating
    # `self` if an exception is already set)


def try_all(iterable):
    itr = iter(iterable)
    while True:
        item = TryResult.try_call(next, itr)
        if isinstance(item.exception, StopIteration):
            return
        yield item

I’ve written code equivalent to the above many times (usually as a broken out loop, accumulating successes in one list and failures in another, hence the static method above to split a list of results into the two categories without the TryResult wrapper). The downside of the two list approach in a generalised API is that it’s difficult to associate the inputs with the outputs unless you return (input, result) 2-tuples everywhere (which means the input has to be a list of callables, it can’t just be a lazy iterable as in the examples above).
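
A hypothetical pairing variant (requiring callables, e.g. built with functools.partial, rather than a lazy iterable of already-evaluated calls) could look like:

def try_all_pairs(callables):
    for fn in callables:
        yield fn, TryResult.try_call(fn)

# usage: results = list(try_all_pairs(functools.partial(f, x) for x in iterable))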

With a defined result type, the API consumer can choose which of those approaches works best for them. For example:

from itertools import try_all, TryResult

attempts = try_all(f(x) for x in iterable)
successes, failures = TryResult.split_results(attempts)
if failures:
    raise ExceptionGroup("Something broke", failures)

for input, result in zip(iterable, successes):
    ...  # Do something with the successes

from itertools import try_all

attempts = list(try_all(f(x) for x in iterable))

for input, result in zip(iterable, attempts):
    if result.exception is not None:
        log.warning(f"{input} failed ({result.exception})")
        continue
    ...  # Do something with the successes
2 Likes

Some additional thoughts on how the itertools.try_all idea would interact with future objects: since calling result() on a future already raises the exception if one occurred, their direct conversion to a TryResult instance would look like TryResult.try_call(fut.result).

You could also use try_all to implement an alternative to concurrent.futures.wait and concurrent.futures.as_completed that waited for items in order, but timed out immediately if the wait deadline had already passed:

import itertools, time

def try_concurrent_futures(fs, *, timeout=None):
    if timeout is None:
        # Block indefinitely
        return itertools.try_all(f.result() for f in fs)
    # Timeout is relative to the start of the original call
    deadline = time.monotonic() + timeout
    return itertools.try_all(f.result(max(0, deadline-time.monotonic())) for f in fs)

The sync/async conflict means that the synchronous try_all couldn’t be used directly with asyncio.Future objects. The best that could be done is an equivalent async iterator API:

import asyncio
import time

from itertools import TryResult

class AsyncTryResult(TryResult):

    @classmethod
    async def try_await(cls, awaitable):
        try:
            value = await awaitable
        except Exception as e:
            return cls(exception=e)
        else:
            return cls(value)


async def try_all_async(aws, *, timeout=None):
    if timeout is None:
        for aw in aws:
            # Block indefinitely
            yield await AsyncTryResult.try_await(aw)
        return
    # Timeout is relative to the start of the original call
    deadline = time.monotonic() + timeout
    for aw in aws:
        wait_timeout = max(0, deadline - time.monotonic())
        yield await AsyncTryResult.try_await(asyncio.wait_for(aw, wait_timeout))

On the syntax front, given itertools.TryResult and itertools.try_all, the one candidate for potential syntactic support would be TryResult.try_call(next, itr), which could be more conveniently written as try next(itr). try expressions would always produce TryResult objects in that hypothetical future (with itertools.TryResult becoming a backwards compatibility alias for types.TryResult). AsyncTryResult.try_await(aw) would be replaced by try await aw.

Any such syntax proposal would likely need to come after a proposal for itertools.try_all had been accepted, though.

1 Like

I think it would fix it, yes. Take promises from the Javascript world: before they existed, the kind of stuff they were designed to represent had to be handled through callback functions passed in via an API, set to some kind of property, or handled with some third-party promise-like library (each of which has its own quirks and API).

Now, when promises were first introduced this didn't immediately eradicate all callback-based code, but over time, as people realised that promises were the tool designed to represent the thing their APIs were handling, they adopted them, and now whenever using an async API nobody needs to think about the exact API signature for handling an async result.

You can see this with Futures in python: some libraries still have the old callback-based way despite futures and await being the equivalent for async code.

I know this isn't a perfect analogy, but this proposal is similar to promises in the sense that right now a certain type of pattern requires you to always hand-roll or define custom types to represent it, just as pre-promise javascript required you to hand-roll a custom callback API (even if it followed a convention), purely because of the lack of a well-defined convention for python.

Once the convention is established people will adopt it.

This proposal wouldn't replace or prevent these libraries from returning these types; it just allows them to wrap these already existing types in a way that can more easily indicate to the end user that an error that needs to be handled has occurred (even if the user decides not to handle it).

From the library developer's perspective, adopting promises in javascript was easy to do since it mirrored the old callback APIs (i.e. call the success callback for success and the error one for an error), allowing a smooth migration for those still using the old callback-based approach while supporting promises for those who wanted them.

Results would be the same: those who still want to grab the response objects still can, but those who want to switch to results can do so incrementally. The code generating these response objects remains unchanged, just as the callback-triggering code did in javascript with promises (and it potentially could be even easier in python through the use of decorators tasked with wrapping function/generator output into a result).
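
For instance, a wrapping decorator could be as simple as this sketch (assuming a Result type along the lines sketched earlier in the thread):

import functools

def returns_result(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return Result(value=fn(*args, **kwargs))
        except Exception as exc:
            return Result(error=exc)
    return wrapper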

In that sense, @Liz's approach would fit this kind of ethos.