PEP 724: Stricter Type Guards

I don’t think it’s the job of the typing community to decide whether I’m writing my Python code in a way where strict or non-strict type guards are a better model. I think its job should be to provide constructs that permit the way people actually write Python to be captured by the type system.

There are situations where adapting to the type checkers has made sense, because the failure reflected an actual ambiguity in the code – namely, a function can return many types, callers should check what they get back or use a more targeted function, or else they risk a failure with unexpected input.

This case is not at all the same. This is telling a developer who has written a function that is well-described by a weak type guard that, no, they should be writing functions that are well-described by strict type guards because most people’s use cases for type guards call for strict ones. Using a PEP to micromanage the decision of how to write if/else statements seems wildly inappropriate, no matter how small the number of refactors you will be forcing is perceived to be.

5 Likes

What do you mean by “Python code”? We’re only talking about typing code, right? And surely it’s the job of the typing community to decide about typing constructs.

You can write whatever condition you want in Python before and after PEP 724. After 724, you would only be able to use type guards in a strict way. I don’t really understand the pushback against that decision?

That hypothetical developer can simply change his return type to bool or refactor his code? That’s a small price to pay compared with the alternative, which is to have both strict and non-strict type guards for a few years while non-strict type guards are deprecated, and then finally removed. Why go through all of this trouble for a vanishingly small amount of code that must use non-strict type guards?

The if/else statements haven’t changed though. The code runs exactly as it used to.

Several people in this thread are arguing that they should not be removed.

This is precisely the issue, right? Some of us want more stability even at the expense of adding more constructs and complexity.

The typing maintainer community generally wants the freedom to make these changes because they see them as better for the long-term health of the typing components of the language. But that’s not aligned with what a segment of the user community wants, which is for typing semantics to prioritize stability more highly.

If we’re going to talk about only the practical side of the matter, TypeGuard is probably fine to change in place, as PEP 724 proposes, and it’s no worse than other changes happening today. It wouldn’t impact much code, as far as we can see from a very limited scan of open source projects. But there isn’t a very good rule right now for deciding what is okay to change and what isn’t, and this is part of a pattern of behavior which used to be fine but which I don’t think is long-term sustainable.

I’d like to see the attitude of typing shift, more towards stability at the expense of “cleanliness”, which is more like the stdlib. Try proposing a behavioral change to a stdlib function and you’ll probably be told “no” – even if your proposal would be an improvement for most users, the stability requirement for the stdlib is very high.
Changes are still made, but not without really good justification.

3 Likes

I’ll start by noting that I think the past several posts (mine and others’) are more about the typing system as a whole, and would fit better split into a separate topic (Typing Stability/Documentation) than under this PEP; they could be moved to the new Typing discussion area. It’s a good discussion, and I think the stability of typing and the expectations for typing PEPs are important, but those questions apply to all typing PEPs, not really this one in particular.

The main issue with this is that many of these “advanced” behaviors were never fully worked out. The PEPs and documentation we have today were designed around the core ideas and goals of each feature. Details are often incomplete, and a truly complete spec would look closer to a research paper and require far more formality. Typing has historically valued pragmatism: it’s better to enable safer code and easier typing features even while the full specification remains incomplete. If typing PEPs feel too detailed today, they are in fact too light on rules and details for the goal of consistency and stability.

I’ll use one of your examples too as it fits well.

I will use whatever tools typing makes available to me to accomplish that goal. I’ll use a Protocol with __call__ instead of a Callable to express keyword arg types, because Callable can’t do it. Does that make me “advanced”? I have some object which accepts a callback, and the callback signature has kwargs. Sounds pretty ordinary. If we want to call expressing that case “advanced” then what realistic large programs don’t have advanced types?
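The pattern being described, a Protocol with `__call__` standing in for a `Callable` so that keyword arguments can be typed, might be sketched like this (names are illustrative, not from the original post):

```python
from typing import Protocol


class OnProgress(Protocol):
    # Callable[[int], None] cannot express a keyword-only argument;
    # a Protocol with __call__ can.
    def __call__(self, *, percent: int) -> None: ...


results: list[int] = []


def record(*, percent: int) -> None:
    results.append(percent)


def notify(callback: OnProgress) -> None:
    # A type checker verifies that `record` matches OnProgress.__call__,
    # keyword name and all.
    callback(percent=50)


notify(record)
print(results)
```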

Using a Protocol as a callable is known as a callback protocol. This usage is not defined by any PEP or by the standard library documentation, and the rules for callback protocols have evolved in backwards-incompatible ways over the past year or two. Expecting stability when you are relying on a feature that was never specified in a PEP or the standard library seems difficult. I think mypy has documentation on this, while pyright does not and discusses the details in GitHub issues. For many features users rely on, like callback protocols, the typical usage can reasonably be inferred, but the details are undefined behavior and inconsistent across checkers.

This example is also apt because mypy’s behavior on Callable types and kwargs changed in the past couple of days: motivated by a user request, it is about to start allowing TypedDicts to be unpacked there. That change is safe in the sense that it adds a new feature, so existing code with no errors will continue to have none. But it does mean code that had errors before will no longer have them, for a type feature never defined or decided by any PEP process, and it is another inconsistency around Callables.
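For context, the PEP 692-style TypedDict unpacking that this Callable change builds on looks roughly like this (a sketch; the names are illustrative, and the Callable-type support mentioned above is checker-specific):

```python
try:
    from typing import TypedDict, Unpack  # Python 3.11+
except ImportError:
    from typing_extensions import TypedDict, Unpack


class SaveOptions(TypedDict):
    path: str
    overwrite: bool


# PEP 692: **kwargs typed precisely via an unpacked TypedDict, so a
# checker knows exactly which keyword arguments are accepted.
def save(**kwargs: Unpack[SaveOptions]) -> str:
    return f"{kwargs['path']} (overwrite={kwargs['overwrite']})"


print(save(path="out.txt", overwrite=True))
```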

I think typing works in practice today because rough rules, with details adjusted as user reports appear, have been successful. I’ll often report “bugs” where the issue ends up becoming a discussion over what the right behavior is, or whether the PEP was even clear.

Edit: I also view type checking as closer to pylint/ruff checking. The runtime behavior of your code should not change in backwards-incompatible ways, but the inference rules and type errors you see from mypy should carry the same expectations as pylint/ruff. If pylint changes its rules to be smarter in some way, that is not expected to go through any PEP process; code linters are allowed to make changes as their maintainers find reasonable. And if a user runs pylint, a library change can influence pylint’s analysis results, just as with a type checker. The main difference is that pylint is generally laxer than mypy/pyright, as it has less type-inference knowledge.

2 Likes

In my opinion, we still don’t have one good example of non-strict type guards that would not be better written using strict type guards. Do you have an example from real code?

I totally agree.

Fair enough. I think that typing is not like the standard library since:

  • typing does not normally affect code runtime (except in runtime introspection) and changing type errors is much lower stakes,
  • type checkers can be pinned to a version even as python is upgraded, and
  • typing is in its infancy and really benefits from being able to swiftly correct design errors without long and labor-intensive deprecation periods.

Anyway, I think we should probably move this discussion to the governance thread. What do you think?

1 Like

@ntessore’s is_small_array is a fine example, IMO. I don’t know if it’s from “real code”, but it certainly looks like something that could be written in real code.

As @sirosen said, the goal here should be to describe to the type system the behaviour of existing code. So please don’t repeat the suggestion that is_small_array should be refactored - that’s putting the cart before the horse. Type annotations and type checkers are there to validate if your code is right, not to force you to write your code a certain way (whether or not that way is easier to prove correct). Maybe it’s not possible to do this in every case - some code uses runtime features that can’t be easily expressed in terms of static types, and that’s fine - but the existence of the current TypeGuard demonstrates clearly that this isn’t true in this case.

So we have an example. Are you now going to say that isn’t sufficient to question the idea of simply removing the current behaviour? If so, then what is your criterion for accepting that removal is going to cause problems for some users? Do you want two examples? A hundred? The PEP itself says that only 25 code bases were checked to ensure that they wouldn’t be affected by the change. Even one example that would be is equivalent to 4% of the test population. Is something that has a 4% chance of causing a problem acceptable? Yes, I know this is a silly argument. I’m trying to point out that the whole “how many examples can you come up with to support your case” argument is a bit silly - precisely because typing is now so widespread that getting meaningful samples has become essentially impossible…

3 Likes

One relevant piece of history: TypeGuard is closely based on TypeScript’s type guards. TypeScript had type guards years before Python and made the opposite decision here, supporting only strict type guards. There is an open TypeScript ticket for non-strict (Python-like) type guards that has real examples. The push to flip TypeGuard’s behavior comes from user feedback: “bug reports” are filed by people expecting strict behavior, surprised that the current behavior is what the spec stated.

The main cost of supporting both is user confusion and added complexity in the TypeGuard documentation. I believe pyright did implement both strict and non-strict type guards in the past, so supporting both is definitely feasible. If we do support both strict and non-strict type guards, which one should get the name TypeGuard by default? Backwards compatibility says TypeGuard stays non-strict; user expectations push toward making TypeGuard strict and adding a separate LaxTypeGuard.

I was curious, so I audited my own codebase’s usage of TypeGuard. There are about 20 uses, and I spotted 1 for which LaxTypeGuard would be correct. The code is:

def check_concrete_type(t: type[T]) -> Callable[[type], TypeGuard[type[T]]]:
    def _concrete_type(cls: type) -> TypeGuard[type[T]]:
        return inspect.isclass(cls) and not inspect.isabstract(cls) and issubclass(cls, t)
    return _concrete_type

So overall, I think lax type guards definitely have good evidence for their existence (the TypeScript issue being the best list), I lean toward negative narrowing being the better default for common usage, and personally I’d be fine with adding either Lax- or StrictTypeGuard. I feel that adding LaxTypeGuard and steering users toward TypeGuard is a better fit for expectations than adding StrictTypeGuard and adjusting the documentation to highlight strict first and more clearly.

4 Likes

I agree with your other two notes, but I disagree with this last point very, very strongly.
Typing has gone mainstream. Maybe pydantic and FastAPI were what put it “over the top”, maybe it was the addition of __class_getitem__ for the builtins in 3.9, maybe something else… Whatever it is, python typing has “hit it big”. Everyone who’s anyone is using it, watch out!

It’s still young relative to the stdlib or some other software projects, but it’s not very new anymore. Modern annotations have been widely available since python 3.5, so that’s 8 years of history.

I’d be happy to do so; certainly some of this is high-level directional stuff which is not specific to this PEP. I’ll see if there’s any useful contribution I can make on that thread.

On the other hand, as pertains to this PEP, the long-term view I’m promoting has some short term impact. At some point, the changes need to slow down. Is TypeGuard a strange hill to die on for this? Sure – as far as I’m concerned, I didn’t choose it. We’re talking about it because it’s the PEP which is on the table today. It could have been any proposal for a backwards incompatible change that got caught in this discussion. But we really need to start somewhere or we won’t make progress on stabilizing the behaviors.

For me, adding LaxTypeGuard and changing TypeGuard seems like it’s addressing some of the concerns – “can functions like X be described by the type system?” – but it punts on the bigger question of how typing can evolve to become more stable.

I 110% agree with you that TypeGuard being strict is a better default, and the naming would long term be better if we had TypeGuard/LaxTypeGuard.
But is it better by a wide margin, vs StrictTypeGuard/TypeGuard?

Given that StrictTypeGuard has the added benefit of being fully backwards compatible, and that I think TypeGuard/LaxTypeGuard is only marginally better, I can’t help but favor StrictTypeGuard.

1 Like

The question that I have about this is, if this is the case, why are any of these changes being made as PEPs rather than just the tools working them out on their own? The way I see it, by the time things get to the stage of a PEP (which may do something like alter the CPython docs), the dust should pretty well have settled. This makes typing seem like a moving target and likely contributes to the perception that various typing constructs are “advanced topics” (because using typing means you have to stay abreast of whatever the latest changes are).

1 Like

Absolutely. The idea that “rapid change is acceptable because typing is still developing” is, in all honesty, dangerously naïve. To give a very small example, I work on pip, and our code base is fully type checked. That’s purely internal, as we don’t expose a programmatic API, so changing our type annotations affects no-one but ourselves. But we use mypy in our CI, and if a type check were to fail, that would be a blocking issue that would prevent PRs from being merged[1]. If a check breaks because mypy changed something in a way that made our previously correct annotations now fail, that would require work to deal with it from an already extremely under-resourced maintainer team.

I just checked, and we have one use of TypeGuard in pip:

        def _should_install_candidate(
            candidate: Optional[InstallationCandidate],
        ) -> "TypeGuard[InstallationCandidate]":
            if installed_version is None:
                return True
            if best_candidate is None:
                return False
            return best_candidate.version > installed_version

It’s a local function with logic tied into variables in the surrounding code, and I have no idea how I’d change it, if it broke because TypeGuard suddenly started narrowing to None on a False return value. I think the code doesn’t care, but I can’t be sure. What I would almost certainly do is to change the return type to bool, effectively giving up on TypeGuard as a useful feature (at least in this context, and probably to a small extent in general). Is that really the result we want from this PEP?


  1. Yes, we can override the check, but we typically don’t. ↩︎

That may be a goal, but in practice code does accommodate the type checker. For example, you have to use LBYL with instance checks rather than branching with exceptions (EAFP) if you want the type checker to distinguish types in branches.

As for type guards, some patterns of usage will work, and others won’t. The point of this PEP is to allow new patterns to work, and make some old patterns incorrect.

I never said anything so extreme, so I really don’t understand this hyperbole. Let’s try to stay level-headed please.

I don’t consider it a great example because, as I said, it looks like it should have been broken into two pieces—regardless of typing, but just from the principle of the separation of concerns. But yes, it is an example. I think Mehdi’s example is much more convincing.

I don’t think it’s impossible. Maybe more projects should be added to the primer? Surveying type-checked code is an important tool in deciding type change impacts. Personally, I think it’s a lot better to gather information than it is to just imagine what code might be out there.

I think I should clarify what I meant: I’m not saying that no one is using typing. I’m saying that typing is in its infancy relative to all of the features that the typing community is waiting for. Python could stop being developed today, and I could still productively use it for a decade without wanting to switch language. Typing has come a long way, and does amazing things, but there are still features that we are desperately waiting for.

And this is where my motivation lies. All changes need to balance the costs of the change:

  • induced maintenance work (refactoring, etc.),
  • induced false positives that break CI, and
  • induced false negatives that hide bugs that would otherwise be found

against the benefits of the change:

  • providing typing expressiveness,
  • making typing easier to understand,
  • repairing false positives and negatives, and
  • perfecting the future world of typing for future users.

So when I think about this PEP, I see a very tiny amount of induced maintenance. We have a couple of examples so far that might need minor tweaks, mainly to prevent false negatives (which are not a huge deal).

The benefits are significant: strict type guards are more useful and easier to understand, they repair a number of flaws in typeshed (e.g., isawaitable and is_dataclass), and they make the future of typing significantly better for future users.

In case you haven’t seen the linked issues that were added to the PEP, here they are:

Many people have been asking for strict type guards. Of course, that doesn’t prove that there aren’t just as many people who are perfectly happy with lax type guards. It does seem, though, that in the standard library there is a need for strict type guards, but not for lax ones.

That’s why I like Mehdi’s proposal best. I like PEP 724’s proposal to make TypeGuard strict. Then, if there’s enough demand, we could consider adding lax type guards with a clumsier name (or flag). This will steer users towards the strict type guards, which is probably what they want. I feel like future users will thank us for making their lives easier.

I realize we may not agree on this. I know that I tend to put a lot of weight on the future. But hopefully you understand my point of view.

2 Likes

Speaking as someone who has maintained large code bases for Jupyter in both TypeScript and Python as they’ve evolved their typing features, we have long taken the stance of pinning the version of TypeScript/mypy until we are ready to adopt new features.

In both cases we are starting from an untyped system and trying to bootstrap a typed system, and things will not always go smoothly. The difference between TypeScript and Python is that we don’t necessarily control which type checker will be used by both maintainers and consumers of the libraries.

Perhaps what is required is a declared “requires-typing” version, similar to “requires-python” in pyproject.toml, where a library explicitly says which version of the typing spec they adhere to, and that could be honored by both IDEs and CLI tools, both within the library and from downstream consumers.

4 Likes

Apologies, but I genuinely don’t know what it is you are trying to say. Multiple people have said that they have code that uses the existing semantics, and very few people have said they have audited their existing usage to confirm it will still work under the changed semantics. I’m genuinely baffled as to what you’d need to accept that there’s a case for keeping the existing semantics - I’m not even asking you to agree, just to accept that there’s a case for it.

As a part of the “typing community” the key feature that I would like at this point is stability. And I agree that typing is in its infancy as far as that is concerned…

Obviously, and I didn’t say otherwise. But it’s important to acknowledge the limits of what you can find. We’ve struggled with that in packaging. There is a huge amount of usage in closed-source code, and it’s often using features in different ways than you see in open-source projects. I have no reason to believe typing would be any different in this regard.

I think you’re optimistic about “minor tweaks”. In the example I gave from pip, I’m not even sure I’d know how to confirm whether it still works with strict type guards, much less how to refactor it.

We definitely don’t agree. As to whether I understand your point of view, all I can say is that it seems to me that you don’t value stability (and in particular, the stability of documented features) as highly as I do - if that’s not how you’d describe your point of view, then I’m afraid I don’t understand.

Also, I find your assertion that you “put a lot of weight [on] the future” very odd. I would say that I put a lot of weight on the future, and for me that means setting down firm roots, making sure the foundation of the typing ecosystem is stable, well-documented, and reliable. Once those foundations are achieved, then building new features on top will be far less disruptive to the community, and we all benefit.

2 Likes
def _should_install_candidate(
    candidate: InstallationCandidate,
) -> bool:
    if installed_version is None:
        return True
    return candidate.version > installed_version

if candidate is not None and _should_install_candidate(candidate):
    # install the candidate

In the absence of any other context, if I were writing something like this, I would accept the hard type (not None), and do the null check separately.

1 Like

I generally agree with @NeilGirdhar’s view. I think there are very different stability expectations for a runtime library versus a static analysis tool. I consider it important that Python’s runtime type behavior is stable. The type checker’s inference behavior and which errors are shown, I view with the same expectations as pylint/ruff/similar tools. I expect all static analysis tools to be pinned to one version in CI; pip similarly pins mypy==1.0.1. pip’s type errors are not the same on the latest version of mypy as on the pinned one (very similar, differing by ~3 errors), and they differ more if you run a different type checker.

My code changing behavior and failing is categorically different, to me, from my linter/type checker adding 5 new error messages in a release. Often I’ll look at a new error, decide it makes sense, and correct it, or disagree (or remain unsure) and maybe type: ignore it to revisit later. A few type-ignores is pretty normal, and while a heavy amount of ignores/casts is bad, Python’s type checking is not an all-or-nothing affair for value. If most of my code passes type checking and a few places don’t, which I ignore or revisit later, that’s still value for me.

My experience with other static analysis tools like pylint is that it is common for their errors to change across versions. Pylint was in use for years before mypy, and even now I think pylint/flake8 usage is pretty comparable in magnitude (maybe larger, maybe smaller) to mypy’s. Should those libraries also have strong stability expectations and be involved in PEPs for changes? Pylint and similar tools also commonly have IDE integration.

For a packaging comparison, I’d consider mypy/pyright’s exact errors closer to poetry/hatch’s configuration choices. Packaging libraries share common standards, but they also have the freedom to make many of their own maintenance decisions: poetry deciding to deprecate a config feature in a month versus a year is its decision. The same applies to pip, which follows its own deprecation policy separate from the Python deprecation policy. Mypy similarly documents itself as not following SemVer because,

Mypy doesn’t use SemVer, since most minor releases have at least minor backward incompatible changes in typeshed, at the very least. Also, many type checking features find new legitimate issues in code. These are not considered backward incompatible changes, unless the number of new errors is very high.

Quoted directly from the mypy release notes. Expecting type error stability across versions directly disagrees with mypy’s maintenance policy, and feels similar to saying that packaging libraries should have a deprecation policy identical to the Python language’s.

The other aspect is that, in practice, much of the type-checking error instability a user sees does not come from type checkers; it comes from library stubs/types. Many Python libraries’ type hints are incomplete, and as they evolve it is expected behavior for type checkers to report new errors. In the numpy/pandas/matplotlib data science ecosystem in particular, the types are in a lot of flux right now; it’s normal for updating your matplotlib version to change the type errors reported. And as a user, whether backwards compatibility is broken by the type checker or by a library’s types changing feels very similar: either way I need to review new type errors and decide to adjust my code or ignore them.

edit: Another aspect is that mypy’s policy is one of several; each type checker has separate maintainers with some overlapping goals, but also separate ones. One type checker may value stability more highly than another; another may choose to update more frequently and have a different deprecation policy or standard for what counts as a reasonable change to errors. A PEP does not feel suited to deciding the maintenance policy of multiple libraries released separately from the Python language, some by fully separate owners.

4 Likes

I don’t really see why that should be the case. A static analysis tool is basically just an application whose data is a program’s source code. Stability for such a tool is just as important as for any other tool, like a command-line tool that, say, converts BMP to PNG, or computes lexical statistics on a text. The thing that, in my mind, implies greater stability is the fact that something is in the stdlib. That would mean that something like mypy is free to evolve more rapidly in terms of what typing constructs it handles, but that the stdlib shouldn’t attempt to keep up with that.

I agree. This to me is the fundamental issue, and is one reason I have an overall pessimistic view of the various typing changes in Python. My perception is that the increasing spread of static typing is leading more people to spend more of their time trying to please the type checker, looking for new ways to write their code in order to chase some perceived benefit of being able to use a particular typing feature. The type checker is often not easing the work that people are already doing, but causing them to add an additional kind of work (typing-specific code gymnastics) to their load. This workload then spreads to everyone who has to interact with such code, let alone contribute to projects using typing, because it becomes part of everyone’s expectations that any work on writing Python will include some nonzero amount of typechecker wrangling.

I do wish we could stick to the original notion, which is that everything related to static typing is 100% optional in Python. To me that means that typing-related considerations should never have any influence on how code is written or what it actually does; it’s purely a convenience layered on top. It means that any proposal that envisions people refactoring their code to please a typechecker is prima facie misguided. If people want to do that, it’s their choice, but nothing in the official documentation, a PEP, the stdlib, etc., should contain even a whiff of a suggestion that anyone should ever do that.

2 Likes

It seems there are two distinct camps here:

  • Backwards compatibility is sacred for Python, so it should also be sacred for the type specification.
  • Backwards compatibility is not sacred for the type specification, and breaking it here outweighs the downsides for this specific case.

To move this forward, is the addition of StrictTypeGuard (instead of changing TypeGuard) more palatable?

Eric Traut had proposed StrictTypeGuard in the thread that the PEP was based on, but the responses had a slight preference for breaking backwards compatibility. I don’t believe anybody really opposed StrictTypeGuard though.

I think the responses so far provide strong evidence against the current proposal of simply changing the meaning of TypeGuard. Which is sad, because it seems clear to me that the newly proposed semantics are better, and it’s confusing for users to have two objects that do almost but not quite the same thing. However, for a feature specified in a PEP, maintaining compatibility is important.

I don’t like the name “StrictTypeGuard”. “Strict” can mean a lot of things, and it’s not particularly obvious that the new behavior is more “strict” than the existing one. It’s different, but not necessarily more strict.

If we add a new object, I think it would make sense to make it support only the “strict” version, in the terminology of the table in PEP 724 – Stricter Type Guards | peps.python.org. That is, we would require that the StrictTypeGuard return type R is consistent with the StrictTypeGuard input type I. Users who want the “non-strict” behavior would simply continue to use TypeGuard.

If so, we could say that the new construct always narrows the type to a type that is narrower than before. Could we come up with some name that incorporates this concept? Perhaps TypeNarrower.

6 Likes

I never said that there “wasn’t a case for it”. Just like you, I’m considering the case of keeping the existing semantics, but I personally think they should be removed. (More on why below.)

Maybe we should ask the PEP writer to add a migration guide? Ultimately, any flavor of type guard can be rewritten without type guards. They are typing sugar. The current type guard can always be rewritten:

def is_u(val: T) -> bool: ...  # defined as before, but returning a plain Boolean.

and then used as follows:

def f(val: T):
    if is_u(val):
        u = cast(U, val)
        # Use u instead of val from here on to get the ``U`` type you wanted.
    else:
        # val is unchanged, as desired.
        ...

Let’s examine the future under your proposal of keeping TypeGuard versus the future that’s suggested by the PEP and pretend that we’re a new user who has to choose a type guard for a function that she’s writing, and ask which future is a better one to live in.

First, some background on the various type guards:

We have the current TypeGuard:

def is_u(val: T) -> TypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``U``.
    else:
        # Type of ``val`` remains as ``T``.

Now, the PEP 724 “strict” TypeGuard:

def is_u(val: T) -> TypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``T & U``.
    else:
        # Type of ``val`` is narrowed to ``T & Not[U]``.

Note that this is extremely logical since it can work exactly like an instance check for U.

And the proposed LaxTypeGuard:

def is_u(val: T) -> LaxTypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``T & U``.  Note the difference!
    else:
        # Type of ``val`` remains as ``T``.

Now let’s compare the benefits for a new user in each future.

In the “stable future” that you’re proposing I guess you want to keep the current type guard, and add the strict type guard? In that case, the stable future has the following problems:

  • There is doubt about which type guard is required, which requires a deep understanding of the documentation.
  • Using the current type guard requires learning a new reasoning pattern that is unlike instance checks.
  • The current type guard has a lot of surprising behavior, judging by all of the bug reports against it. In particular, it does not narrow T (it replaces it with U in the positive case). This will probably necessitate adding LaxTypeGuard, and maybe deprecating the current TypeGuard anyway.

The “progressive” future that I’m proposing would have the strict type guard only. Thus,

  • There is no doubt about which type guard is required, which guides new users to the obvious choice.
  • Using the type guard can work exactly like an instance check, which makes it easy to understand.
  • If there’s a need, a LaxTypeGuard can be added. It has the benefit of mirroring the strict type guard–without the surprising behavior. It’s a bit trickier to desugar than the current type guard, so the case for adding it is stronger too.

These are two futures that I was comparing, and this is the basis for my motivation. I understand the desire to mitigate upgrade pains, but I think they’re outweighed by the benefits of creating the better future.

2 Likes