Options for a long term fix of the special case for float/int/complex

This topic does not seem to be going in a productive direction. Two users are monopolizing it with their arguments/discussions, and do not seem to be reaching any conclusion after a full day of posting back and forth. If you find yourself doing this, stop, step back, and reply after a few days.

16 Likes

With some hesitation, I’d like to offer my 2 cents.

My understanding is that the discussed matter (as per this discussion forum) only impacts typing, not runtime. Recognising backward compatibility worries, it will ‘only’ cause type checker errors and failing CI/CD pipelines, not change the ability to run the code or its outcomes. A well-placed # type: ignore comment could provide a (temporary) fix.

With that understanding, my first cent:

Changing the interpretation of the typing annotation float from float | int to mean StrictlyFloat (or however that would be written) reminds me of the time the interpretation of a parameter annotated SomeClass = None changed from SomeClass | None to mean only SomeClass.

That didn’t cause too many issues, IIRC, and I believe mypy had (has? cannot check because on mobile) an option to suppress flagging (i.e., retain old interpretation).

We could follow the same transition approach. Perhaps there are more float annotations needing change to float | int than there were SomeClass annotations needing change to SomeClass | None. But I’d wager they’re easier to identify and occur more clustered.

The remainder of my 0.02€ capital:

If it’s decided annotation float will remain meaning float | int, and we’d therefore need type differences before we can express type StrictlyFloat = float - int, can we not ad interim already give that meaning to StrictlyFloat?

That could be either by a special rule, or by a Protocol using float’s only unique method hex().

That way (library) authors/maintainers could already discern between meaning float (i.e., float | int) and StrictlyFloat, without needing to wait for type differences.
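To make the Protocol idea concrete, here is a minimal sketch. The name StrictlyFloat and the use of runtime_checkable are illustrative assumptions, and note that the structural check is loose: anything else with a hex() method (e.g. bytes) would also match.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class StrictlyFloat(Protocol):
    # float.hex() exists as an instance method; int has no .hex() method,
    # so ints fail this structural check while floats pass it.
    def hex(self) -> str: ...

assert isinstance(1.0, StrictlyFloat)
assert not isinstance(1, StrictlyFloat)
```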

Either way seems a navigable route forward to me. Should I overlook something, please don’t take my shortsightedness for bad faith.

9 Likes

The primary objection a few people have had is that this would break existing typings.

So here’s a few points on that, and a question that I think is very important.

  • This won’t break runtime.
  • This will only break typings that rely on this special case. One can argue this is not breaking something, but surfacing an unsound assumption to users that already were relying on something unsafe.
  • This will bring typing to be accurate to runtime.
  • This does not require waiting for new type features that may never happen, or might happen in a way different from how we are envisioning them in this thread.
  • Any fix for the underlying problem will require typings be updated
  • According to Jelle, while there was a lot of disruption, a very large portion of it was solvable with a naive codemod
  • This fixes a point of unsoundness in the type system.
  • This fixes that right now people can’t accurately express their intent with a very basic type.
  • It would be possible to do this in a coordinated manner over time to avoid users being unaware

With all of those points in mind, and that the primary objection has been that this would be “too disruptive” and breakage in general, what’s the policy for breakage with typing, especially if that breakage is to fix an issue?

I can’t find an existing policy that seems to apply to this case, as this doesn’t break anything at runtime or anything included in CPython, and this fixes an issue with the type system and makes it simpler in the process.

If the unwritten (or just one that I couldn’t find) policy is that we can never fix these cases, then I have some objections to how many other discussions about typing have been treated to say that incremental change that isn’t fully sound is actually okay because we can continue improving it, as the two ideas seem to be incompatible.

9 Likes

Generally it is based on mypy and pyright primer runs and whether the behavior was documented. Undocumented type system behaviors with minimal primer impact are generally acceptable changes. If a behavior is documented, changing it often requires a PEP even when the impact is minimal. The recent TypeGuard-related work with TypeIs is one very good example: the existing TypeGuard semantics were documented, and while they were surprising for many users and the primer impact was mild, due to backwards compatibility it was decided they were not changeable.

This specific case is both explicitly documented in PEP 484 and has significant mypy primer impact. Either one of these alone would be enough to make it problematic to change without a deprecation process, which is difficult to do here.

So I consider option 1 to be very hard to do with a reasonable deprecation path that does not create a ton of noise for most users. Other options (4 mainly) I’m more comfortable with.

As an example of an area that has more flexibility: overload resolution. It is mostly undocumented today, and new overload rules with mild primer impact would likely be a more acceptable change without a deprecation period.

3 Likes

Speaking as a library implementer, it honestly surprises me to know that anyone would ever write float to mean float | int, runtime behavior notwithstanding. In our projects we already always write float | int when we intend float | int.

8 Likes

Is the process linked in the post you responded to not sufficient? If not, what policy states that it isn’t?

Right now there’s another change to the behavior of this special case being considered which changes the meaning of PEP 484. This change has not received the same backlash from the same people. Some people are phrasing it as only a clarification, but it changes the behavior, and the thread has examples of it being or enabling a behavioral change in the very first post.

It was also determined that TypeGuard wasn’t incorrectly specified; users just wanted something with different behavior. The better parallel was already shared by someone else: implicit Optional for function parameter defaults, which was changed to match what the user wrote rather than infer something beyond what they wrote, in a way they could not opt out of.

The float special case on the other hand is just wrong. Int isn’t a suitable replacement when expecting a float, and the shortcut that exists in the type system is unsound.


Unfortunately, due to the slowmode still here, I need to post all ideas, even unrelated ones at once:

I did find a place where we can choose to be more lax:

Some type checkers early bind on inference

x = 1
x = x / 2

With the changes suggested in option 1 here, type checkers that early bind on inference of numeric types might want to change how their inference works: either not early bind, or early bind in a permissive form while still allowing a more specific annotation to be respected.

2 Likes

It looks like you maintain bokeh (thank you for your work! I’ve used bokeh, and it’s been very useful). There are many places in bokeh that rely on the current int/float promotion; you can see those in the output of Experiment: Remove int/float promotion · python/mypy@d8af8fc · GitHub.

For example, here is a function in bokeh that has several parameters with a type of float but a default of type int: bokeh/src/bokeh/driving.py at 563ac0e85a48374d378f5d8cec163fdb288659b5 · bokeh/bokeh · GitHub. Here’s an example where bokeh passes an int to a parameter that is marked as accepting only float | None: bokeh/src/bokeh/colors/color.py at 563ac0e85a48374d378f5d8cec163fdb288659b5 · bokeh/bokeh · GitHub.

These places in bokeh work fine at the moment. Changing the float/int behavior in the type system would mean that lots of people maintaining working typed code bases would have to make changes that in most cases do not fix a real problem.

10 Likes

I think you are missing one very important point that Mehdi also alluded to. I.e. it was documented behavior for a very long time. A not insignificant number of people that work with typing have written type hints with this special case either consciously or unconsciously[1] in mind, have come to rely on it and it has become part of their muscle memory. I very much fear the amount of tedious bug reports this will cause where people forgot to add | int to a function, consuming people’s time and patience. I think most people already have fairly low patience for type hints, let’s try to not make that worse.

Same goes for the tooling that has evolved alongside the type system. If a linter keeps complaining to you about int | float because it is technically redundant[2], just like a linter would if you had redundant entries in an isinstance call or an except clause, you will internalize that rule and it will be difficult to unlearn now, especially because tooling can’t really reliably help you go the other way, now that you’ve written float instead of float | int[3].

I think the type system was being pragmatic with this special case and while it causes some false negatives, there are very few places where it will cause a false positive. False negatives are always easier to justify than false positives, especially to typing novices.

While I agree that this case introduces an annoying unsoundness into the type system that wasn’t really necessary at the time, it is much harder to get rid of it now that it has been there for all those years and you’re going to upset people with the amount of tedium it causes if you try.

Is it really worth it? Your motivation makes sense to me: it can lead people to write worse, slower implementations in numeric code that interfaces with native code where speed matters, just to match the static behavior and avoid a false negative. But it would be even faster to make the API use something like ctypes with more or less zero conversion overhead instead. That way you don’t make it the responsibility of the library to support all the various Python numeric types in every function and can instead provide a set of converters to create your fast numeric types from Python numeric types. Some of the converters will be fast and some will be slow, but you will at least be as fast as you could be without introducing overhead into the core API, just like numpy does for numeric arrays[4].

Introducing a new spelling that means “really just float” comes with its own pitfalls and maintenance burden, but overall I think the transition could be a lot more gentle and gradual if the recommendation is to initially avoid using it in parameter annotations as much as possible.

It’s also worth noting, that while mypy_primer provides helpful intuition of how difficult it is to change something, it is only the tip of the iceberg[5] and while it is trivial to transition all of typeshed to the new semantics it will be much more challenging to do this in every other code base. Your transition plan could ease that pain significantly, but I’m still not fully convinced it’s worth the hassle. You are still asking a lot of people to invest their time and effort into fixing this one relatively small annoyance.

Since the new interpretation of float is just syntactic sugar in annotations[6], rather than a full blown special case, I also don’t really buy that this would cause downstream issues in other typing features. It just means that there are a couple of nominal types you currently can’t directly express, which is annoying, but also not that big of a deal, it certainly doesn’t block other work.


  1. Python as a language already encourages the pattern to pass int to functions that accept float ↩︎

  2. and some do, e.g. flake8-pyi and the corresponding rule in ruff ↩︎

  3. save for a few obvious places where the default value is a literal int ↩︎

  4. in numeric code you often calculate the same thing over and over, so I don’t quite get why you wouldn’t use array types here in the first place, optimizing scalar numeric algorithms seems kind of foolhardy in Python, not that I’m accusing you of doing so, your arguments just don’t make as much sense to me otherwise ↩︎

  5. there are tons of very larger internal code bases that use type hints or smaller libraries that are still relied upon by a significant number of people, but not quite enough to make it into mypy_primer ↩︎

  6. just like None for NoneType is ↩︎

4 Likes

So, I don’t feel like I’ve actually gotten an answer on this. I don’t care if the actual policy says it can’t be fixed, but I would like to know what the actual policy is since I can’t find it and people have not pointed to policy, only to past decisions. Similarly, if the policy isn’t hard written in stone, that’s itself an adequate answer if that’s expressed, but I would like to know the confines I’m working within here.

My own inclination here is that if we’re looking at past decisions rather than policy, it was acceptable to break int at runtime in a patch release when people were doing the wrong thing (trusting user input) and the impact of this was far beyond the scope. This is also a case where something is doing the wrong thing, only it won’t even break existing code, only type checking of that code and fixing this if you are broken by it is extremely simple.


No, I haven’t. I specifically have addressed this history here. Even so, if the standard here is “it’s been this way for years, so we can’t change it, even to fix it”, what does this say about incremental improvements to the type system or the ramifications of accepting any typing pep?

Will it actually be better if you tell people “Actually, you have to import this special form from typing to use float in a return type when it’s actually a float, otherwise your library can’t interact with code that only works on floats”? And what about if users are told to wait for difference types?

In the interim, they would get a cryptic type they can’t denote in some cases, and then if difference types actually happen later, they’d need to use a difference to express the LHS of that difference.

Which of these is obvious to people who don’t use typing much and would be annoyed by typing changes?

option 1, just fix it


def fn(x: float | int, y: float | int) -> float:
    return x / y

def fn2(x: float | int):
    if not isinstance(x, float):
        x = float(x)

x: float = ...
if isinstance(x, float):  # linter can warn about a useless isinstance check
    reveal_type(x)  # revealed type is `float`

option 4, add a special type for this

from typing import OnlyFloat  # adds an import cost

def fn(x: float, y: float) -> OnlyFloat:
    return x / y

def fn2(x: float):
    if not isinstance(x, float):
        x = float(x)
    ...

x: float = ...
if isinstance(x, float):  # linter shouldn't warn about a useless isinstance check.
    reveal_type(x)  # revealed type is `OnlyFloat`

vs the most optimistic case, where we have user denotable difference types

def fn(x: float, y: float) -> float - int:
    return x / y

def fn2(x: float):
    if not isinstance(x, float):
        x = float(x)
    ...

x: float = ...
if isinstance(x, float):  # linter shouldn't warn about a useless isinstance check.
    reveal_type(x)  # revealed type is `float - int`

All of these options only work if people update return types appropriately, so there’s still going to be churn here for this to play nicely. At a bare minimum, this falls flat if the unwillingness to accept that much churn includes not being willing to update the standard library types in typeshed. The majority of the issues in that primer run above are actually sourced from the standard library and would be fixed by upstream typing.

I started this thread with the title “options for a long term fix…”. While I see how people would want this being averse to breaking people, are the options above truly better long term? I don’t think they are, especially if your lens for it is people who “already have fairly low patience for type hints”.

Please do not make assumptions about my use case, especially when those assumptions contradict what I have explained at the very beginning. We treat arbitrary precision numerics differently from floats, and this is intentional. It’s not about conversion here; it’s that even “fast” operations are slower when passed the wrong type. The intent is not to convert by the time it reaches the wrapped code. Any conversion would imply a math error, or would require an assumption on my part that the mathematicians got the required precision wrong. This is about being able to express that, to catch issues with the wrong type being used earlier in the process without introducing extra runtime overhead.

Some of my work includes supporting mathematicians that only do a little bit of programming. While they will understand any option we go with here that actually allows expressing this, the ability to define functions that behave differently if they receive an arbitrary precision value, as well as functions that only accept lossy data types, is important. This allows the expressiveness of the people writing the math to translate over into code. Where keeping a certain precision is needed, we use types that preserve it; otherwise we don’t. This consequently means that some operations need to call libraries like gmp that can do arbitrary precision math efficiently, and others don’t, and get to be faster. It’s a problem when batched calculations that you expect to use float math don’t.


It’s actually a big deal for my use case. While you don’t see it that way, it’s something that has come up repeatedly for me as well as others, and recent events have caused me to decide to pick up the issue and try again. Non-denotable basic types for which there are literals are not a “good thing”.

As for introduced inconsistencies, I can think of a few immediately, the first being isinstance; while it may not strictly block future work, every special case complicates future work.


Most affected libraries in the primer run above have fewer than 5 places where they would need to replace float with int | float. It’s worth noting that primer run also found a few places where float | int would be unnecessarily wide, and they could be using int as well, which might indicate that the current status quo is causing people to give up on even trying to type their code for numerics accurately.

Nevertheless, someone was nice enough to provide me a rough draft of a codemod that could be refined and polished to help ease this (MystBin). After seeing how little most libraries would need to change, that it is feasible to automate “get me to where I was before this typing change”, and that most of the issues are in or come from the standard library’s typing causing a problem, I’m not convinced a long breaking period would even be necessary.

4 Likes

The only policy is in PEP 729, the mandate for the Typing Council, which includes:

Stable: As the type system matures, users should be able to rely on their typed code continuing to work and be able to trust their mental model for the type system. Changes should be made with care and in a way that minimizes disruption. Nevertheless, the type system should be able to evolve, and it does not make sense to use the same compatibility guidelines for type checker behavior as for Python itself.

Of course, this is not very concrete, and intentionally so. It is clear that removing the float/int special case would cause a lot of disruption. You clearly believe that disruption is worth it. Others may believe the current state is actually better.

13 Likes

In defense of the current state, as Guido mentioned, Python was designed to allow float and int to mix. One example of this design is that the hashes of equal floats and integers, or of equal floats and complex numbers, are the same.
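That design choice is easy to verify at runtime:

```python
# Numerically equal int, float, and complex values compare equal and hash
# the same by design, so they are interchangeable as dict keys and set members.
assert 1 == 1.0 == 1 + 0j
assert hash(1) == hash(1.0) == hash(1 + 0j)
assert {1.0: "a"}[1] == "a"
```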

In general, I think this is a rare example where the ideal solution is to change code rather than annotations:

  • If passing an integer to a function that accepts float is a problem for the function, then the ideal solution would be to make the function work with integers. This is consistent with other functions in Python.
  • Similarly, if a function says that it returns float, then clients currently know that they may receive an int. If that doesn’t work for them, the ideal solution is to change the client code.

In the rare cases where you must accept only a float or return only a float, then it may be worth adding extra types if we can find some motivating examples. However, my guess is that many of these examples would be better served using a numerical library like NumPy and its arrays instead.

5 Likes

As one of the people with “low patience for type hints”, as @Daverball put it, I can say that I’d be really irritated if I had to use float | int everywhere, just so that I could pass 12 instead of 12.0 when calling the function. And I’d be absolutely furious if I found out that linters objected to float | int, so I had to act as some sort of peacemaker between the linter and the type checker…

I can see why it’s more reasonable to interpret a return type of float as meaning “definitely a float, not an int”, but that is much less significant to me (I can’t recall or imagine ever writing a function that needed to declare that it returned precisely a Python float, and not something like numpy.float64). So yeah, if you focus on the return type I guess there’s an argument for changing, but IMO that’s the wrong thing to focus on.

Nope, absolutely not. That’s non-obvious and annoying (to me, at least).

Maybe. But rather than OnlyFloat, I’d imagine I’d more often use numpy.float64 or similar. Most of the time when I was returning a float I’d just use float, and I wouldn’t care that it technically allowed an int as well, because all of my arguments and variables are also (declared or inferred as) float (implying they will accept int as well) and so are compatible.

My attitude would be “good for the people who really care, they have a solution if they need it” but I’d ignore float - int as too complicated for my needs.

I’ll reiterate that I’m speaking as someone with low patience for typing complexity, but in my view I’ve never seen the current behaviour as a problem that needs fixing. It’s a theoretical wart, maybe, but in practical terms things just work, so why waste time and energy devising a “solution” that gains me nothing, but forces me to express my types in a way that feels less natural to me?

That’s fine, and I support finding a solution for your needs. But I’d suggest that your users are very much a minority, and any solution should reflect that - make the minority do a little extra work to get what they need, and ensure that the majority don’t pay a cost that offers them no benefit.

14 Likes

There’s years of issues to read through for motivating examples. To this day numbers doesn’t work well with typing, and is largely an abandoned experiment.

I tried finding a way to limit it to the input type, but @mikeshardmind showed pretty quickly that it creates a compatibility problem if the entire ecosystem is returning too wide due to automatic behavior of a type, it means you can’t have libraries work with each other unless everyone does this.

If you have a function:

def div(a: float, b: float) -> float:
    return a / b

and float means float | int, nobody can call this function if they need a float. This is going to result in drive-by typing PRs changing that last float to float - int for things that type checkers aren’t erroring on currently, creating more confusion and tension between typing and non-typing users about which libraries are safe to use.

But we also can’t make it only mean that some of the time:

def add(a: float, b: float) -> float | int:
    return a + b

add(1, 2)

there would be too many functions that looked like this for users to get annoyed at inconsistency with.

At least if people need to use unions on input parameters, they’ll get a type checker warning when they don’t, for having too narrow a type to support their intended use. There is no warning for returning a wider type than necessary, and there shouldn’t ever be one: it could be intentional, to future-proof an API.

Some people want the type checker to guess the common case so that they can do less work, and other people don’t want the type checker to guess the common case, even in functions in other libraries they use. All of the solutions other than option 1 that I’ve seen so far create an issue where the default case gives no indication from a type checker that a type parameter could cause a problem, and there isn’t a way to change that without adding false positives.

2 Likes

I care mostly about float(x) == x, because weird things can happen if that’s not the case:

import random

a: float = 18014398509481985
b: float = 18014398509481986
c: float = random.uniform(18014398509481985, 18014398509481986)
assert a <= c <= b  # AssertionError

I think you would expect to get a ValueError in this case, as there are no floats between a and b.
How would you express this? Is this even possible in the current type system?

def uniform(a: FloatValue, b: FloatValue) -> FloatValue: ...

In a follow-up chat with @mikeshardmind I’ve come up with another option that could be a good middle-ground that will satisfy both the people that like the terseness and looseness of the current float interpretation and the people that need a distinct float type to catch problems, without burdening either side with more work than they’re willing to put in.

Something that has been proposed multiple times in the past was the AnyOf construct, which is essentially a gradual version of a Union, i.e. it’s bidirectionally compatible with its members. So if we change the interpretation of float from float | int to AnyOf[float, int] and encourage type checkers to add a strictness flag to treat it as just float within the modules where we care about the precision, that should allow both interpretations to live alongside each other without causing friction between the two worlds, since they’re bidirectionally compatible.

That way we can commit to a much smaller goal of ensuring the stdlib stubs in typeshed are written with numeric strictness in mind and everyone else can gradually change over if they wish. It puts a small amount of additional burden of correctly setting the flag for the third party modules that should have it, but I think that’s a relatively small price to pay for the people that care.

It also creates a clear distinction between float[1] and float | int[2], making both annotations viable.


  1. I don’t care, but I think it should probably work with int ↩︎

  2. I’ve thought about and made sure both types work ↩︎

2 Likes

I feel a bit dumb after reading this thread. Why would introducing a type that means “only a float” mean that the whole ecosystem needs to switch? I’ll provisionally call that type real to not confuse it with possible upcoming disjunctive types.

So if we can have these properties (and I’m sure there are pitfalls):

  • real means anything that is guaranteed to be float at runtime
  • float means int | real
  • … and care is taken to keep things like isinstance(x, float) working (there’s probably something lurking here)

, then wouldn’t this enable projects that care to use internally the new real type to mean really a float and only care about conversions from int in externally interfacing code where they cannot dictate the (static) type?

Wouldn’t that be a lot better than having to worry about every function that gets or returns a float accidentally having an int in their hands?

And then if this turns out to be good, presumably linters will at some point be able to shame people who still use float.

This proposal makes all code using float annotations (that doesn’t opt in to the new strict mode) much less safe than it is today.

Today the unsoundness that has been mentioned in this thread is limited (as far as I’m aware) to the use of two rarely-used float methods (.hex and .fromhex), and to FFI / C extensions that require real float objects. Other than those specific cases, the Python language has been designed (with intention, not by accident) so that int is substitutable for float and both int and float are substitutable for complex. [1]
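To make that substitutability concrete, a minimal illustration (the function is hypothetical):

```python
def area(radius: float) -> float:
    # Type checkers accept an int argument here via the PEP 484 special
    # case, and at runtime int arithmetic mixes freely with float.
    return 3.14159 * radius * radius

# The int 2 and the float 2.0 produce identical results.
assert area(2) == area(2.0)
assert isinstance(area(2), float)
```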

With this AnyOf proposal, the unsoundness would become much, much worse. Now this clearly-wrong code would pass type checking:

def f(x: float) -> int:
    return x

That seems like a lot to give up. It’s certainly not clear to me that introducing much more unsoundness is the right way to address the “friction between two worlds” problem. I’m not really even clear that the “friction between two worlds” is a problem that needs solving.

No matter how we choose to spell “really-a-float”, I think it’s reasonable to start annotating functions in the stdlib/typeshed or other libraries that actually always return a float (and don’t mind committing to this for the future) as such. Some library authors may not want to commit to this, in which case a caller who needs a narrower type needs to check the return value. This is no different from anywhere else in the type system.
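The caller-side check mentioned above can look like this minimal sketch (compute is a hypothetical library function whose annotation promises float, i.e. possibly an int under today’s rules):

```python
def compute() -> float:
    return 2                     # returning an int is legal under the special case

val = compute()
if not isinstance(val, float):   # narrow at the boundary where it matters
    val = float(val)
assert type(val) is float and val == 2.0
```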


  1. An earlier post in this thread suggested that there were non-specified other issues with complex, and mentioned the cmath module, but I don’t know what those problems are; as far as I can tell the functions in the cmath module are perfectly happy if you give them int or float objects. ↩︎

12 Likes

Thanks to everyone who provided further feedback on the options themselves and allowed putting aside for a moment that a road to some options would be harder than others to justify.

I spoke with a couple of other people about this. While I’m not a fan of some of these options, after getting clarification on something that wasn’t clear to me, and confirming there is room to change this twice if things are considered carefully each time, I’m less strongly attached to something like option 1 and jumping right to “make it correct with the tools we have now”. Some of the initial arguments were so attached to not breaking anything that it seemed like any breakage, even necessary breakage, would be so hard fought that there would only be one shot before it was just too much churn.

I still think that would be appropriate down the line to fix this on correctness, but there are some other changes I’d like to see happen before then, namely definable rules for inference regarding builtin types.

The long term ideal that actually satisfies both groups’ needs here, where users should not need to think about their numeric types most of the time, is that if they are working purely with Python builtin types, an annotation shouldn’t be needed at all, and inference in that case should be the appropriate union of plausible types rather than Any (falling back to specific types based on a definable set of rules over decidable type interfaces, with Any as the remaining fallback).

I think there are some important steps to that being plausible already in motion with some current specification updates clarifying terminology around Any, but that is further away from being possible now than is realistic to plan around.

At that point, we wouldn’t need the rule that float is transformed to float | int; we can just let it be detected as int, float, int | float, complex (etc.) as appropriate in the absence of an annotation. If and when we reach that point, this rule should be revisited if it still exists. While that hypothetical change would also be breaking, the impact would be smaller because type checkers would infer here rather than users supplying a type.

In the short term, having a stand-in type for the runtime meaning of both float and complex, allowing them to be user denotable would be fine (option 4). I find this dissatisfying for what it does to user expectations and consistency around isinstance and type narrowing, but if the system is open to incremental change here and it isn’t just going to be something that can only be revisited every 7 years, making it denotable and using it in some places in the standard library typings is a good first step.

4 Likes

Apologies if that has already been hashed out somewhere in the flurry of replies: Under option 4, would the respective type of x = 1.0 and y = float(1) be the union of float and int, or strictly float as expressed by the new type?

I’ve already responded to you privately, but in case anyone else finds this interesting I’ve decided to leave a reply here as well:

I did indeed get a little carried away with choosing AnyOf[float, int], so thanks for catching that this particular gradual spelling was too broad and would lead to problems. There is however still a gradual spelling that is much safer and still yields all the same benefits. Depending on the exact semantics of AnyOf this could be spelled as AnyOf[float, float | int], but a better analog is perhaps a constrained TypeVar where float and float | int are the only valid solutions/materializations. This way there is no bidirectional compatibility with int, and we only allow bidirectional compatibility between float and float | int.

But I agree with you that it would be helpful to be more precise about floats in return types whichever option we choose and that we probably don’t need this gradual escape hatch as long as we don’t try to shoot straight for Option 1. But if someone feels very strongly about getting more precise numeric types sooner rather than later, this at least would give them a viable avenue to pursue, that is not as disruptive.

Offtopic ideas about AnyOf

This mistake has made me realize that perhaps AnyOf would be more useful as a collection of possible materializations, rather than an arbitrary union. This makes it a perfect analog to a constrained TypeVar and still leaves open the door for other more broad gradual type constructors like AnyUnionOf.

Another possibility would be to make AnyOf a constructor like TypeVar so we can also do AnyOf(bound=float) which would allow any materialization that is float or a subtype of float, so we have an analog for both bounded and constrained type vars.


The float special case is currently pure syntactic sugar, so from what I understand it only applies to annotation expressions, i.e. it does not change inference or runtime behavior. It logically follows that literal floats would always be exactly float and not expanded to float | int; the same goes for float used as a constructor, since that is a runtime use and not an annotation use. This would be true for any option we choose, so this discussion is more about how we can spell this type explicitly in an annotation vs. this type being inferred through assignments or control flow[1].
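A small runtime illustration of that split (assuming, as described above, that the special case applies only to annotation expressions; the static side is what a type checker reports, while the assertions show the runtime side):

```python
x = 1.0          # inferred, no annotation: exactly float
z = float(1)     # constructor call, a runtime use: exactly float
y: float = 1     # annotation use: the special case lets an int through

assert type(x) is float
assert type(z) is float
assert type(y) is int    # annotations never convert values at runtime
```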


  1. i.e. mostly isinstance checks ↩︎

1 Like