I’ll just cosign everything @oscarbenjamin and @mikeshardmind have said, but there are more issues with this proposal. Making `float` mean one thing as a runtime value and another thing as an annotation creates problems.
What should a type checker infer here? How will this be consistent in library code across type checkers?
```python
from typing import Any

def convert_as[T](value: Any, typ: type[T]) -> T:
    # raises custom exceptions and does parsing safe for untrusted web input
    ...

convert_as(1, float).hex()  # error?
```
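To make the conflict concrete, here’s a minimal runtime sketch of what such a function might do (the body and the `typ` parameter name are my assumptions for illustration, and I’ve used a `TypeVar` rather than PEP 695 syntax for compatibility). At runtime, `float` passed as an argument is exactly `builtins.float`, so the natural implementation returns an actual `float` and `.hex()` works; a checker that reads the annotation `type[float]` as `type[int | float]` could infer `T = int | float` and flag that same call.

```python
from typing import Any, TypeVar

T = TypeVar("T")

def convert_as(value: Any, typ: type[T]) -> T:
    # Hypothetical minimal body: dispatch on the runtime class object.
    # At runtime, `float` here is exactly builtins.float, not int | float.
    if typ is float:
        return float(value)  # an actual float instance, so .hex() is fine
    if typ is int:
        return int(value)
    raise TypeError(f"unsupported target type: {typ!r}")

result = convert_as(1, float)
print(type(result).__name__, result.hex())
```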
Here you have a case where a runtime type used as a value conflicts with its meaning in a type expression.
For fun, I’ve also expanded the required overloads in Michael’s prior post.
```python
@overload
def y(a: float & ~int, b: float & ~int) -> float & ~int: ...
@overload
def y(a: float & ~int, b: float) -> float & ~int: ...
@overload
def y(a: float, b: float & ~int) -> float & ~int: ...
@overload
def y(a: int, b: int) -> int: ...
```
I think it’s clearly better and friendlier to work toward separating these rather than entangling them further than they already are.
Here’s what it would look like with these meaning what they actually are at runtime:

```python
@overload
def y(a: float, b: float) -> float: ...
@overload
def y(a: int, b: float) -> float: ...
@overload
def y(a: float, b: int) -> float: ...
@overload
def y(a: int, b: int) -> int: ...
```
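As a sanity check that this second set matches runtime semantics, here’s a condensed, runnable sketch: a single implementation behind the overloads, relying on Python’s own numeric promotion (the body and the reduced two-overload form are my assumptions; the mixed int/float cases fall out of the `float` overload once `int` is no longer implicitly a `float`).

```python
from typing import overload

@overload
def y(a: int, b: int) -> int: ...
@overload
def y(a: float, b: float) -> float: ...
def y(a, b):
    # Python's arithmetic already implements the table above:
    # int op int -> int; anything involving a float -> float.
    return a + b

print(type(y(1, 2)).__name__)    # int
print(type(y(1, 2.0)).__name__)  # float
```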
In both cases, there are additional implicit cases, when either a or b might be “either a float or an int”, which the overload rules capture.