Above someone said that there are issues with doing this with py.typed so I don't know if that can work. That level of granularity would be good, but I see that as a convenience for library authors, whereas a per-file flag would still be needed to handle all cases. If you write a standalone script/notebook that uses numpy etc. then you could use # type: strict_float there even though you don't have a py.typed file. It could also be useful for a library to migrate stubs incrementally by adding the directive one file at a time.
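For instance, a standalone script under such a directive might look something like this (hypothetical, since the exact spelling of the directive is only a proposal in this thread):

# type: strict_float   # hypothetical per-file directive

def rescale(x: float, factor: float) -> float:
    # Under the proposed strict mode, float would mean exactly float here,
    # so rescale(1, 2) would be flagged rather than silently accepted.
    return x * factor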
It is a bad idea but the problem is the special case rather than strict_float. The special case makes typing harder to use, understand and learn. The bad idea is that float does not mean float. Having StrictFloat is still confusing because people still have to learn the difference between float and StrictFloat and forever deal with float being an ambiguous annotation.
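A minimal illustration of the special case as it works today:

def half(x: float) -> float:
    return x / 2

half(1)  # accepted by type checkers: an int argument satisfies a float annotation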
This would be good but I think that it does not really work the way that people expect. The problem with the numeric tower is that it supposes that you just have some object x of, say, Real, but does not say how you can do useful operations with that object like compute sin(x) or create a 2 of the same type as x, so you can't really use it to do anything more useful than just converting x to a float. It is quite clear that the numeric tower was designed by people who are more interested in class hierarchies than in writing numeric code with different types.
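For example, given only the Real ABC there is not much a generic function can portably do except drop down to float (a sketch):

import math
from numbers import Real

def f(x: Real) -> float:
    # Real gives us the field operators and __float__, but no way to ask for
    # a sin of x in x's own type, so the only portable option loses exactness:
    return math.sin(float(x))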
The Real type is just defined as having the operators that make an ordered field, but meaningfully working with a generic field, even without functions like sin and cos, means needing to be able to do basic things like create a 1 or 0 of the correct type. This for example is a bug inherited from the Real ABC:
>>> from fractions import Fraction
>>> f = Fraction(0)
>>> (f.real + 1) / 3
Fraction(1, 3)
>>> (f.imag + 1) / 3
0.3333333333333333
The bug is writing return 0 in a situation where a different type of zero should be returned, but Real does not know how to create anything of the correct type.
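The offending definitions are the ones the Real ABC in numbers.py provides for the Complex interface, which look roughly like this (paraphrased excerpt, not the full source):

from numbers import Complex

class Real(Complex):
    @property
    def real(self):
        return +self  # a value of the subclass's own type

    @property
    def imag(self):
        return 0      # always the int 0, whatever the concrete subclass is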
What the numeric tower provides that is useful are the methods __complex__, real, imag, __float__, numerator, denominator and __index__. They are useful not because they allow you to work with a given type but because they allow you to deconstruct it and convert to known types that you can work with. If you wanted to work with the original types you would also need constructors to do things like make a Complex from two Reals, or a Rational from two Integrals, or create any type from an int, but the numeric tower has no constructors.
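For example, those methods are enough to lower an unknown Rational or Real to concrete types that you can construct and compute with (a sketch):

from fractions import Fraction
from numbers import Rational, Real

def as_fraction(q: Rational) -> Fraction:
    # Deconstruct into integers and rebuild as a known concrete type.
    return Fraction(int(q.numerator), int(q.denominator))

def as_float(x: Real) -> float:
    # Real guarantees __float__, so this always works (at the cost of exactness).
    return float(x)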
If you actually want to work with the given Real type then you need to have functions like sin and cos that work with the type. In Julia those functions use multiple dispatch, so if you have x you can compute sin(x) if it is defined for the type. In Python the way this works is different. It is not that the type of x knows operations like sin and cos, but rather that you have an object with a set of functions that work with a given type and will coerce inputs to that type, like:
def sin(x: T | int) -> T: ...
Generic code here needs the domain/namespace/context object that holds these functions. Typing this generically is complicated but it would be something like:
from typing import Self, Protocol

class EField(Protocol):
    def __add__(self, other: Self | int, /) -> Self: ...
    def __pow__(self, other: int, /) -> Self: ...
    ...

class RealFuncs[E](Protocol):
    def sin(self, x: E | int, /) -> E: ...
    def cos(self, x: E | int, /) -> E: ...
    ...

def generic_code[E: EField](D: RealFuncs[E], x: E):
    return D.sin(x)**2 + D.cos(x)**2

import math
a = generic_code(math, 1)

import cmath
b = generic_code(cmath, 1)

import numpy as np
c = generic_code(np, 1)

import mpmath
d = generic_code(mpmath, 1)

ctx = mpmath.MPContext()
ctx.dps = 50
e = generic_code(ctx, 1)

import gmpy2
f = generic_code(gmpy2, 1)

for v in [a, b, c, d, e, f]:
    print(v, type(v))
It is the domain object D that makes this work with its functions sin and cos. The other argument to generic_code is always 1, but in the same way that math.cos can accept an int, all of these functions can coerce an int as well. It is dispatching on the domain object D, rather than the type of the argument x, that makes it possible to work with different types and still be able to pass an int for x.
These functions will coerce many more things than just int so the signatures are generally like:
def sin(x: Coercible[T]) -> T
It is hard to define the Coercible[T] type in a generic way though, beyond writing something like T | int.
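A hypothetical spelling just to make the shape concrete, re-spelling the RealFuncs protocol from above (the alias here understates what domains like numpy or mpmath will actually coerce):

from typing import Protocol

type Coercible[T] = T | int  # hypothetical; real coercion rules are much wider

class RealFuncs[E](Protocol):
    def sin(self, x: Coercible[E], /) -> E: ...
    def cos(self, x: Coercible[E], /) -> E: ...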
The array API defines a way to get the domain object from an array:
>>> import numpy as np
>>> a = np.array([1])
>>> D = a.__array_namespace__()
>>> D
<module 'numpy' >
>>> D.sin(a)
array([0.84147098])
Using __array_namespace__ works like multiple dispatch, so it only works if you are very strict about the types: an array must definitely be an array of the expected type rather than something like 1 or [1, 2] that could be coerced by np.sin. Dispatching on the type of a as this does is very different from the model of typing where you can expect to be able to pass an int in place of the proper numeric type that you want to work with.
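A sketch of what generic code looks like under that model, assuming an array type that actually implements __array_namespace__ (e.g. NumPy 2.x arrays):

import numpy as np

def sin2_plus_cos2(x):
    # The domain object is recovered from the argument itself, so x must
    # already be a proper array object, not merely something coercible to one.
    xp = x.__array_namespace__()
    return xp.sin(x) ** 2 + xp.cos(x) ** 2

print(sin2_plus_cos2(np.array([0.5, 1.0])))
# sin2_plus_cos2(1) or sin2_plus_cos2([1, 2]) raise AttributeError instead,
# even though np.sin itself would happily coerce either of them.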