Some popular libraries, including gevent, disable subnormals in floating point maths, and this apparently takes effect for the entire Python process as soon as the library is imported.
I can see from the bug report that this is bad: many floating point algorithms rely on subnormals and will fail to converge without them. But I’m not quite sure how bad it is.
I presume that it’s not serious enough to deal with in the interpreter, and we can just let the various libraries get around to fixing it in their own time. Yes?
Or maybe the interpreter should do something about this – but what? Is it worse for the interpreter to mess with CPU flags than for libraries to do it?
In the meantime, are these checks guaranteed to correctly detect whether subnormals are enabled?
```python
assert math.nextafter(0.0, 1.0) != 0.0  # Python 3.9 and higher
assert sys.float_info.min / 2 != 0.0    # older versions
```
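For context, here is a minimal sketch of how I imagine combining the two checks into one version-aware helper (`subnormals_enabled` is my own name, not an existing API, and I'm assuming a standard CPython build where subnormals have not been disabled):

```python
import math
import sys

def subnormals_enabled() -> bool:
    """Return True if the process still produces subnormal floats."""
    if sys.version_info >= (3, 9):
        # nextafter(0.0, 1.0) is the smallest positive subnormal;
        # if subnormals are flushed to zero, the comparison fails.
        return math.nextafter(0.0, 1.0) != 0.0
    # Fallback for older versions: halving the smallest *normal*
    # float should yield a subnormal, not zero.
    return sys.float_info.min / 2 != 0.0

print(subnormals_enabled())
```

Whether either comparison is actually guaranteed to observe the CPU's flush-to-zero state is exactly what I'm asking.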
Thanks in advance.