I just came across the returns package. Its documentation states:
None is called the worst mistake in the history of Computer Science.
which goes against my understanding. I have always seen Python's None and other languages' null (C++, C#, Java, …) as quite different things, and I would like to sanity-check that view. As far as I know, the "billion dollar mistake" quote is strictly about null as a language construct, not about None-style constructs. If that's right, the quote is misapplied here, and I'd love to hear other opinions on this!
So, None in Python seems mostly sane and safe to me. A guard such as isinstance(some_obj, SomeType) reliably protects against a "null dereference": once it passes (isinstance returns True), there is a guarantee that an object of the correct type is at hand. None is safely excluded, as it sits entirely outside whatever type hierarchy we are working in:
x = 42
NoneType = type(None)  # also available as types.NoneType since Python 3.10
if not isinstance(x, NoneType):
    pass  # x is guaranteed not to be None here
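To make that concrete, here is a minimal sketch (SomeType and touch are made-up names for illustration) of how a passed isinstance guard rules None out:

class SomeType:
    some_member: int = 0

def touch(some_obj: object) -> None:
    if isinstance(some_obj, SomeType):
        # None can never reach this branch: isinstance(None, SomeType) is False,
        # because NoneType sits outside SomeType's hierarchy.
        print(some_obj.some_member)

touch(SomeType())  # prints 0
touch(None)        # the branch is skipped; no AttributeError possible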
Dereferencing anything that is not of NoneType is safe in the sense that no AttributeError on NoneType is raised, which seems closest in spirit to a null dereference exception (though it is still quite different, as Python has no concept of null pointers at all). In the null family of languages, null is a valid value of every reference (as opposed to value) type; so even when some_obj is declared as a SomeType and the compiler has checked that, it might still turn out to be null and blow up on dereference. That cannot happen in Python (although the access might blow up for other reasons, such as a non-existent member); e.g., in a properly type-checked code base,
def set(x: SomeType) -> SomeType:  # SomeType: any class with a some_member attribute
    x.some_member = 42
    return x
will never blow up because of None, whereas similar constructs might very well do so in null languages (even though those are usually statically typed to begin with).
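For contrast, here is a minimal sketch of how a static checker such as mypy treats the same code (the exact error wording varies by mypy version; SomeType is again a made-up class for illustration):

from typing import Optional

class SomeType:
    some_member: int = 0

def set(x: SomeType) -> SomeType:
    x.some_member = 42
    return x

maybe: Optional[SomeType] = None
# set(maybe)  # flagged by mypy: incompatible type "Optional[SomeType]"; expected "SomeType"
if maybe is not None:
    set(maybe)   # fine after narrowing: mypy knows maybe cannot be None in this branch
set(SomeType())  # fine: a plain SomeType argument can never be None

In other words, None has to be opted into explicitly via Optional[...], and the checker then forces you to handle it before any member access.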
So how do you feel about the “billion dollar mistake” quote in the context of Python and its None? Is it applicable?