Let Python just not reveal that useless error. In real life X/0 will always give 0.
That's so completely untrue that it's hard to even begin to answer. No, dividing by zero does not give zero.
I don't really see much point in this, maybe we could send a warning?
It is conventional that 0/0 is always 0. Imagine a child is learning to divide with Python and this error appears.
Anyway, X / 0 will always give an error in Python; I think your point makes no sense.
I'm not going to cast aspersions on the quality of your education, but it does seem to disagree with the rest of the world:
Anyway, X / 0 will always raise an error in Python; I think your point is nonsense. My brain doesn't raise an error when I'm dividing a variable by zero.
Then you're welcome to write MyBrainPython that behaves the way your brain does. I'll keep using the Python that follows the straightforward rule that division by zero can't be done.
There are ways to give a meaningful value to division by zero (such as limits), but the result is seldom zero.
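For what it's worth, here is a small numeric sketch (plain Python, nothing thread-specific) of why limits don't lead to zero: as the divisor shrinks, 1/x blows up rather than settling at 0, and a 0/0-style form such as sin(x)/x can tend to any value at all (here, 1).

import math

# As x -> 0+, 1/x grows without bound instead of approaching 0.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(f"1/{x} = {1 / x}")

# A 0/0-style form can tend to any value; sin(x)/x tends to 1.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(f"sin({x})/{x} = {math.sin(x) / x}")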
Not even C++ throws an error when I'm dividing by 0; it doesn't seem correct to me.
My mother is a math teacher, I have been programming since I was 6 years old, and I have never seen such a silly mistake: 1 / 0 is always 0, as 0 x 1 is always 0; you can't divide nothing by nothing.
Infinity is like God, it's there but I've never seen it.
Maybe it would be nice to return a variable representing "nan" that has the value 0. Like some kind of abstraction.
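Just to note how the existing float NaN already behaves (standard IEEE 754 behaviour, easy to check in a REPL): a NaN that "has the value 0" can't really exist, because NaN compares unequal to everything, including itself, and it propagates through arithmetic instead of acting like 0.

import math

nan = math.nan           # same thing as float("nan")
print(nan == 0)          # False: NaN is not equal to 0
print(nan == nan)        # False: NaN is not even equal to itself
print(nan + 1, nan * 0)  # nan nan: it propagates, it doesn't act like 0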
Python simply does what most languages do, and none of the languages I tested return 0:
R, JavaScript: -1/0 = -Infinity, 0/0 = NaN, 1/0 = Infinity
Python, C, C++, C#, Java, Go, PHP, Swift, Rust: -1/0, 0/0, 1/0 -> error
Suggesting to return -inf, nan or inf would have a bigger chance at success, but probably still won't happen.
Given that Python now does give float results for integer division, it'd make some sense to extend that to returning NaN or infinity. But I'd be surprised if it happened.
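For reference, a quick sketch of what Python does today (standard CPython, no extra libraries): both int and float division by zero raise ZeroDivisionError, even though the IEEE special values themselves are available as ordinary floats.

import math

try:
    1 / 0
except ZeroDivisionError as exc:
    print("1 / 0 ->", exc)        # division by zero

try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("1.0 / 0.0 ->", exc)    # float division by zero

# The special values exist; the language just never produces them here.
print(math.inf, -math.inf, math.nan)  # inf -inf nan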
Which programming language is that? The one where 1/0 = 0?
That is because the programmer is expected to check for zero before attempting the divide.
This is an optimisation according to the designers of C++.
I'm curious what your C++ compiler does. Mine (GCC on Linux with default settings) terminates with SIGFPE, which seems pretty appropriate.
Seems to be undefined behaviour except if the numbers are IEEE floats:
- The result of built-in division is lhs divided by rhs. If rhs is zero, the behavior is undefined.
- If both operands have an integral type, the result is the algebraic quotient (performs integer division): the quotient is truncated towards zero (the fractional part is discarded).
- If both operands have a floating-point type, and the type supports IEEE floating-point arithmetic (see std::numeric_limits::is_iec559):
  - If one operand is NaN, the result is NaN.
  - Dividing a non-zero number by ±0.0 gives the correctly-signed infinity and FE_DIVBYZERO is raised.
  - Dividing 0.0 by 0.0 gives NaN and FE_INVALID is raised.
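You can see those same IEEE rules from Python too, assuming NumPy is installed: NumPy floats follow the hardware semantics and return a signed infinity or NaN instead of raising (it warns by default; the errstate below just silences that).

import numpy as np

one, zero = np.float64(1.0), np.float64(0.0)
with np.errstate(divide="ignore", invalid="ignore"):
    print(one / zero)    # inf: non-zero divided by +0.0
    print(-one / zero)   # -inf: correctly signed
    print(zero / zero)   # nan: 0.0 / 0.0
    print(np.nan / one)  # nan: NaN propagates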
I searched the web to see what C++ defines and hit Stack Overflow topics on how to raise exceptions; they explained that, by default, the C++ language definition throws no exception here.
SIGFPE is a CPU exception coming from the kernel into user space.
For this code:
#include <cstdio>
int main() {
    float a = 19.3;
    float b = 0.0;
    printf("%f\n", a/b);
    return 0;
}
It prints "inf" for LLVM on macOS, g++ on Fedora, and also for Microsoft Visual C++.
For this int version:
#include <cstdio>
int div(int a, int b) {
    printf("result: %d\n", a/b);
    return a/b;
}
int main() {
    int a = 19;
    int b = 0;
    div(a, b);
    return 0;
}
Microsoft Visual C++ exits with no output and no error message.
g++ reports "floating point exception".
macOS LLVM prints "0".
That's a surprising, to me, set of results.
In what real-life situations do you divide by 0?
Small tip: If I want to ensure the compiler won't optimize things away, I make use of argc in the expression.
#include <iostream>
int main(int argc, char **argv) {
    std::cout << 24 / (argc - 1) << std::endl;
}
This also gave me a floating point exception in G++, but I don't have MSVC to test on, so I don't know if your "no output, no error" was the result of the compiler optimizing away the actual calculation.
First, it is a logical error. If a programmer mistakenly writes code that divides by zero, it suggests they havenāt properly accounted for cases where the divisor could be zero.
Second, division by zero is undefined in mathematics, and depending on the field or context, various interpretations or conventions are applied to handle it. It could be 0, ∞, -∞, or still undefined.
So, being both a logical error and undefined, it must be handled according to the field or context. To avoid hard-to-debug bugs, the best approach is to raise a ZeroDivisionError exception.
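If someone really does want a fallback value in a particular domain, that decision can be made explicit at the call site rather than changed in the language. A minimal sketch (safe_div and its default parameter are illustrative names, not a standard API):

def safe_div(numerator, denominator, default=None):
    """Divide, substituting `default` when the denominator is zero."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        if default is not None:
            return default
        raise  # no sensible fallback: let the caller see the bug

print(safe_div(10, 2))                        # 5.0
print(safe_div(10, 0, default=float("inf")))  # inf, if the domain calls for it
print(safe_div(10, 0, default=0.0))           # 0.0, if the context really wants that
# safe_div(10, 0) with no default re-raises ZeroDivisionError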