Would greatly appreciate an explanation for the result below:
x = 4
print(~x)
I understand that in binary, x = 1 0 0
So I thought: ~x = 0 1 1 (negation). Hence, 3. But why is the result -5?
Thank you.
You are considering only the lower 3 bits, but there are a lot more bits involved.
In 32 bits, ~4 = 0xfffffffb, which is -5 as a signed quantity.
If you only want to work on 3 bits, you can mask the result.
print(~4 & 7)
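A quick way to check this in the interpreter (a small sketch; the expected output is shown as comments):
x = 4
print(~x)                    # -5: Python reports the signed value
print(~x & 0b111)            # 3: keeping only the low 3 bits gives the 0 1 1 you expected
print(hex(~x & 0xFFFFFFFF))  # 0xfffffffb: the same pattern viewed as an unsigned 32-bit value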
Here is the full problem:
x = 4
y = 1
a = x & y
b = x | y
c = ~x # tricky!
d = x ^ 5
e = x >> 2
f = x << 2
print(a, b, c, d, e, f)
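For reference, this is what each line works out to (worked by hand, with the results added as comments; only the low few bits are shown):
x = 4              # 0b100
y = 1              # 0b001
a = x & y          # 0b100 & 0b001 = 0b000 -> 0
b = x | y          # 0b100 | 0b001 = 0b101 -> 5
c = ~x             # -(x + 1) -> -5
d = x ^ 5          # 0b100 ^ 0b101 = 0b001 -> 1
e = x >> 2         # 0b100 shifted right twice = 0b001 -> 1
f = x << 2         # 0b100 shifted left twice = 0b10000 -> 16
print(a, b, c, d, e, f)   # 0 5 -5 1 1 16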
However, this problem is in material for total Python beginners… I wonder why it was not really explained, like why it is in 32 bits… This part I don’t understand much. Perhaps I will have to come back to this at another time…
Thank you for the explanation.
When working with bits you must consider the number of bits that matter to you.
It may be 8, 16, 32, 64 or more.
In network code I have seen smaller sizes like 3 bits being important.
Also, it is common for bit patterns to be printed as unsigned values, but Python is always signed.
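One way to see the “same bits, two readings” point is to pack a value and read it back both ways (a sketch using the struct module; little-endian byte order is an arbitrary choice here):
import struct

raw = struct.pack("<i", -5)              # the four bytes of -5 as a signed 32-bit value
print(raw.hex())                         # fbffffff
print(struct.unpack("<I", raw)[0])       # 4294967291: the same bytes read back as unsigned
print(hex(struct.unpack("<I", raw)[0]))  # 0xfffffffb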
I see. Now, I am wondering… For the rest of those operations above, I got them right without considering the number of bits… When do we consider the number of bits for a certain bitwise operation? (Sorry if my questions make no sense… total beginner.) Thank you!
If the high bits are all zeros you do not see them.
But if the high bits are ones then it becomes obvious that something is happening.
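A small illustration of that (expected output as comments):
print(bin(4))          # 0b100: the high zero bits are simply not shown
print(bin(~4))         # -0b101: Python prints a negative int with a minus sign, not as a bit pattern
print(bin(~4 & 0xFF))  # 0b11111011: mask to 8 bits and the run of high one bits becomes visible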
Python ints are signed and have infinite range. This makes the unary inverse ~ a little surprising.
We have to trust the core devs that this is the least surprising it could be and still be correct. @barry-scott has given you the 32-bit version of this story. The clever bit is the extension to infinite integers.
The key is to understand that each positive integer, thought of as binary (which is how it is stored, of course) effectively has an infinite number of zeros stretching off to the left of what we naively think of as its simple value, and each negative integer has an infinite stretch of ones.
An infinite stretch of just ones would represent -1, since if you added one to it, the carry would turn them all to zero. (Mathematicians in the readership are now weeping.) To invert a number bit-wise is to subtract it from -1, which doesn’t involve any carries.
...11111111111111111111111111111111111111111 # -1
...00000000000000000000000000000000000000100 # 4
...11111111111111111111111111111111111111011 # ~4 = -1 - 4 = -5
Bit-wise and, or and xor will give the answers you expect with positive integers, and you can probably now work out what happens with negative ones.
An infinite stretch of 1 bits is equivalent to the infinite sum 2^0 + 2^1 + 2^2...
which can be evaluated as follows:
x = 2^0 + 2^1 + 2^2 + 2^3 + 2^4...
2x = 2 * 2^0 + 2*2^1 + 2*2^2 + 2*2^3 + 2*2^4...
2x = 2^1 + 2^2 + 2^3 + 2^4...
2x - x = -2^0 + (2^1 - 2^1) + (2^2 - 2^2) + (2^3 - 2^3)...
2x - x = -2^0
x = -1
Mathematically, the sum of all powers of two is, in fact, -1. Obviously this requires an understanding of “equality” that encompasses the concept of limits, but it does mean that twos-complement arithmetic is less nonsensical than you might think.
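The finite version of the same argument can be sketched in code: every partial sum is an all-ones bit pattern, and reinterpreting it as a k-bit twos-complement value always gives -1 (as_signed below is a hypothetical helper written for this illustration, not a standard function):
def as_signed(bits, k):
    # Reinterpret a k-bit unsigned pattern as a signed twos-complement value.
    return bits - (1 << k) if bits >= (1 << (k - 1)) else bits

for k in (8, 16, 32):
    partial = sum(1 << i for i in range(k))        # 2^0 + 2^1 + ... + 2^(k-1) == 2**k - 1
    print(k, bin(partial), as_signed(partial, k))  # the last value is -1 every time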
Note that the same logic with negative exponents will show that, in effect, the binary 0.111111111111111111111111... = 1, which is the same as proving that in decimal, 0.999999... = 1. This idea of limit-based arithmetic makes good sense and solves useful problems.
Oh don’t! Now even the engineers are getting queasy.
Maybe. But it does at least mean that the ideas behind twos-complement integer representation are less weird than they might seem at first.
But to be quite honest, I don’t see this as being at all a problem. There are other ways of defining distance that mean that, in a very real sense, adding more powers of two DOES get you progressively closer to negative one. As such, this sum actually converges. (And yes, this sort of distance metric fits all the normal expectations of distance.) The history of mathematics is full of ideas that start out as insane, then become better understood, and finally accepted. Zero? Negative numbers? Complex numbers? Algebra? Imagine a time before them, and introducing people to these concepts, which are taught now to young children as simple facts. How is algebra with infinite sums any different?
Mathologer in that video claims that these equivalences are false. But why? On the basis of partial sums? That doesn’t work - you can’t use partial sums on a divergent series. But when you take the entire infinite sum, it works correctly, just like how 0.999999999… is truly equal to 1, not infinitesimally smaller than it - which also doesn’t work with a partial sum.
Anyway, not really the point here, the only thing that matters is that, in computing, an infinite sequence of leading 1 bits is meaningful for negative numbers
Maybe if you use some non-standard definition. But saying “Mathematically, the sum of all powers of two is, in fact, -1”, i.e., that that’s “the” answer, when the standard definition disagrees, just isn’t right.
What do you mean it doesn’t work?
What’s the standard definition?
Take any partial sum from the sequence 9/10 + 9/100 + 9/1000… and it won’t be equal to 1, it’ll be a little bit smaller. No matter how far you go, it will always be smaller than 1. However, the infinite sum is NOT smaller than 1, not even by some infinitely-small margin. It is equal to 1.
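A short sketch of that in exact arithmetic: every partial sum falls short of 1, but the gap is exactly 1/10**n and shrinks towards zero:
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10**n)
    print(n, partial, 1 - partial)   # the gap is always exactly 1/10**n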
Anyway, not really the point here, the only thing that matters is that, in computing, an infinite sequence of leading 1 bits is meaningful for negative numbers
This is probably the thing to watch.
However, since I caused this by casual mention of infinity, let me try to be more accurate than:
each positive integer, thought of as binary (which is how it is stored, of course) effectively has an infinite number of zeros stretching off to the left
More accurately, Python stores integers and does arithmetic in a number of bits M, which it chooses, including a sign bit. M is big enough that making it bigger would only add more zeros to the left (positive numbers) or ones to the left (negative numbers).
Bit-wise invert operations can be done in that field of M bits. Bit-wise binary operations can be done in a field big enough to store the larger of the two operands. Linear arithmetic (add, multiply, etc.) generally needs a larger field than either operand. The result might fit into fewer bits when stored.
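A few observations that show the “as if sign-extended” behaviour in practice (just observable behaviour, not a claim about how CPython stores ints internally):
print(-1 & 0xFF)     # 255: the low 8 bits of ...11111111 are all ones
print(-5 & 0xFF)     # 251 == 0b11111011: an 8-bit window onto ~4
print(-5 & 0xFFFF)   # 65531 == 0xfffb: widen the window and more one bits appear
print((4).bit_length(), (-5).bit_length())   # 3 3: only the magnitude's bits are counted; the sign is kept separately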
What’s the standard definition?
I get the impression that you know :-). Limit of the partial sums, with the usual distance metric. You yourself said “There are other ways of defining distance”, presumably because you know that they’re not the standard one.
If you ask 100 mathematicians about that series, how many do you think will say “the sum is -1” and how many will say “diverges to infinity”?
And if you ask them about 9/10 + 9/100 + 9/1000…, how many will say “the sum is 1” and how many will say something else?
Yep, very familiar with 3blue1brown. (I even have one of his pi plushies beside me on the desk.) That was one of the examples I was thinking of when I said that there are other distance metrics, and that what diverges in one system can converge in another.
If you ask 100 mathematicians about that series, how many do you think will say “the sum is -1” and how many will say “diverges to infinity”?
Infinity isn’t a number. So I would be very curious whether any professional mathematicians say that it “diverges to infinity” and how many would instead say weasel words like “it has no value”, which is just as true of the square root of negative one if you deny complex numbers. Remember, we have multiple concepts that conflict, even within one programming language:
>>> math.sqrt(-1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: math domain error
>>> (-1) ** 0.5
(6.123233995736766e-17+1j)
so there are times when the correct answer is “there is no value” and there are times when the correct answer is “the value of sqrt(-1) is 1j”.
So there may be one definition by which this infinite sum “has no value” and another one by which it “has a value of -1”, but I don’t think “diverges to infinity” is useful here.
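For what it is worth, the complex-aware library takes the “it has a value” view:
import cmath
print(cmath.sqrt(-1))   # 1j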
So I would be very curious whether any professional mathematicians say that it “diverges to infinity”
That’s verbatim from Series -- from Wolfram MathWorld
That’s verbatim from Series – from Wolfram MathWorld
Is that written by professional mathematicians or is it a wiki? Never really sure.
Mathematicians in the readership are now weeping.
We are fine. We know about inverse limits.
divergent series
To be calling them divergent, a notion of convergence (a topology) needs to be introduced.
The two of you are correct, but are talking about different such notions.