Would greatly appreciate explanation for below result:

```
x = 4
print ( ~x )
```

I understand that in binary, x = 1 0 0

So I thought: ~x = 0 1 1 (negation). Hence, 3. But why is the result -5?

Thank you.


You are considering only the lower 3 bits, but there are many more bits involved.

In 32 bits, ~4 = 0xfffffffb, which is -5 as a signed quantity.

If you only want to work on 3 bits, you can mask the result:

```
print(~4 & 7)
```
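A quick way to see both views in the interpreter (a small sketch; the 3-bit and 32-bit masks are just example widths):

```python
x = 4

# Python's ~ conceptually flips an infinite run of leading zeros
# into ones, so the result prints as a negative number.
print(~x)                    # -5

# Mask to 3 bits to get the "flip only the low 3 bits" answer.
print(~x & 0b111)            # 3

# Mask to 32 bits to see the unsigned 32-bit pattern.
print(hex(~x & 0xFFFFFFFF))  # 0xfffffffb
```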

Here is the full problem:

```
x = 4
y = 1
a = x & y
b = x | y
c = ~x # tricky!
d = x ^ 5
e = x >> 2
f = x << 2
print(a, b, c, d, e, f)
```
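(For reference, here is how each line evaluates -- a worked check, not part of the original post:)

```python
x = 4   # 0b100
y = 1   # 0b001

a = x & y   # AND: no bits in common          -> 0
b = x | y   # OR:  0b101                      -> 5
c = ~x      # invert: equals -x - 1           -> -5
d = x ^ 5   # XOR: 0b100 ^ 0b101 = 0b001      -> 1
e = x >> 2  # shift right by 2                -> 1
f = x << 2  # shift left by 2                 -> 16

print(a, b, c, d, e, f)  # 0 5 -5 1 1 16
```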

However, this problem is in a Python total beginner's material... I wonder why it was not really explained. Like why is it in 32 bits... this part I don't understand much. Perhaps I will have to get back to this at another time...

Thank you for the explanation.

When working with bits you must consider the number of bits that matter to you.

It may be 8, 16, 32, 64 or more.

In network code I have seen smaller sizes like 3 bits being important.

Also, it is common for bit patterns to be printed as unsigned values, but Python ints are always signed.
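One way to see the width mattering is to mask the same value at a few common widths (the widths here are arbitrary examples):

```python
x = ~4  # -5: Python prints the signed value

# The same bit pattern viewed as unsigned at different widths.
for bits in (8, 16, 32):
    mask = (1 << bits) - 1
    print(bits, hex(x & mask))
# 8 0xfb
# 16 0xfffb
# 32 0xfffffffb
```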

I see. Now, I am wondering... For the rest of those operations above, I got them right without considering the number of bits... When do we consider the number of bits for a certain bitwise operation? (Sorry if my questions make no sense or what... totally beginner) Thank you!

If the high bits are all zeros you do not see them.

But if the high bits are ones, then it becomes obvious that something is happening.

Python `int`s are signed and have infinite range. This makes the unary inverse `~` a little surprising.

We have to trust the core devs that this is the *least surprising* it could be and still be correct. @barry-scott has given you the 32-bit version of this story. The clever bit is the extension to infinite integers.

The key is to understand that each positive integer, thought of as binary (which is how it is stored, of course) effectively has an infinite number of zeros stretching off to the left of what we naively think of as its simple value, and each negative integer has an infinite stretch of ones.

An infinite stretch of just ones would represent -1, since if you added one to it, the carry would turn them all to zero. (Mathematicians in the readership are now weeping.) To invert a number bit-wise is to subtract it from -1, which doesn't involve any carries.

```
...11111111111111111111111111111111111111111 # -1
...00000000000000000000000000000000000000100 # 4
...11111111111111111111111111111111111111011 # ~4 = -1 - 4 = -5
```
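That "subtract it from -1" description can be checked directly, since it means `~x == -1 - x` for every int:

```python
# ~x flips every bit, which is the same as subtracting x from
# an infinite run of ones, i.e. from -1 -- no carries needed.
for x in (0, 1, 4, 255, -1, -5, 10**30):
    assert ~x == -1 - x

print(~4)  # -5, i.e. -1 - 4
```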

And, or and xor will give the answers you expect with positive integers, and you can probably now work out what happens with negative ones.
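For instance, since -1 is an infinite run of ones, it acts as the identity for `&`, absorbs `|`, and turns `^` into `~` (a small check, not from the original reply):

```python
# -1 is ...11111111 in the infinite-bits picture.
print(-1 & 12)  # 12  (AND with all ones keeps every bit)
print(-1 | 12)  # -1  (OR with all ones sets every bit)
print(-1 ^ 12)  # -13 (XOR with all ones flips every bit: ~12)
```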


An infinite stretch of 1 bits is equivalent to the infinite sum `2^0 + 2^1 + 2^2...`

which can be evaluated as follows:

`x = 2^0 + 2^1 + 2^2 + 2^3 + 2^4...`

`2x = 2 * 2^0 + 2*2^1 + 2*2^2 + 2*2^3 + 2*2^4...`

- Multiplying by 2 is adding 1 to the exponent. So
`2x = 2^1 + 2^2 + 2^3 + 2^4...`

- That's almost the same as our original number. Subtract.
`2x - x = -2^0 + (2^1 - 2^1) + (2^2 - 2^2)...`

- Simplify:
`2x - x = -2^0 = -1`

`x = -1`

Mathematically, the sum of all powers of two is, in fact, -1. Obviously this requires an understanding of "equality" that encompasses the concept of limits, but it does mean that twos-complement arithmetic is less nonsensical than you might think.
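The finite version of this is visible in ordinary modular arithmetic: the sum of the first n powers of two is 2**n - 1, which is exactly what -1 looks like modulo 2**n (a quick check; the widths are arbitrary):

```python
for n in (8, 32, 64):
    s = sum(2**i for i in range(n))  # 1 + 2 + 4 + ... + 2**(n-1)
    assert s == 2**n - 1             # geometric series
    assert s % 2**n == -1 % 2**n     # s is congruent to -1 mod 2**n
print("holds for all tested widths")
```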

Note that the same logic with negative exponents will show that, in effect, `.111111111111111111111111... = 1` in binary, which is the same as proving that in decimal, `0.999999... = 1`. This idea of limit-based arithmetic makes good sense and solves useful problems.


Oh don't! Now even the engineers are getting queasy.


Maybe. But it does at least mean that the ideas behind twos-complement integer representation are less weird than they might seem at first.

But to be quite honest, I don't see this as being a problem at all. There are other ways of defining distance that mean that, in a very real sense, adding more powers of two DOES get you progressively closer to negative one. As such, this sum actually converges. (And yes, this sort of distance metric fits all the normal expectations of distance.) The history of mathematics is full of ideas that start out as insane, then become better understood, and finally accepted. Zero? Negative numbers? Complex numbers? Algebra? Imagine a time before them, and introducing people to these concepts, which are now taught to young children as simple facts. How is algebra with infinite sums any different?

Mathologer in that video claims that these equivalences are false. But why? On the basis of partial sums? That doesn't work - you can't use partial sums on a divergent series. But when you take the entire infinite sum, it works correctly, just like how 0.999999999... is truly equal to 1, not infinitesimally smaller than it - which also doesn't work with a partial sum.

Anyway, not really the point here, the only thing that matters is that, **in computing**, an infinite sequence of leading 1 bits is meaningful for negative numbers

Maybe if you use some non-standard definition. But saying *"Mathematically, the sum of all powers of two is, in fact, -1"*, i.e., that that's "the" answer, when the standard definition disagrees, just isn't right.

What do you mean it doesn't work?

What's the standard definition?

Take any partial sum from the sequence 9/10 + 9/100 + 9/1000... and it won't be equal to 1, it'll be a little bit smaller. No matter how far you go, it will always be smaller than 1. However, the infinite sum is NOT smaller than 1, not even by some infinitely-small margin. It is *equal* to 1.
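Exact rationals make the partial-sum claim concrete: each partial sum falls short of 1 by exactly `1/10**k` (a small illustration, not from the original reply):

```python
from fractions import Fraction

total = Fraction(0)
for k in range(1, 10):
    total += Fraction(9, 10**k)
    # Each partial sum misses 1 by exactly 1/10**k.
    assert 1 - total == Fraction(1, 10**k)

print(total)  # 999999999/1000000000 -- always short of 1
```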

Chris Angelico:

Anyway, not really the point here, the only thing that matters is that,

in computing, an infinite sequence of leading 1 bits is meaningful for negative numbers

This is probably the thing to watch.

However, since I caused this by casual mention of infinity, let me try to be more accurate than:

Jeff Allen:

each positive integer, thought of as binary (which is how it is stored, of course) effectively has an infinite number of zeros stretching off to the left

More accurately, Python stores integers and does arithmetic in a number of bits M, which it chooses, including a sign bit. M is big enough that making it bigger would only add more zeros to the left (positive numbers) or ones to the left (negative numbers).

Bit-wise invert operations can be done in that field of M bits. Bit-wise binary operations can be done in a field big enough to store the larger of the two operands. Linear arithmetic (add, multiply, etc.) generally needs a larger field than either operand. The result might fit into fewer bits when stored.
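`int.bit_length()` gives a rough view of those field sizes (a sketch; `bit_length` counts magnitude bits and ignores the sign):

```python
x, y = 4, 1

print(x.bit_length())        # 3 bits hold x
print((x | y).bit_length())  # 3: and/or/xor fit in the larger operand
print((x * x).bit_length())  # 5: multiplication needs a wider field
print((~x).bit_length())     # 3: ~4 is -5; the sign is tracked separately
```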


Chris Angelico:

Whatâ€™s the standard definition?

I get the impression that you know :-). Limit of the partial sums, with the usual distance metric. You yourself said *"There are other ways of defining distance"*, presumably because you know that they're not the standard one.

If you ask 100 mathematicians about that series, how many do you think will say "the sum is -1" and how many will say "diverges to infinity"?

And if you ask them about 9/10 + 9/100 + 9/1000..., how many will say "the sum is 1" and how many will say something else?

Jeff Allen:

Yep, very familiar with 3blue1brown. (I even have one of his pi plushies beside me on the desk.) That was one of the examples I was thinking of when I said that there are other distance metrics, and that what diverges in one system can converge in another.

Stefan Pochmann:

If you ask 100 mathematicians about that series, how many do you think will say "the sum is -1" and how many will say "diverges to infinity"?

Infinity isn't a number. So I would be very curious whether any professional mathematicians say that it "diverges to infinity" and how many would instead say weasel words like "it has no value", which is just as true of the square root of negative one if you deny complex numbers. Remember, we have multiple concepts that conflict, even within one programming language:

```
>>> math.sqrt(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error
>>> (-1) ** 0.5
(6.123233995736766e-17+1j)
```

so there are times when the correct answer is "there is no value" and there are times when the correct answer is "the value of sqrt(-1) is 1j".
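There is also a third behaviour in the standard library: `cmath.sqrt` commits to the complex-plane answer:

```python
import cmath

# cmath works in the complex plane, so sqrt(-1) is well-defined
# and exact, unlike the floating-point detour of (-1) ** 0.5.
print(cmath.sqrt(-1))  # 1j
```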

So there may be one definition by which this infinite sum "has no value" and another one by which it "has a value of -1", but I don't think "diverges to infinity" is useful here.

Chris Angelico:

So I would be very curious whether any professional mathematicians say that it "diverges to infinity"

That's verbatim from Series -- from Wolfram MathWorld

Stefan Pochmann:

That's verbatim from Series -- from Wolfram MathWorld

Is that written by professional mathematicians or is it a wiki? Never really sure.

Jeff Allen:

Mathematicians in the readership are now weeping.

We are fine. We know about inverse limits.

Stefan Pochmann:

divergent series

To be calling them divergent, a notion of convergence (a topology) needs to be introduced.

The two of you are correct, but are talking about different such notions.