Implement `precision` format spec for `int` type data

I know, but I think that's fine. `.` is already implemented for `str` data in a lossy, truncating way:

f"{'Hello World':.5}" # 'Hello'

Implementing `z.` in a lossy, modular-arithmetic way is exactly what we want. It's like a signed char in C going 125, 126, 127, -128, -127, … round and round.
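
For illustration (nothing to do with the proposal itself), that wraparound can be observed in Python today with `ctypes`, whose fixed-width integers silently truncate:

import ctypes

ctypes.c_int8(127).value # 127
ctypes.c_int8(128).value # -128, wrapped round
ctypes.c_int8(129).value # -127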

The most likely contexts in which `z.` would be used involve bytes in range(0, 256) or range(-128, 128), which `z.` formats consistently, since a signed char and an unsigned char share the same binary representation. Any hypothetical problem arising from a user's program printing 0b00000001, which the user interprets as 1 when the underlying integer is actually 257, sounds like a problem with a library, not a problem with the formatting (e.g. a poorly written bitmap library trying to write pixels with value 257).
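
To see the signed/unsigned point concretely, reducing mod 256 by hand with today's format (since `z.` doesn't exist yet):

format(-1 % 256, "08b") # '11111111'
format(255, "08b")      # '11111111', signed -1 and unsigned 255 share the same bits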

Another context is when one purposely doesn't care about the q part of q, r = divmod(x, base ** n). The formatting is then a well-defined bijection between the equivalence classes $\mathbb{Z}/\text{base}^n\mathbb{Z}$ and the formatted strings. I think this is what Raymond wanted.
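
In code, with base 2 and n = 8 (just restating the above with divmod):

q, r = divmod(257, 2 ** 8)           # q == 1 gets discarded, r == 1 is what z. would format
[x % 2 ** 8 for x in (1, 257, -255)] # [1, 1, 1], one equivalence class, one string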

Without the z, the . precision should work the same as in %-formatting: it gives the minimum number of digits (that is, excluding the 0b/0o/0x prefix, sign, space, grouping separators, etc.), and negative numbers, i.e. x = -y, should be formatted as %-formatting does: y's formatting with a negative sign in front.
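
For reference, this is what %-formatting already does with precision on ints:

"%.5d" % 42  # '00042'
"%.5d" % -42 # '-00042'
"%.5x" % 255 # '000ff'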

With the z (which looks like a 2 if you squint hard enough :sweat_smile:) that activates two's complement mode: take x mod base ** n and format that. It shouldn't do the weird variable-length formatting:

f"{200:z.8b}" # '011001000' is wrong
f"{200:z.8b}" # '11001000'  is right

Yeah, I don't think I've ever seen a two's complement decimal (ten's complement?) rofl. I don't think that would be useful to anyone. We would only implement `z.` for binary, octal, and hex.