I stumbled across this behavior and wondered if it was done for a reason. When trying out underscore grouping in hex format strings, I would have expected the underscore to NOT be counted in the width of the formatted value, but it is:
```python
v = 0x0ee1_f00d        # 32-bit hex value
print(f"0x{v:08X}")    # prints 0x0EE1F00D (0x plus 8 digits; great)
print(f"0x{v:08_X}")   # prints 0xEE1_F00D (0x plus 7 digits plus _)
print(f"0x{v:09_X}")   # prints 0x0EE1_F00D (0x plus 8 digits plus _)
```
This problem is going to compound given that I also have to deal with 64-bit values, 48-bit values, 24-bit values, etc.
Thanks in advance for any help/explanations.
As the field width is often used for column alignment and formatting, I would think it odd to not include any character in the width. Rather than “how many digits should be printed?”, it’s “how wide should this field be in the output?”.
Underscores in numeric literals are irrelevant. There is no difference to Python whether you write 0x0ee1f00d or 0x0_e_e_1_f_0_0_d or 249688077, they are all the same number.
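A quick illustrative check:

```python
# Underscores in an int literal are purely visual grouping;
# the parser discards them, so all three names hold the same int.
a = 0x0ee1f00d
b = 0x0_e_e_1_f_0_0_d
c = 249688077
print(a == b == c)  # True
```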
The format width field is the minimum width of the string, and it includes everything in the string: spaces, zeroes, decimal points (for floats), leading negative sign, everything. Why should underscores be different?
I don’t understand what “problem” you are experiencing.
What is the width you want the string to have? If it is eight hex digits plus an underscore, that makes 9 so your minimum width is 9. Where is the problem?
(Remember that if the string is larger than the minimum width, it will not be truncated.)
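To spell that out for the other sizes you mention: with a separator every four hex digits, n digits need (n - 1) // 4 underscores, so the minimum width is n + (n - 1) // 4. A sketch (the helper name is my own):

```python
def hex_field_width(bits: int) -> int:
    """Minimum field width for `bits` bits of zero-padded hex,
    grouped with an underscore every four digits."""
    digits = bits // 4
    return digits + (digits - 1) // 4  # digits plus separator count

v = 0x0EE1_F00D
for bits in (24, 32, 48, 64):
    w = hex_field_width(bits)
    # mask v to the given bit width, then format with that field width
    print(f"{bits}-bit, width {w}: 0x{v & ((1 << bits) - 1):0{w}_X}")
```

For 32 bits this gives width 9 (8 digits plus one underscore), for 64 bits width 19, and so on.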
From string — Common string operations — Python 3.11.0 documentation
> width is a decimal integer defining the minimum total field width, including any prefixes, separators, and other formatting characters.
`0xEEE1_F00D`: minimum width 8 specified, but 9 characters are needed.
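Concretely, the minimum is simply exceeded, not enforced by truncation:

```python
v = 0xEEE1_F00D
# 8 hex digits plus 1 underscore = 9 characters, despite the minimum width of 8
print(f"0x{v:08_X}")  # prints 0xEEE1_F00D
```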