PEP 686: Make UTF-8 mode default (Round 2)

I updated the PEP 686 based on discussions in the previous topic.

Major changes from the previous version:

  • Target Python version changed from 3.12 to 3.13.
  • Add locale.getencoding() instead of changing locale.getpreferredencoding(False) to ignore UTF-8 mode.
    • locale.getpreferredencoding() will emit EncodingWarning because UTF-8 mode affects it (opt-in).
  • Add section about encoding="locale" behavior.
  • Add “Use PYTHONIOENCODING for PIPEs” section to rejected ideas.

Abstract

This PEP proposes enabling UTF-8 mode by default.

With this change, Python consistently uses UTF-8 for default encoding of files, stdio, and pipes.

Motivation

UTF-8 has become the de facto standard text encoding.

  • The default encoding of Python source files is UTF-8.
  • JSON, TOML, YAML use UTF-8.
  • Most text editors, including Visual Studio Code and Windows Notepad, use UTF-8 by default.
  • Most websites and text data on the internet use UTF-8.
  • And many other popular programming languages, including Node.js, Go, Rust, and Java, use UTF-8 by default.

Changing the default encoding to UTF-8 makes it easier for Python to interoperate with them.

Additionally, many Python developers using Unix forget that the default encoding is platform dependent. They forget to specify encoding="utf-8" when they read text files encoded in UTF-8 (e.g. JSON, TOML, Markdown, and Python source files). The inconsistent default encoding causes many bugs.
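
For illustration, a minimal sketch of the pitfall (config.json is a hypothetical UTF-8 file containing non-ASCII text):

import json

# Encoding omitted: the locale encoding is used, so on Windows this may
# raise UnicodeDecodeError or silently produce mojibake.
with open("config.json") as f:
    data = json.load(f)

# Encoding specified: behaves the same on every platform.
with open("config.json", encoding="utf-8") as f:
    data = json.load(f)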

Specification

Enable UTF-8 mode by default

Python will enable UTF-8 mode by default from Python 3.13.

Users can still disable UTF-8 mode by setting PYTHONUTF8=0 or -X utf8=0.
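
A running interpreter exposes the mode via sys.flags.utf8_mode (available since UTF-8 mode was added in Python 3.7). For example, in an interpreter started with UTF-8 mode enabled:

>>> import sys
>>> sys.flags.utf8_mode  # 1 when UTF-8 mode is enabled, 0 when disabled
1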

locale.getencoding()

Since UTF-8 mode affects locale.getpreferredencoding(False), we need an API to get the locale encoding regardless of UTF-8 mode.

locale.getencoding() will be added for this purpose. It also returns the locale encoding, but it ignores UTF-8 mode.

When the warn_default_encoding option is specified, locale.getpreferredencoding() will emit an EncodingWarning like open() does (see also PEP 597).

This API will be added in Python 3.11.
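
A sketch of the intended difference, assuming a Python 3.11+ interpreter running in UTF-8 mode on a Windows machine whose ANSI code page is cp1252 (the return values are illustrative):

>>> import locale
>>> locale.getpreferredencoding(False)  # affected by UTF-8 mode
'utf-8'
>>> locale.getencoding()                # ignores UTF-8 mode
'cp1252'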

Fixing encoding="locale" option

PEP 597 added the encoding="locale" option to TextIOWrapper. This option is used to specify the locale encoding explicitly, and TextIOWrapper should use the locale encoding when it is given, regardless of the default text encoding.

But for now, TextIOWrapper uses "UTF-8" in UTF-8 mode even if encoding="locale" is specified. This behavior is inconsistent with the motivation of PEP 597. It exists because, when PEP 597 was written, we did not expect that the default text encoding would be changed by making UTF-8 mode the default.

This inconsistency should be fixed before making UTF-8 mode the default: TextIOWrapper should use the locale encoding when encoding="locale" is passed, even in UTF-8 mode.

This issue will be fixed in Python 3.11.
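
A sketch of the behavior after the fix, under the same assumptions as above (UTF-8 mode enabled, locale encoding cp1252; the file names are hypothetical):

# encoding="locale" always selects the locale encoding, even in UTF-8 mode.
with open("legacy.txt", encoding="locale") as f:  # decoded as cp1252
    legacy_text = f.read()

# encoding omitted: the default text encoding, which is UTF-8 in UTF-8 mode.
with open("data.txt") as f:
    text = f.read()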

Backward Compatibility

Most Unix systems use a UTF-8 locale, and Python enables UTF-8 mode when the locale is C or POSIX. So this change mostly affects Windows users.

When a Python program depends on the default encoding, this change may cause UnicodeError, mojibake, or even silent data corruption. So this change should be announced loudly.

The following guideline can be used to fix this backward compatibility issue:

  1. Disable UTF-8 mode.
  2. Use EncodingWarning (PEP 597) to find every place UTF-8 mode affects (see the example after this list).
    • If the encoding option is omitted, consider using encoding="utf-8" or encoding="locale".
    • If locale.getpreferredencoding() is used, consider using "utf-8" or locale.getencoding().
  3. Test the application with UTF-8 mode.
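
For example, the opt-in warning from PEP 597 (available since Python 3.10) can be enabled while running an application or test suite; every call site that relies on the default encoding is then reported. A minimal sketch (the file name is hypothetical):

# Run with:  python -X warn_default_encoding example.py
# (or set PYTHONWARNDEFAULTENCODING=1 in the environment)

with open("notes.txt", "w") as f:                    # emits EncodingWarning
    f.write("hello")

with open("notes.txt", "w", encoding="utf-8") as f:  # explicit, no warning
    f.write("hello")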

Preceding examples

  • Ruby changed the default external_encoding to UTF-8 on Windows in Ruby 3.0 (2020).
  • Java changed the default text encoding to UTF-8 in JDK 18 (2022).

Both Ruby and Java have an option for backward compatibility, but neither provides a warning like Python’s PEP 597 EncodingWarning for uses of the default encoding.

Rejected Alternative

Deprecate implicit encoding

Deprecating the use of the default encoding was considered.

But there are many cases in which the default encoding is used only to read/write ASCII text. Additionally, such warnings are not useful for applications that are not cross-platform and run only on Unix.

So forcing users to specify the encoding everywhere is too painful, and emitting a lot of DeprecationWarnings would lead users to ignore warnings.

PEP 387 requires adding a warning for backward incompatible changes, but it doesn’t require using DeprecationWarning. So using the opt-in EncodingWarning doesn’t violate PEP 387.

Java also rejected this idea in JEP 400.

Use PYTHONIOENCODING for PIPEs

To ease the backward compatibility issue, using PYTHONIOENCODING as the default encoding for pipes in the subprocess module was considered.

With this idea, users could keep a legacy encoding for subprocess.Popen(text=True) even in UTF-8 mode.

But this idea makes the “default encoding” more complicated, and it is also backward incompatible.

So this idea is rejected. Users can disable UTF-8 mode until they replace text=True with encoding="utf-8" or encoding="locale".
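
For example, a sketch of that migration for a subprocess call (the command is illustrative):

import subprocess, sys

# Before: pipe encoding follows the default text encoding,
# which changes once UTF-8 mode becomes the default.
result = subprocess.run([sys.executable, "--version"],
                        capture_output=True, text=True)

# After: the pipe encoding is explicit and independent of UTF-8 mode.
result = subprocess.run([sys.executable, "--version"],
                        capture_output=True, encoding="utf-8")
# ...or encoding="locale" to keep the pre-UTF-8-mode behavior.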

How to teach this

For new users, this change reduces the number of things that need to be taught. Users don’t need to learn about text encoding in their first year; they should learn it when they need to handle non-UTF-8 text files.

For existing users, see the Backward compatibility section.

10 Likes

We should add the new function immediately (3.11) and mention that here. That will give library authors more time to adapt.

I think we can also fix encoding="locale" immediately to use getencoding() and avoid UTF-8 mode.

Other than that, I think I’m happy enough with this to start promoting it around “regular” users and see what new concerns are raised.

1 Like

OK, I will add a schedule to each spec.

Thanks!

FWIW, I found an “Inside Java” article written by the JEP 400 author.

If the Python UTF-8 mode has issues, I agree with @steve.dower that they should be fixed right now. I’m not sure that these bugfixes need a PEP.

I’m fine with adding a new locale.getencoding() function which ignores the UTF-8 Mode.

But changing locale.getpreferredencoding(False) to ignore the UTF-8 Mode can cause mojibake in code that expects the Python 3.10 behavior, or whose authors didn’t know about the UTF-8 Mode when writing their application. PEP 540 changed locale.getpreferredencoding(False) so that existing applications (calling this function) switch automatically to UTF-8 if the UTF-8 Mode is enabled.

It’s unfortunate that the UTF-8 Mode… which ignores the locale on purpose… affects a function of the locale module :frowning:

This problem reminds me of the old time.clock() function, which had a different behavior (monotonic or not) depending on the platform. After long discussions, the function was deprecated and then removed, to be replaced with time.perf_counter() and time.monotonic(), which behave the same on all platforms.

I propose a similar migration plan:

  • Deprecate locale.getpreferredencoding() and locale.getpreferredencoding(False)
  • Add locale.getencoding() which ignores UTF-8 Mode
  • Add sys.getencoding() which is similar to locale.getpreferredencoding(False)

locale.getencoding() and sys.getencoding() should read the locale encoding at Python startup, and then always return the same encoding. Otherwise, we create another source of mojibake. “Changing the encoding” at runtime is always a bad idea. IMO using the same encoding everywhere (command line, filenames, environment variables, subprocess pipes, stdio) greatly reduces the risk of mojibake.

Deprecating locale.getpreferredencoding() can happen later, or maybe remain part of PEP 686.


The very few special functions which require the current LC_CTYPE locale encoding can call the existing locale.nl_langinfo(locale.CODESET) function. Example:

$ ./python -q
>>> import locale
>>> locale.nl_langinfo(locale.CODESET)
'UTF-8'
>>> locale.setlocale(locale.LC_CTYPE, "fr_FR")
'fr_FR'
>>> locale.nl_langinfo(locale.CODESET)
'ISO-8859-1'

Sadly, this function is not portable to Windows, where locale.getencoding() should be used instead to get the ANSI code page. But I expect that functions which care about the current LC_CTYPE encoding are really specific to Unix (e.g. ncurses).


If these APIs are fixed in Python 3.11, PEP 686 can be greatly simplified to just enabling the UTF-8 Mode by default in your favorite Python version :wink:

3 Likes

Strictly speaking, this is not an issue until this PEP is accepted.
If the SC rejects this PEP and we decide to change the default TextIOWrapper encoding without UTF-8 mode, the current behavior has no problem.
That’s why I included this issue in the PEP.

On the other hand, I think we can change the encoding="locale" behavior in Python 3.11 without waiting for this PEP to be accepted.
It is a very small behavior change, so almost zero users will be affected.

But we need locale.getencoding() to fix this issue in _pyio anyway.

For clarity, the previous version of this PEP proposed changing locale.getpreferredencoding(False) to ignore UTF-8 mode.
But the current version (described in this topic) doesn’t propose that:

  • Add locale.getencoding(), which is the same as locale.getpreferredencoding(False) except that it ignores UTF-8 mode.
  • locale.getpreferredencoding() emits EncodingWarning if warn_default_encoding is specified, so that users can see where UTF-8 mode affects their code.

I want encoding="locale" to behave the same as the current encoding=None as much as possible, to ease migration.
The current encoding=None and locale.getpreferredencoding(False) return the current locale encoding, so I proposed that locale.getencoding() return the current locale encoding too.

Many users might need to keep the current behavior until the next major version of their applications.
I really want to provide an API that is the same as locale.getpreferredencoding(False) but ignores UTF-8 mode.

On the other hand, I don’t think we need sys.getencoding().
We already have sys.getdefaultencoding() and sys.getfilesystemencoding(); sys.getencoding() would be too confusing.
If we need the “locale encoding at Python startup”, it can be sys.getlocaleencoding(), and it will ignore UTF-8 mode.

I don’t see why we’d need it, and certainly not anywhere so obvious as the sys module.

How do you get the encoding at startup in other languages? Presumably by reading the encoding before you change it, and the same can be done in Python. If someone is injecting an encoding change even earlier than that, it’s probably meant to override whatever you’re supposed to think the original encoding was… this game goes on for a long time :slight_smile:
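
A minimal sketch of that approach (the module name startup_encoding is hypothetical):

# startup_encoding.py -- import this early, before anything changes the locale.
import locale

# Captured once at import time; later changes don't affect it.
STARTUP_LOCALE_ENCODING = locale.getpreferredencoding(False)

Code that needs the original encoding later can import STARTUP_LOCALE_ENCODING instead of querying the locale again.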

Being able to read the current encoding from locale is good enough until proven otherwise, and I’d be willing to bet that whoever thinks they can prove otherwise is still probably wrong.

1 Like

For the record, R 4.2.0, released in April 2022, also adopted UTF-8 on Windows.

It seems they use the UTF-8 code page via the application manifest.

As far as I know, there is neither a backward compatibility option like PYTHONUTF8=0 nor a warning like EncodingWarning.
They just bit the bullet.

3 Likes

Hi,
As Thomas said in the other thread:

we would like the change to the default to be for Python 3.15 instead of 3.13. Unless you have strong objections to the longer timeline, please update the PEP accordingly, and consider it accepted.

The PEP was updated, so please consider it accepted. You can mark it as such at your convenience.

– Petr, on behalf of the SC

8 Likes

Brett Cannon said in the other thread:

this could lead to very odd data issues if you’re reading something in a different encoding that happens to be compatible enough with UTF-8

I know that ASCII-only UTF-{16,32}-{BE,LE} can be decoded as UTF-8, resulting in ASCII interspersed with null bytes. Any other examples of encodings which can accidentally be successfully decoded as UTF-8, leading to silent data corruption rather than a decoding error? Especially among the Windows code pages? Please share, I’m genuinely interested!

Also: PEP 538 mentions that enabling UTF-8 mode can lead to encoding inconsistencies between the Python core and locale-aware extensions. Can this mismatch have any negative impact on those using non-UTF-8 locales once UTF-8 mode becomes the default?

Of course.

>>> s = "abc"
>>> u16 = s.encode('utf-16-le')
>>> u16
b'a\x00b\x00c\x00'
>>> u16.decode('ascii')
'a\x00b\x00c\x00'
>>> u16.decode('cp932')
'a\x00b\x00c\x00'
>>> u16.decode('utf-8')
'a\x00b\x00c\x00'
>>> u16.decode('latin1')
'a\x00b\x00c\x00'

UTF-16 is used by the Windows APIs, but it is not the locale encoding.
Windows uses UTF-8 or a legacy encoding like cp932 or latin1 as the locale encoding.

It can. But using a legacy encoding by default has a bigger negative impact on Windows.

You can start trying UTF-8 mode from Python 3.11. (Python 3.7–3.10 also have UTF-8 mode, but there are limitations because they are before PEP 538.) So you have enough time to prepare for UTF-8 mode.

And after UTF-8 mode becomes the default, you can still disable it. If you find a problem with UTF-8 mode, you can just disable it.

PEP 538 is a breaking change. But it is not like the Python 2/3 problem.

1 Like

Sorry, I probably didn’t phrase my question quite clearly enough. These are all examples showing how UTF-16-LE-encoded bytes can be successfully decoded using a variety of encodings, which is not all that surprising in the case of single-byte fixed-width encodings, since for many of them, any bytes in any order are valid.

What I meant was: what are other examples of encodings I could pass to the .encode() call in place of UTF-16-LE, that would create a sequence of bytes that when decoded specifically using UTF-8 (not other encodings) yields corrupt data instead of triggering a decoding error?

Having slept on it, I now see that the answer is trivially many of them, certainly at least all of the single-byte ones. It’s just that real-world text in a legacy single-byte encoding using bytes beyond the ASCII range is unlikely to have all of those non-ASCII bytes placed in just such a way that also happens to be valid UTF-8.

Funnily enough, it occurs to me that a “genre” which is at high risk of data corruption in this context is English prose about mojibake :slight_smile:

>>> intended_text = """
... The tell-tale mark of UTF-8 decoded with a legacy encoding
... is accented letters turning into sequences of multiple
... characters, e.g. café.""".strip()
>>> print(intended_text.encode("latin1").decode("utf-8"))
The tell-tale mark of UTF-8 decoded with a legacy encoding
is accented letters turning into sequences of multiple
characters, e.g. café.
#                   ^ OOPS!

So to rephrase: any examples which often tend to be a problem in practice? (I.e. real-world text in that encoding often decodes as UTF-8 without errors and results in data corruption?) E.g. are there any single-byte non-ASCII-compatible encodings which make heavy use of the ASCII byte range?

So you have enough time to prepare UTF-8 mode.

I should clarify that I’m not personally affected; my platforms of choice are Linux and macOS, or WSL in a pinch. But I regularly teach text processing in Python to beginners, who often use Windows, so I try to keep abreast of the quirks and discrepancies between the platforms.

What I had in mind is the sort of thing mentioned in PEP 538 – broken history editing because readline expects a different encoding than Python. Will that be an issue if UTF-8 mode becomes the default?

Are we talking about the same PEP here? I thought PEP 538 was accepted for 3.7, not 3.11… At least that’s what the PEP header says?

I don’t know much about it.
Most text on the internet is already UTF-8, and Notepad and VSCode use UTF-8 by default.
So non-UTF-8 text is already rare even on Windows, except in legacy enterprise systems.

Then, please recommend UTF-8 mode to your students.
Python’s current behavior (using a legacy encoding by default) is a pitfall. PEP 686 solves many more encoding issues than it causes.

Sorry, it’s my mistake. s/PEP 538/PEP 686/.

2 Likes

I would not necessarily sign off on that last sentence :slight_smile: In the Latin-1 world I live in, we still have a lot of Latin-1 text around in text files and databases. Latin-1 can be decoded as UTF-8 as long as the text only uses ASCII characters (just like many other 8-bit encodings). You get an exception when the decoder hits a non-ASCII character, e.g. accented chars or the German “ß” – which is fine, since you then know that something isn’t quite right.
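
A small illustration of both cases:

>>> "just ASCII".encode("latin-1").decode("utf-8")  # ASCII-only: round-trips fine
'just ASCII'
>>> "Straße".encode("latin-1").decode("utf-8")      # non-ASCII: fails loudly
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdf in position 4: invalid continuation byte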

In practice, you often run into different problems though:

  • strings encoded as UTF-8 which are then read in again as Latin-1 by some other application, e.g. 'André'.encode('utf-8').decode('latin-1') ... 'AndrÃ©'
  • the same, only more than once, e.g. 'AndrÃƒÂ©'
  • often enough, the result of such conversions is then stored as UTF-8, making it hard to detect such errors in data
  • on Windows, you can also get variants of the above with cp1252: 'außen'.encode('utf-8').decode('latin-1') ... 'auÃ\x9fen' (the \x9f is an invisible control character) vs. 'außen'.encode('utf-8').decode('cp1252') ... 'auÃŸen'
  • mixes of the two encodings latin-1 and cp1252; and again, the results stored as UTF-8

Even though cp1252 is similar to Latin-1, there are differences.

Things are getting better with the adoption of UTF-8, but there’s still a lot of code out there using native encodings and even more data stored using native encodings in databases. As a result, names in addresses using non-ASCII letters are often misspelled.

As for adoption and use in webpages, see:

All that said, making UTF-8 the default will allow more of the above problems to show up via exceptions, so that things can get corrected instead of being silently ignored and stored.

1 Like

Yes, please do this @dlukes.

One of my conditions (as much as I can apply it) is that UTF-8 mode be evangelised widely before the default is changed. We need people to try their use cases with the option turned on, test their configuration files and user-specified inputs without/with the option, and develop migration strategies.

I admit I haven’t been paying attention to the places where it may come up, especially since I haven’t bothered switching social media networks recently, but there are a few relevant issues on GitHub that suggest people are at least aware of it. Hopefully they’ll be resolved before we consider flipping the switch.

3 Likes

Unfortunately we can’t rely on “unlikely”. :wink: People do “interesting” things all the time and so we can’t make assumptions about what is unlikely here. Since we are talking about potential data corruption we just can’t risk it.

1 Like

True :slight_smile: I wanted to look at some data to back it up, but kids/health/work/the holidays… And as your post implies, testing on tons of well-behaved examples gives exactly zero coverage of the funky use cases that might be out there.

In general, I understand why it’s reasonable for a language as popular and widely used as Python not to rush a transition like this. Better minimize any backlash, for everyone’s sake.

This made me think though – in that case, shouldn’t the default input encoding maybe be utf-8-sig, rather than plain utf-8? I know the UTF-8 BOM is discouraged, and even Windows Notepad stopped adding it by default a few years ago AFAICS, so it’s probably (hopefully) not super common. Which just makes it easier to forget to strip it when it does turn up, and then:

>>> text = "cats and dogs".encode("utf-8-sig").decode("utf-8")
>>> print(text)
cats and dogs
>>> text.startswith("cat")
False

Dunno. Maybe I’m being too paranoid, but other than that, I can’t really see a downside? As with not rushing the transition – better safe than sorry.

1 Like

utf-8-sig doesn’t solve silent data corruption because it also accepts UTF-8 without a BOM.

>>> s="こんにちは".encode('utf-8').decode('latin1')
>>> s  # artificial "correct" string.
'ã\x81\x93ã\x82\x93ã\x81«ã\x81¡ã\x81¯'

>>> b = s.encode('latin1')
>>> b  # artificial "correct" latin1 string.
b'\xe3\x81\x93\xe3\x82\x93\xe3\x81\xab\xe3\x81\xa1\xe3\x81\xaf'
>>> b.decode('utf-8') == s  # silent data corruption
False
>>> b.decode('utf-8-sig') == s  # silent data corruption
False

A UTF-8 BOM helped some MS applications choose the encoding: when a text file starts with a BOM, it is UTF-8(-sig); otherwise, it is a legacy encoding.

It worked well on Windows when UTF-8 without a BOM was a minority there.
But it doesn’t work well anymore: Microsoft itself now uses UTF-8 without a BOM by default.

I didn’t mean that it solves silent data corruption of legacy-encoded text files; I meant that it solves silent data corruption of UTF-8-with-BOM-encoded text files (by correctly stripping the BOM instead of leaving it around). As I said, I understand these are in a (hopefully ever-shrinking) minority, but since it doesn’t have any drawbacks for plain UTF-8 without a BOM, why not use it?
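
For reference, how the two codecs handle a BOM:

>>> b"\xef\xbb\xbfcats".decode("utf-8-sig")  # BOM stripped
'cats'
>>> b"\xef\xbb\xbfcats".decode("utf-8")      # BOM kept as U+FEFF
'\ufeffcats'
>>> b"cats".decode("utf-8-sig")              # no BOM: same result as utf-8
'cats'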

I don’t want to call it “corruption”, because no data is lost and the user can strip the BOM after decoding.

There are drawbacks:

  • utf-8-sig is much slower than utf-8, because it cannot use many of the optimizations available for utf-8.

    $ python3 -m timeit -- 'b"abc".decode("utf-8")'
    5000000 loops, best of 5: 42.1 nsec per loop
    $ python3 -m timeit -- 'b"abc".decode("utf-8-sig")'
    1000000 loops, best of 5: 355 nsec per loop
    
  • Many classes are designed to use a single encoding for reading and writing.
    Supporting “utf-8 for write, utf-8-sig for read” increases complexity, will cause bugs, and requires writing a lot of test cases.