Does anyone use `-O` or `-OO` or `PYTHONOPTIMIZE=1` in their deployments?

I expect the answer to the question in the subject for most is “No”… In that case, you are not who I’m interested in hearing from.

It’d be good to hear from CPython users who actually use -O, -OO, or PYTHONOPTIMIZE=1 to have Python elide assert statements (performance?) or (-OO) omit docstrings from bytecode (memory savings?).

Some idea of:

  1. Why do you do it?
  2. Whether and how you measure the benefit of doing so, and how often you re-measure?
  3. Similarly, if you own PyPI packages, do you routinely test your packages in -O and -OO modes? If so, why?

would all be interesting to hear.

I’m asking because if there aren’t real-world practical uses (not theoretical, we can all make those up), it’d be simpler, code-maintenance-wise, for us to drop support for these doing anything (make them no-ops) in some later CPython release.

These flags usually aren’t perceived as having much benefit today, compared to the way the CPython code worked back in the 90s when they were added. I’m wondering how accurate that perception is among users.
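For anyone unfamiliar with what these flags actually change: a minimal sketch, using the `optimize` argument of the built-in `compile()` (which mirrors the command-line levels: 0 = default, 1 = `-O`, 2 = `-OO`) to show that `-O` removes assert statements from the bytecode entirely rather than just skipping them.

```python
import dis

# Example source with an assert; the snippet itself is a made-up demo.
src = "assert x > 0, 'x must be positive'\ny = x * 2\n"

# Compile the same source at the default level and at the -O level.
default_ops = [i.opname for i in dis.get_instructions(
    compile(src, "<demo>", "exec", optimize=0))]
o_ops = [i.opname for i in dis.get_instructions(
    compile(src, "<demo>", "exec", optimize=1))]

# The assert's raise is present by default and gone under optimize=1.
print("RAISE_VARARGS" in default_ops)  # True
print("RAISE_VARARGS" in o_ops)        # False
```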


We had to add a custom codebase test to verify that importing every module works with -OO, because we had botched a release once: a module naively tried to string-format a dynamic docstring, which of course blew up in -OO mode.
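A hypothetical reproduction of that failure mode (not the actual module): `compile()` with `optimize=2` mimics -OO, where the docstring is stripped, `__doc__` stays `None`, and the format call blows up at import time.

```python
# Made-up module body that formats its own docstring at import time.
body = '"""Supports version {}."""\n__doc__ = __doc__.format("1.0")\n'

# Normal run: the docstring survives and formatting works.
ns = {"__doc__": None}  # a fresh module namespace starts with __doc__ = None
exec(compile(body, "<mod>", "exec", optimize=0), ns)
formatted = ns["__doc__"]
print(formatted)  # Supports version 1.0.

# -OO equivalent: the docstring is stripped, so .format() fails.
ns = {"__doc__": None}
try:
    exec(compile(body, "<mod>", "exec", optimize=2), ns)
    caught = None
except AttributeError as exc:
    caught = type(exc).__name__
print(caught)  # AttributeError
```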

I guess the fact that a user reported the problem points to some folks somewhere using these modes, but I’d be happy for them to go away, personally.


There were some users who piped up in this[1] thread a few months ago. In particular this post from @carljm. I guess stripping docstrings saves a lot of memory.

  1. surprisingly contentious ↩︎


Yes, I’ve used -OO / PYTHONOPTIMIZE=2 in the past to save on size when building executables with PyInstaller.

I typically found I was able to save ~5% of the executable size on a given project.

I did not re-measure for the same project more than once, because it was essentially a free saving on bandwidth and disk size, with no impact on the client experience of the app.


Yes to -O by default, no to -OO with a couple exceptions.

Why: performance costs.

Mostly removing assertions; tests also run with -O.

I maintain typed code bases where assertions are used to inform the type checker of a type, without a runtime cost, in places where the type checker either can’t narrow (due to limitations of the type system) or where the means for it to do so would be more expensive. This pattern is only used where we invariantly know the assertion would pass, with no misuse of assertions for exceptions, and frequently with dependent types.

As for measurements, measurements prompted it, not the other way around.

assert isinstance(...) was showing up in profiling due to its presence in a tight loop. Given that -O only sets __debug__ to False and elides assertions, we’ve made no effort to continually measure for gains.

Removing -O would be seen as detrimental, unless it was removed because its behavior became the default, especially while this remains the only way to handle this case with static typing. That hasn’t really happened, and attempts at it were met with people saying “well, you shouldn’t need that” or “people don’t really use typing.cast” (see: Cast syntax for static typing), which at the time I didn’t find worth getting into, as a workable solution already exists with -O. We don’t use typing.cast because it has a runtime cost; assert isinstance doesn’t with -O. And since the code isn’t public, it isn’t going to show up in a code search on GitHub, but I’m sure other people have similar cases where they find typing.cast inferior to assert isinstance.
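A hedged sketch of the trade-off being described (function names here are made up for illustration): both versions narrow `object` to `int` for a static type checker, but the `assert isinstance` check is elided entirely under -O, while the `typing.cast()` call survives -O and keeps its per-call cost.

```python
from typing import cast

def handle(value: object) -> int:
    # We invariantly know value is an int here; the assert exists to
    # inform the type checker and costs nothing when run with -O.
    assert isinstance(value, int)
    return value + 1

def handle_with_cast(value: object) -> int:
    # Equivalent static narrowing via cast(); this call is NOT
    # removed by -O, so it still executes in a tight loop.
    return cast(int, value) + 1

print(handle(41))            # 42
print(handle_with_cast(41))  # 42
```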

-OO is used in anything deployed to IoT for memory use reasons. In the cases I currently need to care about, we could probably drop one of the O’s there without it becoming problematic, but I don’t think this is true of all IoT uses of python.


I attempt to use it everywhere I have a long-running service, though with some tooling’s defaults it’s not as useful as it could be in a couple of code bases I work on: some linters default to warning on asserts, so they never end up in the code.

I’ve never really measured the impact, but I am more confident sprinkling asserts everywhere I think a debug fail-early path would be useful. I also noticed asserts being used quite a bit when looking things up in the source code of some modules (datetime comes to mind from the stdlib, though that doesn’t seem to use them too much), so skipping those seemed like a good idea.

A while back, while doing things with PyInstaller, I also saw a noticeable size decrease with -OO stripping all docstrings; ultimately still a mostly meaningless decrease, but definitely noticeable.


Yes, we test Pillow with both PYTHONOPTIMIZE=1 and PYTHONOPTIMIZE=2 enabled for a couple of arbitrary CI jobs.

Added in 2018 after introducing __doc__.format(__version__), which caused AttributeError: 'NoneType' object has no attribute 'format' under those flags.

This affected many projects that were using these flags: SciPy, Bokeh, kivy/python-for-android, ReportLab.


I regularly use -O (strip debug code) for production application code to gain some extra performance, and -OO (additionally strip doc-strings) for cases where I also need to reduce the shipped bytecode file sizes.

E.g. the compressed eGenix PyRun size went down by some 8% using -OO, the last time I measured this. For PyRun, I also intend to use PYTHONNODEBUGRANGES with Python 3.11+ once I have it ported to 3.11 and later.

With -O you have to be careful with some code, since there are still Python programmers who use assert where they should be using if-statements.

Using -OO works well, except for some packages which use doc-strings to keep extra information. Some older parser generators had the grammar bits in method doc-strings. A few CLI tools use the module doc-string for printing out a help screen. Of course, interactive help also doesn’t work, but for production apps, this is hardly ever needed.

Both cases can be addressed, of course, by simply not using the options.

BTW: It would be useful to have more flexibility with these options, to be able to switch off certain things individually: with debug switching off debug statements, docs removing doc-strings, errors removing detailed line error information, etc.

I’m sure the upcoming JIT will provide more ways to optimize things, so the list could be extended.


As an aside: is there a way to configure a CLI tool to run with -O or -OO? It can be done manually with python -O -m [tool] or with the environment variable, but I couldn’t find a way for the tool itself to specify this.


Do you ever actually measure this supposed performance gain?

What kind of application and in what code are the pile of asserts that lead to measurably lower performance?



BTW thanks everyone, these are useful responses!

We use

`python -OO -m compileall -b`

within a multi-stage container image build to minimize the size of our final container image.

In our case we want to strip out docstrings and comments specifically (we only copy the .pyc files into the final image).
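The same step can be driven from Python instead of the CLI, which is handy for testing the build logic: `compileall.compile_dir()`'s `optimize=2` matches -OO, and `legacy=True` matches -b (writing `mod.pyc` next to `mod.py` instead of under `__pycache__`). A small sketch with throwaway paths:

```python
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a dummy module to compile.
    src = pathlib.Path(tmp) / "mod.py"
    src.write_text('"""A docstring that -OO strips."""\nX = 1\n')

    # Equivalent of: python -OO -m compileall -b <tmp>
    compileall.compile_dir(tmp, optimize=2, legacy=True, quiet=1)

    # The legacy-layout .pyc sits next to the source, so the .py
    # files can be left out of the final image.
    compiled = (pathlib.Path(tmp) / "mod.pyc").exists()

print(compiled)  # True
```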


It all depends on how the tool is implemented. If it’s relying on a shebang line you can add it there, otherwise I don’t think it’s a dynamic flag you can flip on mid-execution.


Makes sense. I was thinking in particular of entry points defined in a pyproject file.

We use -O mainly to remove code under if __debug__ in the production environment.


We don’t use asserts that much, but do put debug code into if __debug__: clauses, to have this removed from production code.

Such debug code often processes the current context in various ways to make it more suitable for debugging and logging purposes, e.g. dumping raw data as JSON or creating stack traces. If you leave such code in heavily used functions or methods, things slow down significantly.

We don’t often measure such effects, since during debugging, we don’t really care that much about performance. Making sure that functionality works is much more relevant.

And it’s good to know that those code paths can easily be removed later on from production code, without actually changing the code base, by simply using a command line switch. This moves the decision to run with full debug information to devops.

The alternative would be to either comment out such debug code (changing the code base and making it inaccessible to devops) or to add extra if debug_level > 1: checks throughout the code, which cost less, but still require global lookups and comparisons. In tighter loops, this can have an effect on performance. Plus we’d have to add extra support to expose these variables to devops, e.g. via os.environ.

The -O solution is much better in this respect.
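A minimal sketch of that pattern (function and data are made up): CPython treats `__debug__` as a compile-time constant, so under -O the whole block is removed from the bytecode, not just skipped at runtime.

```python
import json

def process(record: dict) -> int:
    if __debug__:
        # Debug-only context dump; compiled away entirely with -O,
        # so heavily-called code paths pay nothing in production.
        print("processing:", json.dumps(record, sort_keys=True))
    return len(record)

print(process({"a": 1, "b": 2}))  # 2
```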


You can add the options to the shebang line of the script:

#!/usr/bin/env -S python3 -O

I’m not aware of a sys module way to set the optimization flag dynamically, but you can dynamically compile and run code in optimized mode using compile().
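A sketch of that `compile()` approach (the source string is a made-up example): the `optimize` argument overrides the interpreter-wide -O level for that single compilation, so optimized code can be produced at runtime even in an unoptimized interpreter.

```python
src = "assert False, 'removed when optimized'\nresult = 1\n"

# optimize=1 behaves like -O for this compilation: the assert is elided.
ns = {}
exec(compile(src, "<opt>", "exec", optimize=1), ns)
print(ns["result"])  # 1

# optimize=0 keeps the assert, so the same source raises.
try:
    exec(compile(src, "<opt>", "exec", optimize=0), {})
    raised = False
except AssertionError:
    raised = True
print(raised)  # True
```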

We did add an API to eGenix PyRun to set the flag dynamically, since we needed it in order to be able to configure the interpreter using Python. For Python 3.11+ things changed a lot in this respect and we’ll have to figure out a different way to do this, since the Python C runtime globals no longer work dynamically (they are only read during startup and then managed inside a PyConfig struct).

Covering 4 Python engineering positions I have held:

At 3/4, never used -O in any circumstance.

At 1/4, five years ago, we effectively used -O via PyInstaller, most likely to help obfuscate the underlying source code, on a mature product (the py2->py3 transition was being done at that time); performance tests did not show any real difference; we had to keep that build because one senior engineer insisted on security through obscurity.

Same company, another/new product/code base: switched to Docker, no -O.
