Let’s take a step back here.
Whenever you do anything with Python source code, it’s first compiled into bytecode. You can play around with that interactively:
>>> compile("""print("Hello, world!")""", "-", "exec").co_code
b'\x97\x00\x02\x00e\x00d\x00\xab\x01\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00y\x01'
Okay, that’s not very readable, is it? Let’s ask Python to take that bytecode and disassemble it.
>>> import dis
>>> dis.dis(compile("""print("Hello, world!")""", "-", "exec"))
  0           0 RESUME                   0

  1           2 PUSH_NULL
              4 LOAD_NAME                0 (print)
              6 LOAD_CONST               0 ('Hello, world!')
              8 CALL                     1
             18 POP_TOP
             20 RETURN_CONST             1 (None)
>>>
(You can also call dis.dis() with the source code itself, and it’ll compile and then disassemble.) This shows what Python is actually doing. It’ll vary a bit from one version to another, but broadly speaking, you should be able to see that it’s looking up “print”, loading the string literal, and calling that.
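For instance, here’s a small non-interactive version of that, capturing the listing into a string so it can be inspected (the use of io.StringIO is just for illustration):

```python
import dis
import io

# dis.dis() accepts source code directly: it compiles and then disassembles.
buf = io.StringIO()
dis.dis("print('Hello, world!')", file=buf)
listing = buf.getvalue()

# At module level, the lookup of "print" shows up as a LOAD_NAME instruction.
print("LOAD_NAME" in listing)
```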
The reason for the .pyc files is that this can be a bit of work. Not a HUGE amount of work, but it’s some. So once it’s been done once, Python dumps that out into a file, making it quicker next time. The file itself isn’t particularly significant, it’s just a cache of what the interpreter has built.
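If you’re curious where that cache lives, importlib will tell you without writing anything (the module name spam.py here is made up for the example):

```python
import importlib.util

# Where Python WOULD cache the compiled bytecode for this source file.
# The path encodes the interpreter version, e.g. __pycache__/spam.cpython-312.pyc,
# so different Python versions don't clobber each other's caches.
path = importlib.util.cache_from_source("spam.py")
print(path)
```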
And that brings us to the read-only file system problem. Well, actually, not much of a problem! What you were wondering is correct: the behaviour is basically the same as running with PYTHONDONTWRITEBYTECODE set. There’s no on-disk .pyc file, but there is still the in-memory compiled code.
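You can also request that behaviour from within Python itself; this flag mirrors the -B command-line option and the PYTHONDONTWRITEBYTECODE environment variable:

```python
import sys

# With this set, imports still compile modules to bytecode in memory,
# but Python never attempts to write .pyc files to disk.
sys.dont_write_bytecode = True
print(sys.dont_write_bytecode)
```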
Great question, and very hard to answer in general. Skipping the .pyc cache will slow down module imports, but that’s all. So for a long-running program (e.g. a web app), there won’t be much impact, since the imports all happen once at startup and then that’s it; but for a quick script, where you’re dominated by startup and shutdown time, having those .pyc files can significantly reduce the overhead. You would have to measure for yourself.
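If you want a rough feel for what the cache actually saves, you can compare compiling from source against deserialising already-compiled bytecode, which is essentially what a .pyc file holds (the toy source string and iteration count here are arbitrary, and real numbers will vary by machine and module size):

```python
import marshal
import timeit

source = "def greet():\n    return 'Hello, world!'\n"
code = compile(source, "-", "exec")
cached = marshal.dumps(code)  # roughly what a .pyc stores, minus the header

# Compiling from scratch vs. loading the serialized code object:
# the difference is the per-import work a .pyc file lets you skip.
compile_time = timeit.timeit(lambda: compile(source, "-", "exec"), number=10_000)
load_time = timeit.timeit(lambda: marshal.loads(cached), number=10_000)
print(f"compile: {compile_time:.4f}s  marshal.loads: {load_time:.4f}s")
```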
Hope that’s enough info to make a reasoned decision!