“Bottomless pit”
I just played with seeing how bad things could get, on 64-bit Windows, under the released 3.14.4.
Very simple program: a deque with maxlen 1000. Each iteration pushes a new self-referential class instance, with a payload of 1MB. So the deque throws away the oldest to make room for the newest, and after 1000 iterations reaches a steady state of 1GB of reachable payloads.
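For concreteness, a minimal sketch of the kind of loop I mean (the class and names here are just illustrative, not the exact script I ran):

```python
import collections
import psutil

class Node:
    def __init__(self):
        self.me = self                 # self-referential, so only the cyclic gc can reclaim it
        self.payload = bytes(2**20)    # ~1MB payload per instance

d = collections.deque(maxlen=1000)     # keeps only the newest 1000 nodes
proc = psutil.Process()

for i in range(1, 150_001):
    d.append(Node())                   # once full, the oldest node is silently evicted
    if i % 10_000 == 0:
        print(i, proc.memory_info().rss)   # watch rss climb (and sometimes fall)
```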
I’m not sure what happens accepting all defaults. After 150_000 iterations, psutil’s idea of “rss” was still reaching new highs, about 15.6GB when I stopped. But it doesn’t just climb, it falls too. I’m guessing the OS is flushing RAM to the swap file (it’s a 16GB box). But I’m not looking for details here: just “how bad could things get?”. Pretty bad.
BTW, at iteration 1000, rss was about 1GB, almost wholly accounted for by reachable payloads. No mystery there!
And if I do gc.collect() at the end of each iteration, it stays at about 1GB “forever”.
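That variant is just the same loop, continuing the sketch above, with a full collection forced each time through:

```python
import gc

for i in range(1_000_000):
    d.append(Node())
    gc.collect()    # full collection every iteration; rss stays near 1GB
```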
Setting threshold0 to 100 (the default is 2000) instead made a difference, but not much: still reached new highs at 150 thousand iterations, but the peak rss was “only” 13.5GB, at least through 50 thousand iterations.
Setting threshold0 to 1 made it “almost sane, kinda”: it reached a peak rss of 2.4GB on iteration 56_273, and it stayed very close to that across the next million iterations.
At threshold0=2, through half a million iterations peak rss didn’t break 3GB, but rss was still reaching new highs (both falling and rising along the way).
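In case anyone wants to poke at the same knobs, the threshold experiments above were nothing fancier than (sketch):

```python
import gc

print(gc.get_threshold())   # first value is threshold0 (2000 by default on this build)
gc.set_threshold(100)       # or 1, or 2 - the settings tried above
```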
pymalloc isn’t relevant here: the class instances are small (it’s their payload attributes that are large), and it’s neither allocating nor releasing arenas.
So it’s a combination of how the incremental gc is working, and the pragmatics of how Microsoft’s malloc() family behaves under this kind of load.
How can all this be “fixed”? Don’t know - I don’t even have the start of a mental model that fits all the observed behaviors. So it’s good that this contrived program is utterly unrealistic.