I think you are missing Eric’s point here.
Nobody denies that we can make some performance improvement, even if
only a small one, by moving the operation into C instead of pure Python.
The question is, when does it matter? Under what circumstances is the
operation this proposal covers a bottleneck in your code?
If it takes 500 microseconds (say) to sort your dict in pure Python,
using the obvious method of composing the items(), sorted() and dict()
functions, and you can save 2 microseconds (say) by moving the code
into C, under what circumstances does that 2 microseconds matter?
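For reference, the pure-Python method in question is just the composition of those three builtins; `d` here is a hypothetical example dict:

```python
# Sort a dict by key using only builtins: items() -> sorted() -> dict().
# Since Python 3.7, dict preserves insertion order, so the result
# iterates in sorted key order.
d = {"banana": 3, "apple": 1, "cherry": 2}
sorted_d = dict(sorted(d.items()))
print(sorted_d)  # {'apple': 1, 'banana': 3, 'cherry': 2}
```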
Obviously I made up those numbers, and they will depend on the size of
the dict and the speed of your CPU. But the point still stands.
It isn’t that performance doesn’t matter at all. But we care more about
speeding up basic functionality than about moving arbitrary combinations
of operations into C just for the sake of making it a bit faster.
It seems that not very many people care about sorting dict items. And
the performance benefit may not be very much. It probably won’t be a
bottleneck in your code. If you really care about this issue, to the
point that you are willing to write a PEP, you should:
1. find evidence that many other people care about sorting dicts;
2. show evidence that applications spend a significant amount of
   time sorting dicts (the profile module will be your friend here);
   and
3. demonstrate that it is plausible that moving these operations
   into C will speed that up by a non-trivial amount.
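As a sketch of the second step, timeit (or cProfile on a real application) can measure what the sort actually costs; `big` here is a made-up workload, not anyone's real data:

```python
import timeit

# Hypothetical workload: a 10,000-key dict built in reverse key order.
big = {str(i): i for i in range(10_000, 0, -1)}

# Time the full pure-Python pipeline: items() -> sorted() -> dict().
elapsed = timeit.timeit(lambda: dict(sorted(big.items())), number=100)
print(f"100 sorts of a {len(big)}-key dict: {elapsed:.3f}s")
```

If numbers like these are a vanishing fraction of your program's runtime, the C version cannot buy you much.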
If you know C, possibly the best way to do the third one is to actually
implement it yourself and demonstrate a speed up.
Remember that sorting is an O(N log N) operation, while copying items
and constructing a new dict is O(N), so for large dicts, the time will
be dominated by the sorting, not the copying.
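That asymptotic claim is easy to check empirically: time the O(N) copy and the O(N log N) sort separately on a large dict. The shuffled keys below are a hypothetical workload, chosen so the sort does real work (Timsort would shortcut an already-ordered or reversed dict):

```python
import random
import timeit

# Hypothetical large dict with shuffled keys, so sorting can't take shortcuts.
keys = list(range(100_000))
random.shuffle(keys)
d = {k: k for k in keys}

copy_time = timeit.timeit(lambda: dict(d.items()), number=20)    # O(N) copy only
sort_time = timeit.timeit(lambda: sorted(d.items()), number=20)  # O(N log N) sort only
print(f"copy: {copy_time:.3f}s  sort: {sort_time:.3f}s")
```

On large inputs you would expect the sort to account for most of the combined time, which is the point: a C rewrite can only shave the O(N) glue, not the sort itself, which sorted() already does in C.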