Yes, I believe this behavior is normal. Only Python dicts can be translated directly to JSON objects by the json module; see the conversion table in the json.JSONDecoder() docs. I'm not sure whether there are any technical limitations or reasons why Mapping and MutableMapping aren't supported, though.
This might be worth opening an issue on bugs.python.org (check for duplicates first, though). If you do, be sure to add the currently active core developers who maintain the module, "rhettinger" and "ezio.melotti", to the nosy list.
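To illustrate the behavior being discussed, here is a small sketch using collections.UserDict, which is a MutableMapping but not a dict subclass, so json rejects it until it is converted:

```python
import json
from collections import UserDict

# UserDict is a MutableMapping but not a dict subclass,
# so json.dumps refuses it unless it is converted first.
ud = UserDict({"a": 1})

try:
    json.dumps(ud)
except TypeError as exc:
    print(exc)  # Object of type UserDict is not JSON serializable

print(json.dumps(dict(ud)))  # {"a": 1}
```

The dict(ud) conversion is exactly the workaround discussed later in this thread.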
And I said that in 99% of custom Mappings, __iter__() returns iter(self._dict), so PyDict_Iter could be used anyway. It just has to check that tp_iter has not been changed, as PyDict_Merge does.
And if tp_iter is changed, PyDict_Merge falls back to a slower method, but one implemented in C, not in Python.
Anyway, I repeat: dict.update(Mapping) works even if tp_iter is changed. We don't have to write dict.update(dict(Mapping)). And furthermore, I'm not sure that would be faster…
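This is easy to verify with a minimal custom Mapping (FrozenMap here is just a hypothetical example class, not anything from the stdlib):

```python
from collections.abc import Mapping

class FrozenMap(Mapping):
    """A minimal custom Mapping (hypothetical example class)."""
    def __init__(self, data):
        self._dict = dict(data)
    def __getitem__(self, key):
        return self._dict[key]
    def __iter__(self):
        return iter(self._dict)
    def __len__(self):
        return len(self._dict)

d = {"x": 1}
d.update(FrozenMap({"y": 2}))  # no dict() conversion required
print(d)  # {'x': 1, 'y': 2}
```

dict.update happily consumes any Mapping via its keys() method, which is the asymmetry with json.dumps that this thread is about.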
Furthermore, there are third-party implementations of JSON (de)serialization that are much faster, such as orjson. So it does not seem to me that speed is a priority for json.
I still don't understand… obviously I can't test your example, since Mapping is not supported by json… but are you really saying that custom (de)serialization can't be done for data types that json supports directly? That seems very strange to me.
No, PyDict_Merge cannot use PyDict_Iter if other is not a subclass of dict.
Remember, you cited performance as the reason for supporting Mapping in your first comment.
Saying "anyway, I repeat…" and then "speed is not a priority" does not make sense at all.
OK, let's stop talking about performance. It cannot be a reason for supporting Mapping.
You can use the default option to customize the serialization of types that are not supported by json. Read the manual; there are plenty of examples.
You cannot customize the serialization of types json supports natively (e.g. str, int, list, dict) with default, because the default callback is not called for them.
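A quick demonstration of this point: the default hook only fires for types the encoder does not handle natively.

```python
import json

calls = []

def fallback(obj):
    # Only reached for types the encoder does not handle natively.
    calls.append(type(obj).__name__)
    return str(obj)

json.dumps({"n": 1, "s": "x"}, default=fallback)
print(calls)  # [] -- never called for natively supported types

json.dumps({"c": 1 + 2j}, default=fallback)
print(calls)  # ['complex']
```

So default can add support for new types, but it cannot override how str, int, list, or dict are encoded.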
And this is IMHO wrong; but, as I said, even if the fast method cannot be used, the slow method is used instead, without forcing the coder to convert the object to a dict first.
Performance is not secondary, but the first reason is elegance and practicality. If json also supported collections.abc subclasses, it would be more elegant and practical to write json.dumps(mapping) instead of json.dumps(dict(mapping)). And it could be as fast as json.dumps(dict) if json simply checked whether the object is a subclass of Mapping and its tp_iter is equal to dict_iter.
So your proposal is that json should no longer support any other type natively, I suppose.
@Marco_Sulla please hold on and take some time to meditate on @methane's answers.
He is completely correct; I share his opinion 100%.
If the meditation is not enough, I suggest making a patch with an implementation of your proposal. I expect you'll see a lot of failing tests, but it could be an excellent exercise.
After getting the tests green, you can bring up your proposal again if you still want to get it done.
OK, you're right. He is completely correct, so please remove support for dict.update(Mapping) and any other function or method that directly supports Mapping, MutableMapping, Sequence, or any other collections.abc subclass without first converting it to the corresponding builtin type, because they are not consistent with the json behavior.
Please don't ask us, the volunteers, to do this work.
Please don't hesitate to make a prototype to demonstrate your awesome idea instead; a working example helps enormously in defending an idea.
A proven proposal weighs much more than an abstract random thought, doesn't it?
Anyway, as I have already said many times, it does not matter, since other functions in CPython, like PyDict_Merge(), perform a slow but generic iteration to get the keys and values of a generic mapping. json could do the same without any problem.
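The generic iteration being referred to can be sketched in Python roughly like this. This is a simplification for illustration, not the actual PyDict_Merge code, and merge_generic is a hypothetical helper name:

```python
from collections import UserDict

def merge_generic(target, other):
    # Rough Python sketch of a generic merge: iterate keys() and
    # look up each value, instead of walking dict internals directly.
    for key in other.keys():
        target[key] = other[key]

d = {"a": 1}
merge_generic(d, UserDict({"b": 2}))  # works for any Mapping
print(d)  # {'a': 1, 'b': 2}
```

The point of the thread is that json's encoder could, in principle, take the same kind of generic fallback path for non-dict Mappings.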