The solution you suggest does not work well for sharing objects between libraries; I give an example here:
Python packaging works really well for the use case of allowing third-party libraries (like numpy) to provide a foundational basis and be the lingua franca for other third-party libraries; installing multiple versions of the same library would break that.
Classical “nay-saying”, and not pertinent at all. In my suggestion, the default mechanism is unchanged; what I suggest is an overloading mechanism. I do agree that with such mechanisms you can always shoot yourself in the foot, but at least you have full control, and all the dependency conflicts can be resolved if you assume that any released version had a working dependency set. My solution removes the blocking points of the current one. If you do something nonsensical and use a set of dependencies that cannot possibly work, then it doesn’t work; but that is tautological. The only real drawback of giving this full power to the user is that it may be abused by crackers to do whatever they want once they have set foot on someone else’s PC. But usually they can do whatever they want with or without this option. It’s just one more thing to know about your system.
Sorry for the harsh tone, but davidism moved my topic from Ideas to Python Help, which is inappropriate and looks like censorship/downgrading dressed up as good-looking noob management.
To give a full answer to your remark on data structures: your code is your responsibility. When you use library A and library B and their data structures are not compatible, you write more or less boilerplate code to adapt the data structures so that they communicate through your code. It is exactly the same problem if you have to make the data structures of A 1.1.1 communicate with those of A 2.2.2: you just write the same kind of boilerplate code. It may be inefficient, but it works. In Python, it may be as simple as looping over the elements of the “in” structure to build a new “out” structure. Once everything on the levels below is dealt with, you do the required “interfacing” work (not to be confused with the restricted meaning of an interface in OOP).
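To make the “interfacing” idea concrete, here is a minimal sketch. All names are invented for illustration: suppose A 1.1.1 produced points as tuples while A 2.2.2 expects dicts; the boilerplate is just a conversion loop in your own code.

```python
# Hypothetical shapes: A 1.1.1 returns (x, y) tuples, A 2.2.2 consumes dicts.
def old_points_to_new(points_v1):
    """Convert A 1.1.1-style (x, y) tuples into A 2.2.2-style dicts."""
    return [{"x": x, "y": y} for x, y in points_v1]

legacy = [(0, 1), (2, 3)]           # shape produced by A 1.1.1
modern = old_points_to_new(legacy)  # shape consumed by A 2.2.2
print(modern)  # [{'x': 0, 'y': 1}, {'x': 2, 'y': 3}]
```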
And nested imports already exist in Python. The code behind them, in Python itself, would have to be modified, but the result would be quite close to what we already have.
Go ahead, implement this. Nothing is stopping you from writing a custom loader for your personal use.
Reminder:
requesting that projects change the way their code works is not going to fly
telling users how they should structure their code is not going to fly
If you want this to be a true solution, it needs to work without any performance penalty (ideally without any memory penalty either) and without any code changes except maybe invoking your tool.
You are going to notice that it doesn’t work for anything but toy projects. But if you don’t believe others’ claims in this regard, based on years of experience and previous discussions, then the only way you are going to convince yourself is to try it.
And if against all odds you manage to create a perfectly functional system, great! That would be something worth considering as an idea. A half-baked rant that retreads all the same problems and ideas that have been mentioned dozens of times before is not.
That libraries with conflicting requirements should get their own copies of the offending dependencies is the obvious bit. Nobody is too dumb to figure that one out. The question that nobody has been able to come up with a remotely non-naive answer to is how.
It is ambiguous when you say that it is not going to fly. It may mean “You can always ask, but nobody will make the change”, which is probable, whereas the claim that it would not work is false.
As for users having to structure their code, that only applies if they need to handle such interfacing problems. So nothing is taken away from users; they don’t lose anything. They just gain new solutions: for all dependency conflicts in the levels below, they have a simple solution, and for dependency conflicts in their own code, they have a way to handle them through boilerplate code. It brings solutions to the table; previously they had none other than looking for other packages as dependencies or recoding part of a dependency. Nothing is perfect, nothing is free of some burden, and nothing justifies that on the Internet the majority of people who react are nay-sayers.
A solution with a performance penalty *noticeable only when it is used* is always better than no solution. Your argument is a classical fake argument of nay-sayers, and it doesn’t follow that they always write very optimized code themselves. Moreover, when the Python interpreter resolves an import, it must load the function tokens and their addresses to use them afterwards; resolving two imports for two distinct versions of the same library would similarly prepare whatever is needed to use the right address when a function token is parsed, without conflict, because of the distinct contexts (or the distinct tokens in the user code). I see no reason the interpreter would be slowed down, apart from the additional tokens to keep in memory; the performance and memory penalty could be measured. In my opinion it is mainly the import mechanism that could be slowed down very slightly, and once the import is done, I see no reason for a substantial penalty. If you think otherwise, please explain your reasoning.
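The claim that two copies of one library can coexist under distinct keys without conflicting can at least be demonstrated with today’s stdlib machinery. A minimal sketch (the file name and the “name@version” key convention are invented for illustration):

```python
import importlib.util
import pathlib
import sys
import tempfile

# Fabricate one source file standing in for a library's code.
src = pathlib.Path(tempfile.mkdtemp()) / "mylib.py"
src.write_text("VERSION = __name__.rpartition('@')[2]\n")

def load_as(key):
    """Load the same file under an arbitrary key, yielding a distinct module object."""
    spec = importlib.util.spec_from_file_location(key, src)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[key] = mod  # keyed by our chosen name, not the file's
    spec.loader.exec_module(mod)
    return mod

a = load_as("mylib@1.1.1")
b = load_as("mylib@2.2.2")
print(a.VERSION, b.VERSION)  # 1.1.1 2.2.2 : two live copies, no clash
```

The open question in the thread is not whether this is mechanically possible, but whether everything keyed on bare module names keeps working around it.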
Fine; when you say that, you know perfectly well that the entry cost on a project like Python is high, and that almost nobody has the time to understand the internals of a project like Python just to make a proof of concept and experience further bashing afterwards. I’m already struggling to redo most of my own Free Software work because of sabotage. Weeks or months of work that vanish into thin air. I’m always struggling because of crackers and intelligence services and who knows who else messing with my code to enshittify my life. If I now take two weeks of work to make a proof of concept, I am almost certain that they will not let me succeed, or that they will find a way to screw up another of my projects in the meantime. I haven’t yet decided whether to do a proof of concept, but you can’t imagine how mad I am at people who steal the lives of others while nobody helps.
You do not need to create something deeply embedded in Python. You just need to modify the import mechanism, which is well documented, probably by overriding the builtins.__import__ hook and probably with a custom loader. This can be done in pure Python. I previously implemented something similar for the purpose of running regression tests by importing two versions of the same library: it worked somewhat, but it had too many surprises for me to ever polish it up.
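For the record, a pure-Python redirection does not even need to touch builtins.__import__; a meta-path finder is enough. A minimal sketch, with an invented module name and an invented redirect table standing in for the “overloading file” discussed above:

```python
import importlib.abc
import importlib.util
import pathlib
import sys
import tempfile

class VersionRedirectFinder(importlib.abc.MetaPathFinder):
    """Toy finder: resolve configured names to chosen files, fall back otherwise."""

    def __init__(self, redirects):
        self.redirects = redirects  # e.g. {"liba": "/path/to/liba-2.2.2.py"}

    def find_spec(self, fullname, path, target=None):
        if fullname in self.redirects:
            return importlib.util.spec_from_file_location(
                fullname, self.redirects[fullname])
        return None  # not ours: let the default import machinery handle it

# Demo: fabricate a module file and route the name "liba" to it.
src = pathlib.Path(tempfile.mkdtemp()) / "liba_2_2_2.py"
src.write_text("VERSION = '2.2.2'\n")
sys.meta_path.insert(0, VersionRedirectFinder({"liba": str(src)}))

import liba
print(liba.VERSION)  # 2.2.2
```

This is only the routing half; making two versions coexist per importing context is the part that, as noted, accumulates surprises.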
how mad I am at people who steal the lives of others while nobody helps.
I believe that you are mad, but that doesn’t make you entitled to other people’s time and attention, especially on a tech-discussion-focused online forum.
Genuinely, it sounds like you should take a break from Free Software work. You are too aggressive for a tech-discussion-focused forum and are interpreting things as personal attacks when they weren’t meant that way. I don’t know your life situation, but I hope you have people in your life whom you can talk to away from the internet.
Do you care to explain where the identified blocking points are?
From my point of view, you need the following ingredients:
given the current state of the interpreter, you know which import’s code (which package and version) is currently being executed
thus any new import there is resolved by first consulting the overloading file for dependencies, if any, and falling back on the default mechanism (the resolver can move inside the tree of nested dependencies in the overloading file in parallel with the moves inside the code; when it is outside this tree, it only needs to check again once it is back at the root, in case further executed code lands in an existing branch)
when an import is processed, the current context gets the right addresses for its tokens,
tokens are resolved with the right addresses in the current context, as usual.
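The resolution rule in these ingredients can be sketched as a pure function. The overloading table, package names, and versions below are all invented for illustration:

```python
# Hypothetical "overloading" table: while code of the given (package, version)
# context is executing, an import of a name maps to a specific version.
OVERLOADS = {
    ("appZ", "1.0"): {"liba": ("liba", "1.1.1")},
    ("libb", "2.0"): {"liba": ("liba", "2.2.2")},
}
# Fallback: the default, single-version mechanism.
DEFAULT = {"liba": ("liba", "2.2.2")}

def resolve(current_context, name):
    """Return the (package, version) key that `import name` should load,
    consulting the overloading table for the executing context first."""
    return OVERLOADS.get(current_context, {}).get(name) or DEFAULT[name]

print(resolve(("appZ", "1.0"), "liba"))  # ('liba', '1.1.1')
print(resolve(("libc", "3.0"), "liba"))  # ('liba', '2.2.2')  (default)
```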
I don’t see a conceptual blocking point. If there is an ingredient of an interpreter that is relevant and that I don’t see, please point at it.
They’re already enumerated in the thread that Damian linked.
The only blocking point I would describe as completely unsolvable is libraries with compiled binaries in them. Loading two copies of a C library into one process will usually cause it to crash. On macOS, the OS just SIGKILLs the process.
But there are plenty of other issues that would require either breaking or massively degrading a lot of Python to overcome. As it is at the moment, Python itself has very little awareness of packaging. When you import PIL, it has no idea that PIL/__init__.py came from a PyPI package called pillow or what its version is.[1] This is actually a good thing since it allows Python packaging to evolve at a different cadence to Python itself as well as allowing for alternative packaging tools and all kinds of deployments to exist.[2] Packaging-aware imports would throw all that away.
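The point that the import system carries no packaging identity can be seen directly on a module’s spec: it records a name and an origin, but has no distribution or version field at all.

```python
import json

spec = json.__spec__  # the import system's record for an already-imported module
print(spec.name)      # "json": the only identity the import system tracks
print([a for a in dir(spec) if "version" in a.lower()])  # []: no version slot
```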
You’d also have to rip up every assumption that modules can be keyed by their names alone. That would include most of the import system, anything that touches sys.modules, pickling, and multiprocessing (which uses pickle), as well as anyone else’s extensions or alternative providers for the import system. It isn’t just Python that would have to change.
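Two stdlib illustrations of that name-keying assumption: sys.modules has exactly one slot per bare name, and pickle serialises classes by module-qualified name alone, so unpickling re-imports whatever module currently answers to that name, with no version information travelling with the data.

```python
import json
import pickle
import sys

# One slot per bare name: a second version of "json" would have nowhere to live.
print("json" in sys.modules)  # True

# Only the qualified *name* travels with a pickled class, never a version.
payload = pickle.dumps(json.JSONDecoder)
print(b"json.decoder" in payload)  # True
```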
And then there are all the costs of people actually using it if it did work. We’d be bringing the joys of sticky bugs and sticky security vulnerabilities, as well as increased footprint, memory usage and initialisation time, to Python.
Personally, I consider this all to be an XY problem. If I see that one of my dependencies is unstable or has an upper-bound version constraint, I throw it out. With that done, I never get version conflicts, rarely need virtual environments and get to ignore all the fancy tooling for these things. Everything is wonderful from then on.