From my perspective, the UX is the least important aspect of having a unified workflow - it’s the bikesheddiest part, where everyone has an opinion. What I’m interested in from the conda side is that it has enough abstraction power to deal with all sorts of non-python dependencies in a way that runs stably (i.e. no random ABI divergences and crashes between packages).
In other words, the UX can be polished, but UX alone, without a strong technological foundation, is no major improvement over the status quo.
The impression I’ve gotten from your comments re: conda over the years (I might very well be wrong) is that you haven’t used it much, if at all, in recent years, and that there are some use cases you’re interested in that conda doesn’t handle well or at all (e.g. running against a development version of python).
That’s fair enough, but it overlooks a large chunk of problems that conda has solved in a way that leads to - inter alia - certain heavy dependencies in the data stack being conda-only, because they’re essentially impossible to pull off with wheels (e.g. a lot of the Nvidia / rapids.ai ecosystem).
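To make that concrete, here’s a minimal sketch of a conda environment file for a GPU-heavy stack (channels, packages and pins are purely illustrative, not a recommendation). The point is that the CUDA toolkit and cuDNN are non-python, binary dependencies that get resolved by the same solver, alongside the python packages that link against them:

```yaml
# illustrative only - names and versions are placeholders
name: gpu-stack
channels:
  - conda-forge
  - nvidia
dependencies:
  - python=3.11
  - numpy
  - cudatoolkit=11.8   # non-python: the CUDA runtime itself
  - cudnn              # non-python: NVIDIA's DNN library
  - pytorch            # built against the cudatoolkit above
```

With wheels, each project typically has to bundle (or fetch) its own copy of those native libraries, which is exactly where the ABI divergences mentioned above tend to creep in.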
A lot of those benefits come with a substantial cost though, mainly in the form of a lot of integration work (making sure shared dependencies are unvendored, recompiling packages against new library versions, etc.); conda-forge is essentially a cross-platform distribution that’s kept up to date by an army of bots and a substantial number of volunteers.
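For a rough idea of what that integration work looks like (package names here are hypothetical), a shared library’s conda-build recipe can declare a run_exports pin, so that everything built against it automatically carries an ABI-compatible runtime constraint, and the bots rebuild downstream recipes whenever that pin moves:

```yaml
# hypothetical meta.yaml for a shared library - names are placeholders
package:
  name: libfoo
  version: "2.4.0"

build:
  run_exports:
    # any package built against libfoo gets a matching runtime pin,
    # so ABI-incompatible combinations can't be installed together
    - {{ pin_subpackage('libfoo', max_pin='x.x') }}

requirements:
  build:
    - {{ compiler('c') }}
```

Downstream recipes just list libfoo in their host requirements and inherit the pin; when libfoo bumps its ABI, a migration rebuilds the dependents against the new version.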
So the fit with the broader python ecosystem is not trivial, and there’s no end of details to disagree about. Though by my reading of the SC’s role, they could bless a given solution as the “official” way (presumably after a large project to iterate towards something acceptable to most people).
But again, it’s a vast problem space, with any number of possible local optima, and so charting a path from “we are here” to “be more like cargo” is just a monumental undertaking IMO.