PEP 704 - Require virtual environments by default for package installers

I tried to look up sys.real_prefix and discovered that the attribute is not present in the sys docs or in the CPython code base. Recommending injecting a new attribute into a stdlib module seems like a very questionable thing to do, so I’d much prefer that it be removed from PEPs 668 and 704, or at least that a disclaimer be added saying it is a legacy virtualenv thing and should not be used by other tools.
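
For context, the supported way to detect a virtual environment since Python 3.3 is to compare sys.prefix with sys.base_prefix; a minimal sketch that also tolerates the legacy virtualenv attribute (illustrative only, not something the PEPs mandate) looks like:

import sys

def in_virtual_environment() -> bool:
    # The stdlib venv module sets sys.base_prefix != sys.prefix; very old
    # virtualenv releases injected sys.real_prefix instead, which is why that
    # name still circulates even though it was never part of CPython itself.
    if getattr(sys, "base_prefix", sys.prefix) != sys.prefix:
        return True
    return hasattr(sys, "real_prefix")  # legacy virtualenv (< 20.0) only

print("virtual environment" if in_virtual_environment() else "base interpreter")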

That seems like a reasonable set of options. There are two question marks, I think:

  • Rolling out such a change is nontrivial. You’ll have a mix of Python versions with and without the patch, and pip versions with and without the changed behavior. That’s bound to not go smoothly.
  • (2) is probably not solving anything. If there is any “trampling”, it’s because two package managers are trying to (un)install different versions of the same package. Ignoring that actual problem and letting them both install a version of the package into different directories will probably lead to more problems than it solves. Which version is going to be preferred? Which package manager now ends up with incorrect metadata? Etc.
    • The correct solution is probably to either error out with an informative message, or let the two package managers cooperate so we end up with the correct version of the package in question (hard).

Transitions are always messy :slight_smile:

The EXTERNALLY-MANAGED behavior is in pip now, but not the --break-system-packages flag that allows a user to ignore it. I don’t think that particular behavior is relevant to Conda though, since I doubt Conda would want to ship the EXTERNALLY-MANAGED marker.
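
For reference, the marker is a small INI file that PEP 668 says lives next to the standard library; checking for it looks roughly like the sketch below (an illustration of the spec, not pip’s actual implementation):

import configparser
import pathlib
import sysconfig

def externally_managed_message():
    # PEP 668: the marker file sits in the stdlib directory of the base
    # environment and, if parseable, carries the error text to show users
    # in an [externally-managed] section under the Error key.
    marker = pathlib.Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"
    if not marker.is_file():
        return None  # environment is not externally managed
    parser = configparser.ConfigParser()
    parser.read(marker, encoding="utf-8")
    return parser.get("externally-managed", "Error", fallback="(no message provided)")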

I think the change to avoid touching files that aren’t in the preferred scheme is already in pip as well, but I may be wrong.

In my experience, (2) solves quite a lot!

This isn’t some hypothetical new thing, it’s been in use in Debian for a very long time, and it’s been so successful there that Fedora has adopted a similar approach.

The problem that (2) solves roughly comes down to managing who owns some files on the filesystem. When you have two package managers who both think they own the same files, the mix ends up being horribly confusing for everyone involved. A typical sequence of events (without (2)) is something like:

  1. A user does apt install python-foo [1], which installs foo 1.0 into /some/path/site-packages/foo.py and hopefully installs /some/path/site-packages/foo-1.0.dist-info/* as well.
  2. A user then does pip install --upgrade foo, which uninstalls foo 1.0 (but pip has no idea how apt itself records installed packages, so it has no way to tell apt that the package is gone), then installs foo 1.1 as /some/path/site-packages/foo.py along with /some/path/site-packages/foo-1.1.dist-info/*.
  3. A user then does apt upgrade, which sees there is an update to python-foo and installs it, overwriting the foo 1.1 version of /some/path/site-packages/foo.py, reinstalling /some/path/site-packages/foo-1.0.dist-info/*, and leaving /some/path/site-packages/foo-1.1.dist-info/* in place (because it has no idea it even exists).

Now the user is super broken, and has no idea wtf is happening because everything is super wrong but in non-obvious ways.

With (2), that ends up looking like:

  1. A user does apt install python-foo [1:1], which installs foo 1.0 into /some/path/dist-packages/foo.py and hopefully installs /some/path/dist-packages/foo-1.0.dist-info/* as well.
  2. A user then does pip install --upgrade foo, which leaves the foo 1.0 in dist-packages alone and installs foo 1.1 as /some/path/site-packages/foo.py along with /some/path/site-packages/foo-1.1.dist-info/*.
  3. A user then does apt upgrade, which sees there is an update to python-foo and installs it, overwriting the existing foo 1.0 version of /some/path/dist-packages/foo.py, reinstalling /some/path/dist-packages/foo-1.0.dist-info/*, and leaving everything under /some/path/site-packages/ alone.

User is (roughly) happy, apt upgrade didn’t randomly break their system!

Ultimately these kinds of problems boil down to there being multiple sources of truth for what is and isn’t installed, and two package managers with different senses of what is installed stomping over each other. The solution in (2) allows the person providing Python to carve out an area that pip can manage, while keeping their own area for themselves to manage, which pip won’t touch.

This does of course mean that there can be two versions of a package available on sys.path. Luckily sys.path has easy-to-understand behavior when this happens: the copy that appears earlier on sys.path takes precedence. There are some edge cases where this can go awry, but generally they don’t happen too often, and they exist without (2) anyway, so it’s not a new thing.
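
As a concrete illustration (reusing the hypothetical foo from above), you can see both which copy wins the import and that both sets of metadata stay visible:

import importlib.metadata
import importlib.util

name = "foo"  # hypothetical package from the example above

# Which file actually gets imported is decided purely by sys.path ordering.
spec = importlib.util.find_spec(name)
print("import foo ->", spec.origin if spec else "not importable")

# Both copies of the metadata remain discoverable, each with its own version.
for dist in importlib.metadata.distributions():
    if (dist.metadata["Name"] or "").lower() == name:
        print(dist.version, "installed under", dist.locate_file(""))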


  1. Or Conda, or yum, or anything that doesn’t use .dist-info as its definition of what is and isn’t installed. ↩︎ ↩︎

This really isn’t any better for conda/pip mixing. It’s probably even worse, since conda is pip-aware and will see two different versions of the same package in its dependency list - something that should not happen and likely won’t work for the solver. You suggest setting sys.path so that pip-installed packages always get preferred, meaning that if conda decided that it needed python-foo 1.2, that now gets installed and then silently ignored. When two package managers are actively managing the same package, they simply cannot both be right if they’re installing different versions side by side.

I’m thinking that this is a little too easily said. Assuming my assessment that (2) is more harmful than helpful is correct (which I’m reasonably sure is the case), I think that there’s little that conda can do here - even patching out the behavior in pip isn’t reliable, because using pip install -U pip in CI scripts is quite common. It forces users to add an opt-out with a scary name to every pip invocation, which isn’t great. So I agree with this:

2 Likes

Interesting. Maybe my assumptions about how conda works are incorrect. I’ve never actually used it, it doesn’t solve any problems I have in ways that I want them solved (I know that it does solve problems other people have though).

Does conda have its own metadata in which it records what packages were installed, outside of the Python-level metadata? I assume it must, since it can package things that aren’t Python packages as well. I would expect conda to ignore the pip-installed packages completely TBH.

Right, there’s no generic answer to being right.

If someone is using two package managers, they’re fundamentally taking responsibility for controlling the correctness of the resulting situation.

However, what we can get “right” or “wrong” in a generic way is how these two systems interact at the filesystem and metadata level. While producing a potentially broken system by having foo 1.1 installed instead of foo 1.2 is not a great outcome, it’s an outcome that the user themselves is responsible for resolving. What isn’t an acceptable outcome is the two systems overwriting (or removing) files that the other system expects to be there, without doing it properly (e.g. updating the metadata of installed packages, not leaving leftover files behind, etc.).

The recommendation in (2) to give each system its own space to manage comes from the fact that the chances of teaching every package manager about pip, and pip about every package manager, so that they can correctly interact with each other’s files, are practically zero. What we can do is at least ensure that the metadata of these systems correctly reflects the real state of the world, and that users end up with a deterministic end result, including through implicit upgrades over time.

Whether that end result is a set of packages that work together or not is the responsibility of the person who decided to mix two packaging systems together.

The recommendation to have pip be preferred is borne out of the idea that if the conda (or apt, or yum, or whatever) package provided what the user wanted in that instance, they would most likely already be using that since that’s the “default” package manager in that context.

Therefore if the user has installed something with pip, they’ve explicitly chosen to leave the conda/apt/etc ecosystem behind and to take responsibility for the correctness of their environment.

2 Likes

I think this is being a bit too harsh on users. People don’t use conda or pip because they want to “take responsibility” for anything. They use them because they want to install packages. As long as pip does not provide a way to handle non-Python dependencies, people will always have a need for conda (or something like it). That’s not the user’s fault or something that should mean they have to take on an additional burden. It’s the fault of pip (and/or the rest of the Python packaging ecosystem) that it doesn’t provide a way for the user to get what they need.

Likewise, as long as there are pip-installable packages that aren’t also conda-installable, there will always be a need for pip. That’s also not the user’s fault, and also not a case where we should cast them loose and say, “Sorry, you’re on your own.” It’s arguably the fault of the package author for not making their library available via conda.

The overwhelming common case is that someone uses pip inside a conda environment because the package they want doesn’t have a conda-specific version, and I think it would be good to have a solution that makes that case work okay.

More generally, I think we should think about “what is it that users want to do” and try to design an overall system that meets those needs, rather than nibbling around the edges of the problem and declaring common cases out of scope and saying that it’s up to the user not to screw up their environment. No one wants to screw up their environment. The problem is that the existing tools force them to take that risk because they don’t do what users need them to do.

4 Likes

It’s not being harsh on users, it’s just stating the reality of the situation. It’s no different from pointing out that if a user decides to install from a particular repository, they are taking responsibility for the act of choosing to install from that repository.

Given a set of dependencies, somebody has to make sure that that set of dependencies works together. This process is generally called system integration, and one of the value-adds that curated distributions like Debian provide is that they’ve taken some level of care to ensure that the set of packages they are providing actually works together.

Whenever a user decides to go outside of that system of curated packages, they’re leaving behind the set of packages that have been tested together and are creating an environment that their system integrator has never had a chance to test, and thus they must act as their own system integrator in that case.

To this end, anyone installing packages from PyPI is taking on the system integrator role for their environment, because there is no system integrator on PyPI.

That’s not blaming them for problems, it’s just pointing out the reality that when you mix together a set of packages that have never been tested or assured to run together, there’s a good chance you’re going to run into problems, and since the end user is the one who ultimately decided to mix those versions of packages, they’re the ones ultimately responsible for the state of that system.

Of course it would be nice if someone out there was testing every possible permutation of installed projects and ensuring they worked together, but the simple fact is nobody is, and that’s unlikely to be something that can reasonably be provided.

That doesn’t mean that we just write those users off. Part of the goal of the current recommendations is to make having two different package managers operating on the same Python produce something resembling a sensible and deterministic outcome. We can’t make sure the set of versions that are installed work together, but we can ensure that you don’t end up with errors due to the two package managers fighting over who controls the files.

3 Likes

Yes, it has its own metadata in envname/conda-meta/ as a JSON file per installed package, which looks like:

{
  "build": "py_0",
  "build_number": 0,
  "channel": "https://conda.anaconda.org/conda-forge/noarch",
  "constrains": [],
  "depends": [
    "python"
  ],
  "extracted_package_dir": "/Users/rgommers/mambaforge/pkgs/alabaster-0.7.12-py_0",
  "features": "",
  "files": [
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/INSTALLER",
...

Full contents:

{
  "build": "py_0",
  "build_number": 0,
  "channel": "https://conda.anaconda.org/conda-forge/noarch",
  "constrains": [],
  "depends": [
    "python"
  ],
  "extracted_package_dir": "/Users/rgommers/mambaforge/pkgs/alabaster-0.7.12-py_0",
  "features": "",
  "files": [
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/INSTALLER",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/LICENSE",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/METADATA",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/RECORD",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/WHEEL",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/entry_points.txt",
    "lib/python3.9/site-packages/alabaster-0.7.12.dist-info/top_level.txt",
    "lib/python3.9/site-packages/alabaster/__init__.py",
    "lib/python3.9/site-packages/alabaster/_version.py",
    "lib/python3.9/site-packages/alabaster/about.html",
    "lib/python3.9/site-packages/alabaster/donate.html",
    "lib/python3.9/site-packages/alabaster/layout.html",
    "lib/python3.9/site-packages/alabaster/navigation.html",
    "lib/python3.9/site-packages/alabaster/relations.html",
    "lib/python3.9/site-packages/alabaster/static/alabaster.css_t",
    "lib/python3.9/site-packages/alabaster/static/custom.css",
    "lib/python3.9/site-packages/alabaster/support.py",
    "lib/python3.9/site-packages/alabaster/theme.conf",
    "lib/python3.9/site-packages/alabaster/__pycache__/__init__.cpython-39.pyc",
    "lib/python3.9/site-packages/alabaster/__pycache__/_version.cpython-39.pyc",
    "lib/python3.9/site-packages/alabaster/__pycache__/support.cpython-39.pyc"
  ],
  "fn": "alabaster-0.7.12-py_0.tar.bz2",
  "license": "BSD 3-Clause",
  "link": {
    "source": "/Users/rgommers/mambaforge/pkgs/alabaster-0.7.12-py_0",
    "type": 1
  },
  "md5": "2489a97287f90176ecdc3ca982b4b0a0",
  "name": "alabaster",
  "noarch": "python",
  "package_tarball_full_path": "/Users/rgommers/mambaforge/pkgs/alabaster-0.7.12-py_0.tar.bz2",
  "package_type": "noarch_python",
  "paths_data": {
    "paths": [
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/INSTALLER",
        "path_type": "hardlink",
        "sha256": "ceebae7b8927a3227e5303cf5e0f1f7b34bb542ad7250ac03fbcde36ec2f1508",
        "sha256_in_prefix": "ceebae7b8927a3227e5303cf5e0f1f7b34bb542ad7250ac03fbcde36ec2f1508",
        "size_in_bytes": 4
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/LICENSE",
        "path_type": "hardlink",
        "sha256": "c55456cce70d1a7247e507532b2f7a528805fa8184ee806630055d690f7569e4",
        "sha256_in_prefix": "c55456cce70d1a7247e507532b2f7a528805fa8184ee806630055d690f7569e4",
        "size_in_bytes": 1555
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/METADATA",
        "path_type": "hardlink",
        "sha256": "789c239a9af3080fafe7c7f6209e5e82122b3d69a75265af0b0a0be5e78f3fba",
        "sha256_in_prefix": "789c239a9af3080fafe7c7f6209e5e82122b3d69a75265af0b0a0be5e78f3fba",
        "size_in_bytes": 1980
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/RECORD",
        "path_type": "hardlink",
        "sha256": "e7d94b2713424221fd2629a460b9dc6416c26c322afab61472c6b380d79d443e",
        "sha256_in_prefix": "e7d94b2713424221fd2629a460b9dc6416c26c322afab61472c6b380d79d443e",
        "size_in_bytes": 1557
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/WHEEL",
        "path_type": "hardlink",
        "sha256": "0ab0c5ae89b2d0198d633c9917254894a7536b9660d91041b6332413b828abbf",
        "sha256_in_prefix": "0ab0c5ae89b2d0198d633c9917254894a7536b9660d91041b6332413b828abbf",
        "size_in_bytes": 110
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/entry_points.txt",
        "path_type": "hardlink",
        "sha256": "f0f40104df398f48139ec4cc76680d4f5b97ee57214381da0e806489b6b91b69",
        "sha256_in_prefix": "f0f40104df398f48139ec4cc76680d4f5b97ee57214381da0e806489b6b91b69",
        "size_in_bytes": 44
      },
      {
        "_path": "site-packages/alabaster-0.7.12.dist-info/top_level.txt",
        "path_type": "hardlink",
        "sha256": "bc320e66ec7ad8e0ef1fcedbc929c04b3a6c86d1ff4a27f2b4a60948ce778833",
        "sha256_in_prefix": "bc320e66ec7ad8e0ef1fcedbc929c04b3a6c86d1ff4a27f2b4a60948ce778833",
        "size_in_bytes": 10
      },
      {
        "_path": "site-packages/alabaster/__init__.py",
        "path_type": "hardlink",
        "sha256": "0c7a4e32ca6cdd370f7bb9f47bf44d48ad8aa2789cbcf7f07e4155bcbe1e7e2e",
        "sha256_in_prefix": "0c7a4e32ca6cdd370f7bb9f47bf44d48ad8aa2789cbcf7f07e4155bcbe1e7e2e",
        "size_in_bytes": 741
      },
      {
        "_path": "site-packages/alabaster/_version.py",
        "path_type": "hardlink",
        "sha256": "ed1b9d364860b7649180323a480c9645a4b0fcf9b0da6e0692e3e3f68a4c5c41",
        "sha256_in_prefix": "ed1b9d364860b7649180323a480c9645a4b0fcf9b0da6e0692e3e3f68a4c5c41",
        "size_in_bytes": 81
      },
      {
        "_path": "site-packages/alabaster/about.html",
        "path_type": "hardlink",
        "sha256": "3ca58a7890f8606c2796df6ba4370b10b5f8270793cf2cc44ff07910cd5af12a",
        "sha256_in_prefix": "3ca58a7890f8606c2796df6ba4370b10b5f8270793cf2cc44ff07910cd5af12a",
        "size_in_bytes": 1881
      },
      {
        "_path": "site-packages/alabaster/donate.html",
        "path_type": "hardlink",
        "sha256": "475b7c2fe09d8f483f25f4fcf6bef8c336233540a2a88a67d173cfc4fe4f8f89",
        "sha256_in_prefix": "475b7c2fe09d8f483f25f4fcf6bef8c336233540a2a88a67d173cfc4fe4f8f89",
        "size_in_bytes": 866
      },
      {
        "_path": "site-packages/alabaster/layout.html",
        "path_type": "hardlink",
        "sha256": "c8fb47bafee7f257f361b6485ae68299ef642e6f17b16aacffbdff65155ebf38",
        "sha256_in_prefix": "c8fb47bafee7f257f361b6485ae68299ef642e6f17b16aacffbdff65155ebf38",
        "size_in_bytes": 4285
      },
      {
        "_path": "site-packages/alabaster/navigation.html",
        "path_type": "hardlink",
        "sha256": "e82edc42c003914b254ef305728df8dab0c228cd8af762fa1b8da7421c8f4a9a",
        "sha256_in_prefix": "e82edc42c003914b254ef305728df8dab0c228cd8af762fa1b8da7421c8f4a9a",
        "size_in_bytes": 323
      },
      {
        "_path": "site-packages/alabaster/relations.html",
        "path_type": "hardlink",
        "sha256": "e4b1a2944c590d666e4bb1d767e3cb51237099e68fd7e9fbf7e9c3fdccbc57e9",
        "sha256_in_prefix": "e4b1a2944c590d666e4bb1d767e3cb51237099e68fd7e9fbf7e9c3fdccbc57e9",
        "size_in_bytes": 621
      },
      {
        "_path": "site-packages/alabaster/static/alabaster.css_t",
        "path_type": "hardlink",
        "sha256": "8b42d276dfa319d5ad1a71856a34a2e175905a94b227481fa0b4851ccc327ebf",
        "sha256_in_prefix": "8b42d276dfa319d5ad1a71856a34a2e175905a94b227481fa0b4851ccc327ebf",
        "size_in_bytes": 15648
      },
      {
        "_path": "site-packages/alabaster/static/custom.css",
        "path_type": "hardlink",
        "sha256": "39f23a6561786e3cb4e33e4a96562a1305a8b74c0d45dc215a64018692cd5d4c",
        "sha256_in_prefix": "39f23a6561786e3cb4e33e4a96562a1305a8b74c0d45dc215a64018692cd5d4c",
        "size_in_bytes": 42
      },
      {
        "_path": "site-packages/alabaster/support.py",
        "path_type": "hardlink",
        "sha256": "05891b14554bf705eb1104f97bec38ebb17fdddaf862d90d5de00bc789e87a13",
        "sha256_in_prefix": "05891b14554bf705eb1104f97bec38ebb17fdddaf862d90d5de00bc789e87a13",
        "size_in_bytes": 3963
      },
      {
        "_path": "site-packages/alabaster/theme.conf",
        "path_type": "hardlink",
        "sha256": "7a867be0c02879a864c95cbdcf5613e371c3823149c05d425eb933eb9f9b0020",
        "sha256_in_prefix": "7a867be0c02879a864c95cbdcf5613e371c3823149c05d425eb933eb9f9b0020",
        "size_in_bytes": 2346
      },
      {
        "_path": "lib/python3.9/site-packages/alabaster/__pycache__/__init__.cpython-39.pyc",
        "path_type": "pyc_file"
      },
      {
        "_path": "lib/python3.9/site-packages/alabaster/__pycache__/_version.cpython-39.pyc",
        "path_type": "pyc_file"
      },
      {
        "_path": "lib/python3.9/site-packages/alabaster/__pycache__/support.cpython-39.pyc",
        "path_type": "pyc_file"
      }
    ],
    "paths_version": 1
  },
  "requested_spec": "None",
  "sha256": "662690cace8f8a3e1358d01ddb8c019bf70ddfccd250220a6a488efc93ea5baf",
  "size": 14878,
  "subdir": "noarch",
  "timestamp": 1538580531000,
  "track_features": "",
  "url": "https://conda.anaconda.org/conda-forge/noarch/alabaster-0.7.12-py_0.tar.bz2",
  "version": "0.7.12"
}

And then it knows about all installed Python packages; the ones that aren’t in conda-meta are then assumed to come from PyPI:

% conda list
# packages in environment at /Users/rgommers/mambaforge/envs/scipy-dev:
#
# Name                    Version                   Build  Channel
alabaster                 0.7.12                     py_0    conda-forge
apexpy                    1.1.0                    pypi_0    pypi
appdirs                   1.4.4              pyh9f0ad1d_0    conda-forge
appnope                   0.1.3              pyhd8ed1ab_0    conda-forge
archspec                  0.1.4                    pypi_0    pypi
asttokens                 2.0.5              pyhd8ed1ab_0    conda-forge
...

For production environments one should really aim for everything coming from a single conda channel (packaging missing Python packages for conda-forge or for one’s own channel as needed). For dev environments though, having a mix of conda and PyPI packages is common.
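
As a rough illustration of what that bookkeeping enables, something along these lines approximates how the pypi rows in conda list could be derived (heavily simplified; real conda also handles conda-vs-PyPI name differences and more):

import importlib.metadata
import json
import pathlib
import sys

# Assumes we're running the conda environment's own Python, so sys.prefix
# points at the env root (e.g. .../envs/scipy-dev).
conda_owned = {
    json.loads(record.read_text(encoding="utf-8"))["name"].lower()
    for record in (pathlib.Path(sys.prefix) / "conda-meta").glob("*.json")
}

pip_only = sorted(
    dist.metadata["Name"]
    for dist in importlib.metadata.distributions()
    if dist.metadata["Name"] and dist.metadata["Name"].lower() not in conda_owned
)
print("installed, but with no conda-meta record:", pip_only)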

1 Like

Worth pointing out that this exists because Conda is trying to play nicely with pip. It didn’t always have this extra check, and would indeed be unaware of anything that pip had installed. (I believe for a while it didn’t include .dist-info/equivalent directories either, which meant pip was unaware of anything that Conda had installed.)

So we’re definitely on a track towards both ecosystems playing nicely together. If pip is able to recognise (from .dist-info) that it shouldn’t manage a Conda-installed package, and Conda is able to recognise (from conda-meta) that it shouldn’t manage a pip-installed package, we’re getting very close to people not easily breaking themselves.

2 Likes

The recommended way for this to happen is to have the other package manager install into a location that isn’t the preferred scheme in sysconfig.
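
For the curious, everything relevant to the “preferred scheme” is visible through sysconfig, so a redistributor that patches in an extra scheme effectively changes what a snippet like this prints (illustrative; scheme names vary by platform and by distro patches):

import sysconfig

# pip installs into the interpreter's default/preferred scheme; a distributor
# can point that somewhere other than where their own package manager installs.
print("available schemes:", sysconfig.get_scheme_names())
print("default scheme:   ", sysconfig.get_default_scheme())  # Python 3.10+
print("purelib for it:   ", sysconfig.get_paths()["purelib"])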

Well, pip is already able to recognise packages that are managed by other package managers (via the INSTALLER file in .dist-info). So as long as conda is following the standard and creating that, pip should respect it (if that’s not the case, it’s a pip bug that I’d expect to get fixed with no major issues). If conda doesn’t create an INSTALLER file, the breakage is basically their own fault.

Conversely, pip writes the INSTALLER file when it installs packages - if conda doesn’t respect that, then IMO that’s a bug in conda. If they don’t want to fix it, then having distinct install locations is an alternative approach, but it has the downside that the same package can end up installed in both locations, not necessarily at the same version. So respecting INSTALLER is better, but both approaches ensure tools don’t tread on each other’s toes.
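
For illustration, here’s roughly how a tool could consult that file via importlib.metadata (a sketch, not pip’s or conda’s actual code):

import importlib.metadata

def installer_of(dist_name):
    # Returns the recorded installer of a distribution ("pip", "conda", ...),
    # or None if the package or its INSTALLER file is missing.
    try:
        dist = importlib.metadata.distribution(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return None
    text = dist.read_text("INSTALLER")
    if not text:
        return None
    return text.strip() or None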

If conda isn’t comfortable with either of these two options, I think it’s time to stop worrying and simply accept that they will just “do their own thing”. I certainly don’t think adding a third option because they didn’t like the first two is particularly reasonable.

1 Like

Using INSTALLER to refuse uninstallation would go directly against the “multiple interoperating tools” idea.

If a package can’t be uninstalled using standard metadata – the RECORD file – then it can be installed without the RECORD file. (That went in with PEP 627. Should it be clarified?)

4 Likes

As designed, I don’t think INSTALLER was intended to reflect “don’t touch me with something else” information. We could add a per-package EXTERNALLY-MANAGED to reflect that, but skipping RECORD and storing that info externally is equivalent-ish.
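
For what it’s worth, the “no RECORD” signal is already observable through importlib.metadata, so a tool that wanted to honour it could do something like this (package name reused from the conda example above, purely for illustration):

import importlib.metadata

dist = importlib.metadata.distribution("alabaster")
# Per PEP 627, a distribution installed without a RECORD file is telling
# generic tools that they should not try to uninstall it file-by-file.
if dist.read_text("RECORD") is None:
    print("no RECORD: treat as managed by whoever installed it")
else:
    print("has a RECORD: a standards-following installer may uninstall it")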

1 Like

Inspired by this proposal, we changed maturin’s behavior for maturin develop (which acts like pip install -e): after checking for an active virtualenv/conda env, it will now look for a virtualenv named .venv in the current or any parent directory and use that if it exists (and error if it doesn’t find any environment).

We’re also discussing a maturin run <args> command that acts like python <args> [1] but picks the right Python interpreter through the same strategy, avoiding both accidentally launching the system Python and manual venv wrangling. imho this would also be a good strategy for a general Python launcher.
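
For anyone curious, here’s a rough Python rendering of that lookup order (maturin itself implements this in Rust; the function name and the pyvenv.cfg check are just illustration):

import os
import pathlib

def find_environment(start=None):
    # An already-activated virtualenv/conda env wins; otherwise walk up from
    # the current directory looking for a .venv; otherwise give up.
    active = os.environ.get("VIRTUAL_ENV") or os.environ.get("CONDA_PREFIX")
    if active:
        return pathlib.Path(active)
    here = pathlib.Path(start or pathlib.Path.cwd()).resolve()
    for directory in (here, *here.parents):
        candidate = directory / ".venv"
        if (candidate / "pyvenv.cfg").is_file():  # marker file every venv has
            return candidate
    raise RuntimeError("no active environment and no .venv found")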


  1. execv the python binary on linux and a subcommand on windows, to be exact ↩︎

I’ve gone ahead and initiated a conversation with Conda folks on their discussion forum (it’s on Matrix, in case you are wondering, and I can’t figure out how to link to it).

I presented them with three directions that the pip + conda pieces of this could take:

  • Adopt the same behaviour as OS Python installations, moving to require virtual environments by default.
  • Add the opt-out to conda environments (except base).
  • Change nothing (i.e. withdraw this PEP because conda)

While the discussion is still ongoing, the general sense is towards the second bullet, which can be achieved using pip’s configuration stack + opt out.

I still need to catch up on the single vs multiple virtual environments aspect of this (I’ll reiterate that picking a default doesn’t make other approaches illegal or whatever, and they’re not mutually exclusive workflows either).

Awesome. This could also be an opportunity to get input on the related but maybe thornier question raised by @steve.dower in another thread:

What I’ve not seen written down before is an actual spec from the Conda side of how they would like pip to behave in the presence of conflicts when merging repositories.

Edit: looks like @rgommers already started this conversation a couple days ago, which is great.

Here’s a link to the initial message[1] in the chat room from @rgommers; the referenced discussion follows in several chunks and subthreads (which can be unfolded like in Slack).

BTW, conda has a Discourse now too, and @rgommers posted a summary (of the related discussions on DPO) there (I mention it as it’s easier to reference).


  1. you can click on a message and then there’s an option to create a link under the three dots. ↩︎

1 Like

That link sent me to a blank page with just a background image… (on iPad, if it matters).

I don’t really understand Matrix yet, but it seems it can be accessed through several apps (the link sends me to a disambiguation screen to decide between them; I created it by getting to the right place through the original Gitter link, which has been superseded but auto-forwards). FWIW, here is another link to that comment in a more(?) canonical app.

Yeah, on Windows the first link expects me to download an app (I only tried the default, maybe others are web-based). And the second takes me to somewhere I need to sign in to view.

So I guess no, if there’s no simple public viewing in a browser, I’ll pass. Thanks for hunting out the second alternative for me, though. If I care enough at some stage, I’ll see if I can find a route in that suits me.

There are some web-based apps available; sign-in with github should work out of the box AFAICT.