Python LTS and maintenance cycles

Insisting on repackaging everything as individual RPMs is one of the unfortunately-still-common practices I’m referring to when I suggest investing time in helping people advocate for improvements to software management processes, rather than directly helping to perpetuate broken processes by hiding the inefficiency and ineffectiveness that they cause. Even Red Hat don’t encourage anyone to work that way anymore (not even their own developers outside the base OS team). (While I spent most of my time at Red Hat leading the software development for the RHEL hardware integration testing system, my last couple of years before leaving in late 2017 were spent working for their Developer Experience group.)

The idea behind the App Stream language runtimes in RHEL is that they provide the foundational layer that integrates well with the rest of the system, and developers then use the language-level packaging tools they’re already used to in order to build on top of that foundation. The resulting application is then what gets packaged (either as a container image or an RPM), rather than the individual Python packages.

For organisations that are comfortable with community repackaging efforts like EPEL, it’s likely to be feasible to convince them of the merits of conda-forge, but even that requires a layered approach (since the forge concentrates on the components with binary extensions, many pure Python packages are still installed with Python-specific installers).

Tools like briefcase (Python), shiv (Python), Flatpak (Linux), PyInstaller (Python), and more, can then help turn those layered applications into artifacts that system administrators will be comfortable with managing directly.

2 Likes

I’m not sure if this is the best place to do so, but based on this convo

I’ve made a suggestion about making it easier to see the Python support cycle info on the website

6 Likes

You described a solution that imposes a major cost on release managers, core developers, package maintainers, and end users. Many people explained this very well already. I won’t be rehashing that here.

I’ve been on the release team since 3.8, which just recently went EOL. The subject of LTS releases comes up every now and again, but ultimately I can tell you with high confidence that it won’t be happening. You’d create a stream of endless Python 2 to Python 3 transitions.

As a side note, part of the pushback you’re seeing stems from your use of sweeping statements like starting the topic with “Support […] should be”, or telling others here that they lack empathy, or that allegedly the PSF signals to users it doesn’t care about them. With all due respect, that’s not a productive way to handle disagreement.

Ultimately, an “unsupported” package or Python version doesn’t get deleted from your servers. It continues to work as it did before; it simply ceases to receive updates. So the request here boils down to “I would like enterprise-grade support for $0/year”. That would be amazing, but such a price point can only be achieved by asking other people to do more work. Small wonder there’s little excitement about the idea.

Even if we ignore the “who pays” angle, the reality is that there is no single governing body in our ecosystem that could possibly enforce what you want on the entire community. Using CPython’s release cadence as a forcing function for the community to stretch their support timelines would be problematic, but more importantly it wouldn’t even be effective, as people here tried to explain.

  • Predictable annual releases.
  • Predictable support cycle.
  • Predictable time commitment for release managers.
  • Predictable core development support cost in terms of backports.
  • Predictable support matrix for package maintainers.
  • Allows package maintainers to use relatively new language features.
  • Allows for incremental upgrades by users (no Python 2 to Python 3 rifts).

To reiterate, I don’t see a chance for the CPython LTS release idea to fly.

25 Likes

@jcampbell05 Thanks for trying to bridge the divide.

You just said you will use Python 3.14 exclusively, so I’m not seeing how anything would change if there was an LTS version. You can do what you want for your projects.

“Arbitrary” is a technical term meaning without requirement. And many do drop support simply based on a schedule and not because they are going to make code changes that would break under previous versions. In some cases there may in fact be a good reason, but it’s not always documented in a PR, so there is no way for the outside world to know whether it’s arbitrary or not.

It is not the purpose of this forum to say yes or no. This is a forum for ideas. This may be the root of why people disengage from this community (I know several that have). Now it is clear that this proposal isn’t going anywhere, but some good discussion has come out of it. As people bring up points, I’ve responded. Whether you like an idea or not, each one is an opportunity to learn. While I’m seeing that from a few people, I’m also seeing a lot of unnecessary vitriol.

I agree with you on this. It’s my hobby and I do it because it brings me some kind of satisfaction. That said, for me the satisfaction is two-fold: solving problems and helping others. I support Windows in some libraries that deal with terminals. It’s not very fun and I don’t particularly like it, but because I know I’m uniquely qualified to do it, I do it for the benefit of others.

I don’t know where you learned such bad habits, but please don’t propagate them. I already covered this in my original post, but all software must be installed through the system package manager to make the system auditable. This is pretty foundational to system security.
What you’re describing used to be permissible in development environments, but, with the increase in supply chain threats in the past few years, development environments are more and more often falling under the same scrutiny as production systems.

It’s a good point, and thanks for your other comments. I have taken this approach before and the responses vary widely. In some cases they don’t care as long as nothing breaks. In others, they are more enthusiastic and want to add in testing for that version. And in others, they won’t consider even simple changes for a version that the PSF has marked as EOL.

You’re not wrong there. I still have 2.7 support in some projects, mainly because there wasn’t much of a win in dropping it, but CI requires a different image than what everything else is using. In other projects we were prompted to drop 3.3 and 3.4 when we could no longer test them. The developer tools really are the biggest issue, though.

The problem is, when you look at the setuptools release notes, the entry for v59.7.0 says “#2930: Require Python 3.7”. Then when you go to the linked issue, it only has a link to the commit that changed the metadata and a link to an issue about 3.7 failing on Windows. There’s nothing to say why it was dropped.

pyproject.toml support was not great in Setuptools in the beginning. I’ve found you really need version >=65 if you’re using pyproject.toml. I’m not suggesting people don’t use new features. As it is today, it will always be a choice what versions individual projects support. I also think packaging standards are a bit different from language features. It’s one thing to be the ugly duckling, but another if you can’t fly with the flock. I’ll look into flit when I get some cycles. Thanks.

I’m going to ignore your negative and inaccurate comments. You’ll find your answers if you reread the thread with an assumption of good intent. Apologies if it wasn’t clear, but cooperation in this sense would be reducing the effort of making a Python package into an RPM so the cost of supporting new versions of Python on Enterprise Linux is less. The effort is considerably less now than in the past due to a lot of work by the Fedora Python SIG, but it’s still high touch. An example might be a pyproject.toml field for how to execute tests or a generic packaging mapping for additional metadata. Of course, these would still only be useful if package maintainers populate them.

Thanks for the feedback. This may be a culture difference, but I will keep it in mind for the future. I think it’s best to assume good intent. At the end of the day we’re all just trying to solve problems. What you don’t want is one side complaining the other got chocolate in their peanut butter and the other side complaining that no, they got peanut butter in their chocolate. Every problem is an opportunity if nothing else to learn something new, but you never know what might come out of it.

1 Like

Free-threading, if and when it becomes available, would be a major milestone that could compel users to abandon older Python versions instantly, as there is no LTS commitment.

The addition of new syntax is not compelling from the user’s perspective.

Dropping support on a schedule is not arbitrary, especially if it is done in coordination, as with SPEC 0. The point is to coordinate releases across different packages. A lot of compatibility constraints across packages are also implicitly specified via requires-python, because multiple packages test current versions of each other against a consistent set of Python versions. If package A puts out a release for Python 3.6 after dependent packages have stopped testing 3.6, then there is a risk that package A breaks dependent packages B, C, etc. on 3.6. A decentralised package repository like PyPI does not have a way to test whether A will break B and C before that release is put out.
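To illustrate how a requires-python constraint gets evaluated, here is a minimal stdlib-only checker. Real installers use `packaging.specifiers.SpecifierSet`, which additionally handles `~=`, wildcards, and pre-release semantics; this sketch only covers simple comparison clauses:

```python
import operator
import sys

# Minimal sketch of evaluating a requires-python specifier such as
# ">=3.9,<3.14" against a version tuple. Real tools use
# packaging.specifiers.SpecifierSet, which is far more complete.
_OPS = {">=": operator.ge, "<=": operator.le,
        "==": operator.eq, "!=": operator.ne,
        ">": operator.gt, "<": operator.lt}

def satisfies_requires_python(spec, version=None):
    version = tuple(version or sys.version_info[:3])
    for clause in spec.split(","):
        clause = clause.strip()
        for sym, op in _OPS.items():
            if clause.startswith(sym):
                target = tuple(int(p) for p in clause[len(sym):].strip().split("."))
                # Compare only as many components as the clause specifies.
                if not op(version[:len(target)], target):
                    return False
                break
    return True
```

With this, an index or resolver can tell that a release declaring `>=3.7,<3.12` must not be offered to a 3.6 interpreter, which is the coordination mechanism requires-python provides.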

It is in fact important that packages coordinate their support schedules which does mean that often support should be dropped on a schedule even if there is no particular breaking change or new feature that motivates dropping support. This is why SPEC 0 exists and clearly states that base scientific packages will not suddenly push new releases to old Python versions. This makes it possible that once the package maintainers have all moved on and no longer “support” Python 3.6 you can still install functioning packages on Python 3.6 because the older versions of all of the packages that used to work still work.

It might be difficult to keep track of what is going on in the issue tracker but it sounds like there was a specific reason for dropping that support. It doesn’t really matter whether you can interpret things to know whether or not you think the reasons were good enough. They decided to drop the support because Python 3.6 was EOL and there would have otherwise been some cost in continuing to support it.

I have to first respond to some “non-content” parts of this thread. After that, I’ll go back to the original idea one last time in brief at the end.

My use of the word “no”, quoted, was a summary. Many people think this is a bad or mistaken idea.

If only ideas with full community support appeared on this board, it would probably mean we no longer have rich and interesting ideas to discuss. That would be a sad day, signalling the death of this forum.

But it is very tiring to try to engage in a thread when you offer a contrary opinion, well reasoned, and it isn’t accepted as valid by the poster. That happens too often, and it makes those threads difficult to read, difficult to add to, and generally un-fun.
It’s also unproductive – if you don’t engage with the contrary opinions, how will your ideas grow, mature, and get better by the richness of other people’s context?

I don’t think you’re seeing vitriol or anger. If you are perceiving that, then I at least believe you are in error – your idea is being criticized, but not excessively so. You are not being criticized for having that idea.

I wouldn’t even say “I don’t like this idea” about this thread. I just think that you’re wrong about how to solve the problems that you’re facing because I think the solution you have asked for:

  • imposes too large of a burden on people without compensating them with some other benefit ($ is only one kind of benefit)
  • won’t have the effect that you want it to have

And I acknowledged earlier that I don’t know a lot about some of the relevant contexts, so I’m open to learning how I might be mistaken and this really is the best solution.

A lot of people in this thread, myself included, have taken your responses to mean that you aren’t aware of the benefits for library maintainers of a shorter release lifecycle. When you point at things like “upgrading because the walrus operator is cool”, that intentionally ignores the fact that the tooling ecosystem will leave 3.7 behind, and continuing to support it will sooner or later impose undue costs on the maintainer – a scenario which you’ve demonstrated that you are familiar with. That kind of comment derails the discussion, because now we’re wasting time talking about something that everyone in the thread knows: if you try to support an EOL CPython version, your life is going to get harder as time goes on.

This reads as a bad faith interpretation of Paul’s posts.
He doesn’t agree with you.

I actually just went back and reread his comments carefully in case there was subtext which I missed.
There was one instance in which he got a little snarky, with “you get what you pay for”, but it was consistent with his overall stance that you should pursue paid support channels (his example was purchasing RHEL with its long support guarantees) if you want this kind of 10-12 year support cycle.

You can disagree with him. He might even be wrong.
But when you intentionally misread people’s posts in a negative way, you make the thread an unpleasant place for anyone to spend any of their limited time on earth.


Now back to our regularly scheduled programming.


(I’m aware that I’m picking this comment out from a broader context.)
There’s actually an active discussion about how/if we might populate generic “task” fields in pyproject.toml. We’re talking about it, but not necessarily on course to converge on something everyone in the thread likes. It might go somewhere but it might not.

I am not seeing the connection, even if we were to offer exactly the support you’re talking about here, to a 10 year language lifecycle. But it may have its own independent merits, and I encourage you to at least pop over to the Packaging section to take a look at the discussion, and maybe post your use case so that we can include it in consideration of this and future ideas.

3 Likes

This seems like a local policy, not an absolute requirement (“all software must …”). I work in some places with pretty savvy security people, and they do not have this requirement. Instead all packages are installed through a locally controlled and curated PyPI-like repository. Sure, we could create RPMs (or similar system package manager packages), but that just seems like busy work. I’ll grant you that maybe this isn’t a good practice for some reason they don’t know, but if so they’d like to hear why.

16 Likes

I have a lot of thoughts about these matters but I’ll just say the important part. :slight_smile: I have great respect and gratitude for the work that you and all the other devs do, and I apologize if I am pressuring you to do things you really don’t want to do.

I consider all of us here to share the common goal of improving Python for everyone, and that’s the spirit that underlies all my participation. But like any human I find that those underlying intentions sometimes don’t come across or are obscured by emotions of the moment or other less-important sentiments. It sounds like that’s happened here, and I’m sorry.

1 Like

No need to apologise. My comment was a generalisation, and doesn’t apply to anything specific here (although some of the OP’s comments got close to discouraging me from continuing to comment in this thread). But thanks for taking the time to make your comment - it’s appreciated.

3 Likes

It comes down to auditing. If you contrast tools like pip with tools like RPM, RPM does a lot more. It keeps a database of the package metadata, including the checksums for every file and some category information (is it a doc, config, or license file; was it created at runtime). This allows you to do things like check “every” installed file on the system to know its source and whether it’s changed since being installed. One way an IDS can be implemented is to leverage this information along with a configuration control system (Ansible, Puppet, etc) to validate all the files in all non-data, non-variable parts of the system. Then, for those remaining parts, you can use file identification magic to see if you have files of an unexpected type in an unexpected place.
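As a sketch of the kind of verification being described, the following mimics the core of what `rpm -V` does with its database: record checksums at install time, then later report files that are missing or modified. The manifest format here is invented for illustration; RPM stores this information in its own database, along with the file category data mentioned above:

```python
import hashlib
from pathlib import Path

# Illustrative stand-in for an install-time file database: map each
# relative path under `root` to a SHA-256 digest of its contents.
def build_manifest(root: Path) -> dict[str, str]:
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return paths that are missing or whose contents have changed."""
    changed = []
    for rel, digest in manifest.items():
        f = root / rel
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            changed.append(rel)
    return changed
```

An IDS built on this idea would run `verify` periodically (or on demand) against the recorded database, flagging anything that drifted from the installed state.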

Newer tooling like fapolicyd assists with this by preventing execution of a file that doesn’t match a rule. One implementation is to leverage the RPM database so that only files installed through RPM/DNF can be executed. Certain parts of the filesystem where you wouldn’t install files through a package manager (/srv, /tmp, /home) are typically restricted from allowing execution at the filesystem level. I believe that restriction is included in most of the standard benchmarks (CIS, DISA, etc).

Even if pip and similar tools for other languages had the same capabilities, you wouldn’t want multiple sources of truth that you may then have to deconflict.

1 Like

That’s just a local policy you are describing. There are plenty of other valid means of ensuring security and execution policy. It sounds like your security policy is conflicting with productivity, and that nobody is properly advocating for tools that would let you use newer software while still maintaining the necessary security for your environment.

5 Likes

This has been my specialty for over 2 decades. This is pretty textbook. People think it’s ok to take shortcuts, and then they get hacked. And 99% of the time it was preventable if they had adhered to standard security practices.

Out of curiosity, would a tool that takes a pip managed venv environment and stuffs the whole thing into a fat RPM which copies the environment into some /opt prefix do the job? Then RPM would give you all the checksum auditing without your having to create a repository of RPMs for each package. That would also remove the need to trust whoever is creating the existing community RPMs.

Other methods can ensure security without requiring that everything be packaged as an RPM; the existence of those methods does not imply that shortcuts have to be taken.

You can deploy fully immutable distributions, you can audit without RPMs, and there are other tools available. That your environment uses a very specific set of restrictions, and furthermore a very specific set of tools to enforce those restrictions, is at odds with wanting to use newer things not already packaged in that exact way.

2 Likes

I certainly see people do that, but it’s a bad practice. The issue is that if you find a vulnerability in a package, you then have to know that it’s bundled in with other software in one or multiple RPMs.

Yes, immutable systems are a valid approach, but they have their own requirements and aren’t suited to many use cases.

1 Like

That was there as an example of other options, not as an exclusive “try this”.

If RPMs were truly the only way to do this, it would be impossible to audit any non-RPM distribution. This isn’t the case. I have a full SBOM generated for every machine and machine image. Most of those are not currently RPM-based distributions.
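For the Python layer of such an SBOM, the raw inventory can come straight from the standard library; real SBOM tools then emit standard formats such as SPDX or CycloneDX. A minimal sketch:

```python
from importlib import metadata

def python_package_inventory():
    """Return sorted (name, version) pairs for installed distributions.

    This is only the Python-level slice of an SBOM; system packages,
    shared libraries, container layers, etc. need their own inventory
    sources.
    """
    seen = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name")
        if name and dist.version:
            seen[name] = dist.version
    return sorted(seen.items())

for name, version in python_package_inventory():
    print(f"{name}=={version}")
```

The same data is available whether the packages arrived via pip, a curated internal index, or a system package manager, which is why SBOM generation doesn’t depend on everything being an RPM.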

Without getting into operational details at my own job (they would not appreciate that), there are plenty of options available to you for the things you have claimed you are using fapolicyd for, without taking any shortcuts on security and while enabling the use of more libraries without repackaging them as RPMs.

As an aside, some environments would go even further and require you build from source yourself and not use prebuilt community artifacts.

I would advise that you stop calling anything that isn’t your local policy “bad practice”. Anything that can correctly accomplish the security goals of an organization should be acceptable, as long as it is properly implemented and evaluated. Some methods streamline this more than others; some lock you into a very specific path, but work well so long as you stay on that path.

2 Likes

That makes a lot of sense, though in that case it would appear all those projects signed up for that.

This, and some of the other comments, has me thinking about the usability windows. At one extreme is feature-driven, like @elis.byberi not wanting to support anything without free-threading. I’m closer to the other extreme, where I try to support the versions that shipped in Enterprise Linux and only drop support when there is a very compelling reason (can no longer test, doesn’t work, etc). There was (I can’t find it now) someone who only supported versions getting bug fixes. There are those who stick to the 5 year cycle. And there are likely others who fall somewhere in the middle. The Linux kernel tried to address these different cases with interim, LTS, and SLTS releases. Maybe we don’t have to do it the same way. Maybe the Enterprise Linux case could be quasi-supported. The effort to maintain the runtimes already happens in multiple places (Red Hat, Canonical, SUSE). There is likely some community benefit in doing this centrally, with a common voice to communicate to tooling projects about support. We don’t have to call it LTS. We could call it Norwegian Blue (the dead parrot from Monty Python) :slight_smile: Not sure how to start something like that or whether it would be better received, but it’s an idea.

1 Like

It would be better to convert each Python wheel into an RPM and then install all the RPMs separately. That would allow you to reuse RPMs or update individual Python packages independently of others. It’s not that complicated. I wrote an internal hack for research a couple of months ago.

Most Python packages in Fedora use that approach these days. They first create a wheel, then use the wheel’s content, metadata, and the extension module’s ELF metadata to generate RPMs with automatic provides, requirements, and extra packages. Python Packaging Guidelines :: Fedora Docs
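The wheel-to-RPM mapping can be sketched roughly like this. The macro names and the dependency munging are simplified stand-ins for what Fedora’s real generators do (which parse full version specifiers and environment markers from the wheel metadata):

```python
# Hypothetical sketch of mapping wheel metadata onto an RPM spec stub,
# in the spirit of Fedora's automatic python3dist() dependency
# generators. Everything here is simplified for illustration.
def wheel_to_spec(name: str, version: str, requires: list[str]) -> str:
    lines = [
        f"Name:           python3-{name.lower().replace('_', '-')}",
        f"Version:        {version}",
        "Release:        1%{?dist}",
        f"Summary:        Python package {name}",
    ]
    for req in requires:
        # Crude extraction of the bare project name: drop environment
        # markers and version specifiers. Real generators keep and
        # translate the version constraints instead of discarding them.
        dep = req.split(";")[0].split(" ")[0].split(">=")[0].split("==")[0].strip()
        lines.append(f"Requires:       python3dist({dep.lower()})")
    return "\n".join(lines)

print(wheel_to_spec("my_pkg", "1.0", ["requests>=2.0", "idna"]))
```

Because the wheel’s METADATA file already lists the distribution name, version, and Requires-Dist entries in a machine-readable form, this translation can be fully automatic, which is what makes the per-wheel RPM approach tractable.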

It gets more complicated when you have to rebuild all Python wheels from source with system packages. We have created Fromager to build our own wheels from scratch for the Torch stack and several GPUs and AI accelerators.

3 Likes