This year's Kernel Report at the Open Source Summit in Bilbao revealed that the long-term support releases of the Linux kernel will soon not be so long-term after all.
Referring back to the list of stable kernels, which goes back about six years, Linux Weekly News editor Jonathan Corbet said:
When 4.19 reaches its end of life, so will 5.4, leaving the end-of-2020 kernel 5.10 the oldest longterm release. This change was discussed at the time, but now it looks like what is essentially the worst-case outcome has been chosen.
The big enterprise players don't use the existing long-term releases; they maintain their own instead, and, as is now clear in the case of the 800-pound gorilla of the Linux world, keep that kernel's source code reserved for customers only. So the kernel developers are dropping the very considerable maintenance burden of keeping those old releases alive.
For techie types tired of PowerPoint presentations and words like "synergy" and "leverage," Corbet's Kernel Report is one of the highlights of what used to be called LinuxCon.
This year's presentation [PDF] led in gently with some statistics, holding the big reveal about shortening support cycles back until later in the talk. There have been five releases in version 6 so far, all of comparable size, with some 15,000 commits in each. Each point release has seen some 2,000 individual developers contributing, of whom around 250 to 300 are newcomers, Corbet said.
He noted that those are "mainline" releases, the ones Linus Torvalds himself puts out before moving on to the next new version. But, he said, "these are not the kernels that most of us are actually running. Most people are smarter than that and try to run something else; usually, something based off the stable kernel updates."
At present, there are six releases under long-term maintenance: versions 6.1, 5.15, 5.10, 5.4, 4.19, and 4.14 from back in November 2017. The number of commits into each of these long-term kernels increases for each older version, with the oldest having over 300 updates and by now about 28,000 commits.
So which should you choose? Corbet - himself a kernel maintainer - said one answer is simple: as Greg Kroah-Hartman, who manages the stable kernels, has long advised, run the latest stable release.
Corbet went on to point out that the oldest long-term kernel is now approaching two full development cycles' worth of commits: "An awful lot of development is going on after our stable kernel is released." Many of the fixes are for bugs that appeared in earlier kernel releases - his analysis goes back to the beginning of the kernel's Git history, which starts with kernel 2.6.12 in 2005.
And this goes both ways: bugs discovered and fixed in later kernel versions must be backported to the older, still-supported stable releases.
This, he pointed out, is the other way to choose which kernel to run: pick an old, stable kernel, and backport all the fixes that you consider important to it. It's the "enterprise" kernel model, and leads to old kernels with thousands and thousands of fixes applied to them. "We've seen an awful lot of fuss recently about what certain enterprise vendors are doing with regard to access to their code and their distributions, and the fuss around Red Hat Enterprise Linux in general. This here is what that fuss is about," he said.
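Mechanically, that backporting is usually done with git cherry-picks onto the stable branch. The toy repository below (hypothetical branch and file names, not the real kernel tree) sketches the workflow:

```shell
set -e
# Toy sketch of the stable-backport workflow: a fix lands on the
# development branch, then is cherry-picked onto an older stable
# branch with -x, so the original commit ID is recorded in the
# backport's commit message.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email maintainer@example.com
git config user.name  "Stable Maintainer"
git checkout -qb mainline
echo "v1" > driver.c
git add driver.c && git commit -qm "driver: initial code"
git branch stable-y                  # the "stable" series forks here
echo "fixed" > driver.c
git commit -aqm "driver: fix null dereference"
fix_sha=$(git rev-parse HEAD)        # mainline commit ID of the fix
git checkout -q stable-y
git cherry-pick -x "$fix_sha"        # backport the fix to stable
git log -1 --format=%B               # ends "(cherry picked from commit ...)"
```

Multiply that by thousands of fixes per release, and the scale of the enterprise-kernel maintenance effort Corbet describes becomes clear.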
It's a lot of work, and although it applies across the whole distribution, a huge amount of it is at the kernel level. What you're left with are old kernels carrying tens of thousands of fixes, very unlike what was originally released. The result, Corbet said, is that their users are cut off from community support.
In a way, this is the flip side of Red Hat's view, which we described back in June. Red Hat considers that the levels of testing and quality control in its enterprise kernels exceed that of the community ones. As a result, it maintains its own stable versions, and it doesn't use the upstream long-term releases.
That's when Corbet dropped his LTS bombshell.
There was a lot more in the talk, and we recommend watching it in full if you want to know where things are going. He discussed the extended Berkeley Packet Filter (eBPF) subsystem, where it is going, and, importantly, where it isn't - such as vendor-specific kernel schedulers. There was also discussion of security, confidential computing, and more.
The much-vaunted new Rust support, discussed in last year's Kernel Report, was intended to be experimental. Experiments can fail, and failure for the Rust module support would mean that it could be removed from the kernel again. It's already imposing a significant extra load, inasmuch as subsystem maintainers must be able to read code submissions in order to approve them or not, and that means reading Rust code, which Corbet described as sometimes resembling line noise to C programmers.
However, substantial work is already going into Rust kernel modules. Notably, as we described at the end of 2022, the drivers that the Asahi Linux team are working on for Apple's new on-die GPUs are being built in Rust. Corbet noted that the point at which an experiment is considered a success is when the kernel developers merge the first feature that users depend on, and for Rust in the kernel that point is coming very soon. If the Apple Silicon GPU support does get merged, that's a quite significant user-facing function: were Rust support removed again, anyone running Linux on Apple Silicon Macs would lose their graphics drivers. At that point, the Rust support would cease to be experimental.
The closing words were on the subject of the ongoing maintainer crisis. The kernel team is understaffed, and there are no members of the team devoted to documentation, for one critical example. He quoted the former maintainer of the XFS filesystem, Darrick Wong, who stepped down in August:
From this vulture's perspective, it seems like despite the corporate feel of the Open Source Summit these days, with big companies from around the world proudly talking about their use of open source and their large-scale adoption of Linux, the core project behind it all, the kernel itself, is under-resourced and under-funded. If there are around a couple of thousand developers working on any given release of the kernel, and about 10 percent of those are newbies for each point release, that implies that as many again are leaving the project each release, burning out and quitting, or perhaps simply being redeployed to other areas by their employers.
As in so many things, although there is lots of good news, these are worrying times. ®