Packaging Kubernetes for Debian
Linux distributors are in the business of integrating software from multiple sources, packaging the result, and making it available to their users. It has long been true that some projects are easier to package than others. The Debian technical committee (TC) is currently being asked to make a decision in a dispute over how an especially hard-to-package project — Kubernetes — should be handled. Regardless of the eventual outcome, this disagreement clearly shows how the packaging model used by Linux distributors is increasingly mismatched to how software is often developed in the 2020s; what should replace that model is rather less clear, though.
A longstanding rule followed by most distributors is that there should be only one copy of any given library (or other dependency) in the system, and that said copy should usually be in its own package. To do otherwise would bloat the system and complicate the task of keeping things secure. As an extreme example, consider what would happen if every program carried its own copy of the C library in its package. Those thousands of copies would consume vast amounts of both storage space and memory. If a security vulnerability were found in that library, thousands of packages would have to be updated to fix it everywhere. A single library package shared by all users, instead, is more efficient and far easier to maintain.
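As a quick illustration of that shared-library arrangement (a sketch; exact paths vary by architecture and release): on a Debian amd64 system, nearly every dynamically linked program resolves to the same single copy of the C library, owned by one package:

    $ ldd /bin/ls | grep libc.so
            libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
    $ dpkg -S /lib/x86_64-linux-gnu/libc.so.6
    libc6:amd64: /lib/x86_64-linux-gnu/libc.so.6

A vulnerability fixed in libc6 is therefore fixed once, for every program on the system.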
This rule is thus contrary to the practice of stuffing dependent libraries into the package of a program that needs them — a practice often called "vendoring". Living up to this rule can be challenging, though, with many modern projects, which also often engage in a fair amount of vendoring. Projects written in certain languages appear to be especially prone to this sort of behavior; the Go language, for example, seems to encourage vendoring.
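For readers who have not seen it, Go vendoring in practice looks roughly like this (a minimal sketch; the module path and dependency are illustrative, not taken from the Kubernetes tree). A project declares its dependencies in go.mod, and one command copies their full source into the project:

    // go.mod for a hypothetical project
    module example.com/hello

    go 1.15

    require github.com/spf13/pflag v1.0.5

    $ go mod vendor            # copies the source of every dependency into ./vendor/
    $ go build -mod=vendor     # builds against only those vendored copies

The vendor/ directory is then typically committed to the project's repository, which is exactly the practice that distribution policies discourage.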
Kubernetes is written in Go, and it carries a long list of dependencies with it. It was maintained in Debian for a while by Dmitry Smirnov, but he orphaned Kubernetes in 2018, stating that packaging it is "a full time job, probably for more than one person". The Kubernetes package was eventually picked up by Janos Lenart, who has been supplying updated versions to the Debian Testing repository.
Kubernetes vendoring considered harmful
Back in March, though, Smirnov made it clear that he was far from happy with how Lenart has approached the task of packaging Kubernetes. Rather than work to build Kubernetes with independently packaged libraries in the Debian repository, Lenart has chosen to vendor those libraries into the Kubernetes package directly. The Kubernetes 1.19.3 package contains over 200 of these libraries; the directory of applicable licenses alone contains 3MB of text. A README file added by Lenart notes that this approach may not suit everybody.
Smirnov denied being a purist, but was clearly upset about what had been done to the package he once maintained. It is, in his mind, a violation of Debian's policies. What, he asked, can be done in a situation like this?
The resulting discussion was lengthy and often heated, as one might expect. This being Debian, the developers devoted a long subthread to the question of whether Debian developers really have to verify the licenses for every vendored dependency (there was no definitive answer to that question). The reasons behind Debian's policies and the degree to which they make sense when applied to a project like Kubernetes were explored, also without any real conclusions.
Lenart posted exactly one message to the thread, defending the changes to how Kubernetes is packaged. There are other packages in Debian with vendored dependencies, though none, he acknowledged, have anywhere near the 200 found in his Kubernetes package. Independently packaging hundreds of dependencies is not feasible, he said; Smirnov's attempts to do so have a lot to do with why most Kubernetes releases never made it into Debian. Even if that effort were to succeed, Debian's package would not use the versions of the libraries tested by the Kubernetes developers and would thus essentially be a fork that "no sane cluster admin would dare to use". With that many separate libraries, it would never be possible to get security updates out in a timely manner. Go binaries are statically linked, so the resource-consumption benefits of shared libraries are not available in any case. And so on.
Smirnov, unsurprisingly, was not impressed with this list of justifications, and put some effort into casting Lenart as being too inexperienced to manage a package like Kubernetes. Many others argued for or against specific points until the conversation eventually wound down with nobody seemingly having budged from their initial positions.
To the technical committee
The topic then went quiet — on the public lists, at least — until the beginning of October, when Smirnov took the issue to the TC for resolution. The Debian TC exists to make decisions on technical disputes that Debian developers are unable to resolve on their own; it was this committee, for example, that finally answered the question of whether Debian would move to systemd or not. Now the TC is being asked to decide whether the level of vendoring seen in the Kubernetes package is acceptable.
There has been little public discussion since this request was filed, but a couple of interesting things have come out anyway. One was this message from Shengjing Zhu noting that Kubernetes, too, is a library that is depended upon by other packages. But Kubernetes is not packaged in a way that allows others to use it; doing so, Zhu said, would require decoupling all of its own vendored dependencies. Without that, every package that needs the Kubernetes library must vendor its own copy of Kubernetes, which does not seem like a rational path.
As part of the TC's deliberation, Sean Whitton asked the Debian security team about the security implications of that level of vendoring. Since security is one of the primary arguments against vendoring, one might expect the security team to dislike the idea; the actual response from Moritz Mühlenhoff was somewhat more nuanced than that. Supporting Kubernetes in a stable release is difficult in the best of situations, he said, because upstream only supports specific releases for one year, "and it would be presumptuous to pretend that we can seriously commit to fix security issues in k8s for longer than upstream". Given that, there are two options that Debian could consider for this package.
The first of those options would be to just not ship Kubernetes in a Debian stable release at all. Debian users would then obtain it either from the Testing repository (which does not receive security support) or from outside of Debian entirely. The alternative is to just update Kubernetes wholesale whenever a security problem is disclosed and upstream is no longer supporting the version shipped by Debian. That is an unusual practice for Debian, he allowed, but Kubernetes users are already used to it.
Crucially, he said that if Debian ships Kubernetes in a stable release (and thus goes with the second option above), vendoring the dependencies as is being done currently is the only realistic option. Otherwise, the chances of a newer Kubernetes release working with the older versions of its dependencies shipped by Debian are small at best. Rather than impeding the security effort in this case, vendored dependencies appear to be the only way that the Debian security team could support Kubernetes at all.
In the end, the options listed by Mühlenhoff are probably the only ones available to the TC. The committee could try to mandate that the Kubernetes package be managed like others, with few (if any) vendored dependencies, but it has no authority to order any developer to actually do the work to make that happen. So such a mandate is highly likely to be equivalent to saying that Debian does not ship Kubernetes at all.
Not just Kubernetes
The TC has not given any indication of when it will make a decision on this issue. Regardless of the outcome, though, this issue is one that is likely to come up again. There is a small but growing set of free-software projects that are simply too unwieldy for most distributors to handle on their own. Beyond Kubernetes, web browsers clearly fall into this category. Distributors have generally given up on trying to backport patches to older browser releases; they just move their users forward to new releases when they happen. The resources to do things any other way just do not exist.
The kernel might in some ways be the original example of this kind of package, but with some interesting differences. The kernel, too, is a huge and fast-moving project; most distributors have no hope of trying to maintain an older release on their own. The distributors that do maintain such versions — in "enterprise" distributions usually — dedicate massive resources to keeping those kernels working and secure. Others depend heavily on the fact that the kernel project itself is now maintaining releases for several years; the 4.4 kernel has received 241 updates (at last count) with 16,422 patches. Debian is an interesting exception in that it does maintain old kernels for a long time, but that support, too, benefits from the kernel's long-term support work. In the absence of that support, most distributors would have to choose between not even pretending to keep their kernels maintained (a favorite choice of embedded vendors) or upgrading users to current releases.
The kernel, at least, is self-contained; most projects of any size accumulate dependencies quickly, and many current programming environments encourage tying dependencies to specific versions of libraries — through a relative lack of concern about ABI compatibility if nothing else. Such applications will be painful to package; Kristoffer Grönlund's 2017 linux.conf.au talk on the subject is still highly relevant.
In other words, the Linux distribution model that was first worked out in the 1990s is increasingly unsuited to the way software is developed in the 2020s. Distributors understand that and are investigating ways to stay relevant, including new package-management techniques, immutable distributions, and more. Preserving the best of what distributions have to offer while taking advantage of the best of what the software-development community has to offer will prove challenging for some time. It is, as some might put it, a high-quality problem to have, but that doesn't make it easy to solve.
Posted Oct 30, 2020 18:07 UTC (Fri)
by gwolf (subscriber, #14632)
[Link]
Posted Oct 30, 2020 18:42 UTC (Fri)
by smoogen (subscriber, #97)
[Link]
I think there is also a bit of 'distributions aren't sexy any more than sanitation engineering, and I really don't care about working in that sewer' from various ecosystems, developers, and even many power-users. [I think this is a sinusoidal wave.. for any software thing there is a timeframe where it is the THING everyone wants to get into: operating systems, window managers, (no)sql databases, blockchain, etc. Everyone dives in and comes up with their version or has to try out things etc.. then there seems to be a long counter-trough time where that thing is dead to everyone.. eventually coming back in a slightly different form. You can usually tell there is a different form because people would comment things like "you" are reinventing something from the IBM360, emacs, or Usenet. I expect in 2032, it will be "you are reinventing Debian packaging rules."]
Posted Oct 30, 2020 21:04 UTC (Fri)
by dxin (guest, #136611)
[Link]
Most Android apps have massive maintainer and developer teams compared with the distro they run on (e.g. the vendor's Android), so not only could vendoring let the app teams deliver dependency updates more frequently than the distro could, they also have the resources to ensure their app works well with certain versions of dependencies. Hence Android went with full vendoring, a decision that both mitigated and encouraged the infrequent delivery of distro updates to users.
For Debian and most of its packages, these conditions may not be true. E.g. apps are updated a lot less frequently in Debian than on Android, but users receive new Debian releases much more easily. Also, most Debian apps care less about breakages caused by dependency versions. In this case shared dependencies are indeed favorable.
Hence "Software written in 2020" mostly means "software written by well-funded teams that gets updated frequently and has zero tolerance for breakages caused by dependency versions". The conflict again boils down to a "volunteer vs big corp" one. I think Debian people need to realize that k8s's dependencies are better handled by k8s, simply because they have the manpower to do so.
Posted Oct 30, 2020 21:07 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
On the other hand, Go build is completely deterministic. You'll get a bit-perfect executable if you use the same version of modules (and the compiler, of course). This in itself is a pretty big deal for reproducible builds.
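The mechanism behind that guarantee is the go.sum file, which records a cryptographic hash for both the contents and the go.mod of every module used in the build; the entries below show the shape of the file, with the hashes replaced by placeholders:

    github.com/spf13/pflag v1.0.5 h1:<base64 hash of the module tree>=
    github.com/spf13/pflag v1.0.5/go.mod h1:<base64 hash of its go.mod>=

If a downloaded module does not match the recorded hash, the build fails rather than silently using different code.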
Posted Oct 31, 2020 17:16 UTC (Sat)
by jfred (subscriber, #126493)
[Link]
Distros like Guix have good reasons to want all packages to be packaged in Guix itself; for one, it allows them to more tightly constrain the build environment, which goes a long way towards ensuring reproducibility. Packages can't reach out to the internet during builds for example, which eliminates a big potential source of nondeterminism.
But that said... the Go package model just does not match up cleanly with the distro package model. As you mention, Go keeps the hashes of dependencies around, and there are quite a few dependencies - enough to make manually packaging all of them, with all the different versions required by different applications, a gargantuan task.
Go builds (unlike those in many other languages) are reproducible on their own, even without sophisticated package managers. And even if Debian starts packaging individual Go libraries, Go developers aren't likely to start using the versions shipped with Debian - it goes against the grain of what their language tooling expects. I'm inclined to think that distros in this particular case should step aside and let the language tooling do its job; or possibly, in Guix's case, to more tightly integrate with the language tooling, because Go's build system has a lot of the same goals that Guix's does. Distro packagers will simply not be able to keep up otherwise for any package with a nontrivial number of dependencies, and that's a lot of them these days.
Posted Nov 1, 2020 14:15 UTC (Sun)
by emorrp1 (guest, #99512)
[Link] (1 responses)
https://release.debian.org/doc/britney/solutions-to-commo...
Posted Nov 4, 2020 18:34 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
On the other hand, its ReactJS web-frontend that basically shows Google Maps and a couple of forms has 2120 dependencies. Simply building all of them as DEB files would take a LONG time, never mind doing the dependency resolution.
Posted Nov 4, 2020 18:28 UTC (Wed)
by jelmer (guest, #40812)
[Link]
The current overhead for creating a new package is significant if you have to manually package a lot of dependencies (such as for Kubernetes). Debian has certain assumptions about how much human-customized packaging overhead each package needs, and in the past that overhead was acceptable - shared libraries in e.g. the C world tend to be larger and less predictable.
For many of the newer ecosystems (go, rust, haskell, node), packages could be generated with a lot less overhead. The approach taken by the rust team with debcargo is promising - the overhead for adding an extra package is minimal, and makes it easy to add many more packages. See for example https://salsa.debian.org/rust-team/debcargo-conf/-/tree/m...
Posted Oct 30, 2020 21:35 UTC (Fri)
by khim (subscriber, #9252)
[Link] (14 responses)
The whole thing boils down to one simple fact: Linux distributions are not really operating systems. What is an operating system? Well, Wikipedia says it's system software that manages computer hardware, software resources, and provides common services for computer programs… and Linux distributions are not doing that. There is no SDK, there is no way for the developer of a computer program to deliver their application to users… not even Apple (which is known for its strict and heavy-handed treatment of developers) demands that they cede that much control.
And that super-brand-new “software developed in the 2020s”… is not any different from Turbo Pascal or MS Office developed a quarter-century ago. The only difference: today open source has become popular enough that software which was traditionally proprietary is now open-sourced. You can find the source on GitHub or maybe some other such site. But that's it. Its developers are not embracing the ideals of free software and are not looking for a way to integrate with distributions. On the contrary: they just seek a way to deliver their goods to users.
Maybe it's time to accept and acknowledge that and stop trying to pull all these packages into the distribution? But provide a way to install upstream versions? Something like these “Shops” other OSes have? Why does Kubernetes even have to be in Debian? What's the end goal? What is this “packaging”, which just takes a piece of software without any attempt to alter it, supposed to achieve?
Posted Oct 30, 2020 23:06 UTC (Fri)
by pollo (subscriber, #122775)
[Link] (12 responses)
It provides:
* security -- Debian developers are supposed to read and review the code they package
* convenience -- a single package manager for all the software on your system, automated updates, etc.
* stability -- Debian developers work hard to try to make Debian a cohesive environment
As a sysadmin, I can tell you all of this has a lot of value.
Oh, and it's false to say Debian developers don't alter software. They often write patches to fix specific problems and most of the time these patches end up being merged upstream.
Posted Oct 30, 2020 23:42 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (9 responses)
At the end of the day, do the users care? Do the *users* trust Debian, or do they trust Kubernetes, and which do they trust more?
If the *users* are simply using Debian as a platform to run Kubernetes, then all these "advantages" are pretty irrelevant - all the users need is for Kubernetes to install and run successfully. Which is best achieved by Debian and Kubernetes working together with, hopefully, a couple of primarily Kubernetes devs also being Debian Developers tasked with making Debian a good platform to run Kubernetes on.
Yes I know Debian (or any other distro) needs to make sure all these big apps run successfully on Debian, but if they've got plenty of developers you're better off working with said devs to make sure the package installs successfully on Debian, rather than expending a lot of resources you haven't really got to make the package actually part of Debian.
Cheers,
Posted Oct 31, 2020 0:58 UTC (Sat)
by dskoll (subscriber, #1630)
[Link] (8 responses)
I certainly prefer my software to be packaged by Debian, or at least as .debs with a public repo. If we're talking one or a handful of machines, it's no big deal. But if you administer hundreds or thousands of machines, using the package manager to handle updates is hugely worthwhile.
I have a lot of trust that Debian upgrades won't break my system; this trust is in place because historically, very few Debian upgrades have been problematic. Having the Debian maintainers develop and test updates, that I can then confidently and easily deploy to many machines, is fantastic.
Back when I ran a software company, we put in the effort to make .debs and .rpms of our software (it was proprietary software, so would never be packaged by a Linux distro.) But making the effort to adapt our software to the distros we supported was very beneficial because it made our upgrades very easy.
Unfortunately, few upstream developers are willing to put in the time to make .debs, leaving that task to Debian maintainers.
Posted Nov 1, 2020 12:15 UTC (Sun)
by khim (subscriber, #9252)
[Link] (6 responses)
Having one repository with all the relevant applications is nice, I agree. But you don't need to have these built and packaged by one central team. “Shops” which I mentioned work just fine for that.
As for “upstream developers” unwillingness to create .debs… nothing has changed there in the last quarter-century. “Upstream developers” want to build one package and distribute one package. Anything else just doesn't make any sense from a support POV.
If Linux distributions are unwilling to provide a way for that to happen… developers would cobble together some kind of .sh script. Or some kind of installer. Or would solve the problem in some other way. Some solutions would even provide .debs or .rpms — but they would use the same binaries as the “main” tar or sh. And yes, I agree: it's false to say Debian developers don't alter software… but then the situation is even worse: now developers need to deal not just with some unfamiliar (to them) environment, they even have to support code which they haven't written and don't even know about! No wonder that makes them unhappy.
Posted Nov 1, 2020 15:18 UTC (Sun)
by rgmoore (✭ supporter ✭, #75)
[Link] (5 responses)
Shops work OK at putting all the software in one place in an easy to install format. Experience says they do a very poor job of quality control and security updating, at least without effort on the part of the shopkeeper on par with what distributions currently spend on packaging. Even shops run by big companies like Apple and Google can't keep out all the malware and just plain insecure software.
IMO, this is the point that gets ignored in these discussions: the biggest effort distributions spend is not really in packaging, it's in various forms of quality control. It may make sense to skip some of that quality control for the biggest, best maintained packages, like web browsers and databases, which are backed by the kind of big commercial team that can spend more effort on QA/QC than the distro can, but it won't work so well to skip that effort for smaller packages that can't.
Posted Nov 10, 2020 11:04 UTC (Tue)
by anton (subscriber, #25547)
[Link] (4 responses)
For some packages, the contribution by Debian is negative. E.g., for many years now, I have needed to build xpdf from source on every Debian/Ubuntu system because the Debian-crippled version only prints on letter paper (which we don't have around here); all configuration options for A4 paper (both the upstream xpdf ones as well as the Debian ones) are ignored.
Maybe the overall contribution by Debian is positive, but it seems to me that some Debian people are too convinced of their importance, and would do better by just packaging the upstream with as few changes as possible.
Posted Nov 12, 2020 0:18 UTC (Thu)
by foom (subscriber, #14868)
[Link] (2 responses)
On the other hand, the upstream version of xpdf inexplicably refuses to let you copy text from certain PDFs, and upstream refuses to fix this. The debian version has fixed that bug.
Posted Nov 12, 2020 9:14 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
I guess Debian honour those flags to avoid potential legal liability. Sounds pretty typical for them.
Cheers,
Posted Nov 12, 2020 9:55 UTC (Thu)
by anton (subscriber, #25547)
[Link]
xpdf does not allow to copy text from PDFs that have been marked accordingly, and that was documented (I don't find that documentation at the moment, though). So it's a misfeature, not a bug; however, the documentation explained that it is easy to change xpdf to ignore the flag, so if that's what Debian's maintainers want to do, I don't think they need to ignore all sources of the desired paper size to do it.
Posted Jan 10, 2021 1:32 UTC (Sun)
by debacle (subscriber, #7114)
[Link]
Maybe you should reopen it, if setting of /etc/papersize to "a4" does not work on your system.
I just tried to print an A4 PDF into a PS file using xpdf 3.04-14 on Debian testing and the result is A4.
Posted Nov 2, 2020 11:48 UTC (Mon)
by LtWorf (subscriber, #124958)
[Link]
Packaging it would mean giving false expectations.
I would consider just placing it in contrib so it's not available by default. Maybe even make one of those fake packages that just downloads the binary from upstream, like they do for the microsoft fonts (because there is no license to redistribute them).
Posted Oct 31, 2020 9:17 UTC (Sat)
by lobachevsky (subscriber, #121871)
[Link]
systemd is patched quite heavily in Debian and the man pages are not fixed accordingly, so some stuff just silently doesn't work (mostly around localectl). Don't get me started on polkit, which is basically a fork of upstream polkit, because the maintainers just don't like the way polkit is configured since polkit 106.
At some point Debian will have to think about which things it deems absolutely necessary, because changes to upstream software and its own policies that were once sensible will be just dead weight making work on things people actually want to work on harder. There is a danger of newcomers taking their contributions elsewhere (directly upstream or to distros that are closer to upstream) and the project may die a slow death of attrition.
Posted Oct 31, 2020 9:44 UTC (Sat)
by ibukanov (subscriber, #3942)
[Link]
As for security, auditing sources for anything sufficiently complex is beyond the capabilities of Debian or any other Linux distribution. I much more appreciate Debian's efforts toward reproducible builds, as that allows more people to contribute to auditing.
The stability argument is a good one. For example, I appreciate the stability that comes from WordPress packaged in Debian. But I do not see how this is much relevant to vendoring.
Posted Oct 31, 2020 0:49 UTC (Sat)
by flussence (guest, #85566)
[Link]
Reduction of curl pipe sudo bash.
Posted Oct 30, 2020 22:31 UTC (Fri)
by ballombe (subscriber, #9523)
[Link] (10 responses)
Is software so bloated and insecure that it cannot be supported for more than a year really the best of what the software-development community has to offer?
Why not keep that label for software that deserves it?
Posted Oct 30, 2020 23:20 UTC (Fri)
by Paf (subscriber, #91811)
[Link] (8 responses)
It might just be possible there is a reason that doesn’t involve the new software being crap. It might just be optimized differently.
Posted Oct 31, 2020 10:51 UTC (Sat)
by misc (guest, #73730)
[Link] (7 responses)
Posted Oct 31, 2020 20:31 UTC (Sat)
by jwarnica (subscriber, #27492)
[Link] (6 responses)
If upgrades are small, safe, and frequent there is no reason not to do upgrades at that rapid pace. System administration is no longer
"Vendor support" thus becomes "making *our* point releases seamlessly upgrade to *our* point releases in a specific path" not "supporting systems for ever". And k8s, even outside a commercial vendor, is very much focused on that theory of operation, allowing for zero downtime rolling updates across a cluster.
Actually, I'm not sure how k8s could manage a rolling upgrade if it was triggered from an OS level package manager.
Posted Nov 1, 2020 15:43 UTC (Sun)
by farnz (subscriber, #17727)
[Link] (5 responses)
From direct personal experience (moving the system my team at work is responsible for from monthly to daily deployment), the advantage of rapid deployment is that there's simply less for the triage team to think about when a user complains about a change, because instead of having to be aware of ~1.5 months worth of changes, you have to be aware of around a week's worth of changes.
The rate of breakage is about the same as it was with monthly deployment, it's just that there's a small amount broken and in need of fixing each day, instead of a huge amount broken each month.
Posted Nov 2, 2020 20:49 UTC (Mon)
by tpo (subscriber, #25713)
[Link] (1 responses)
There is another advantage: when a user comes up with a bug or problem, the first reply (which can be trivially automated == no human resources necessary, yay!) can always be: do you have the latest version from a very few days ago?
Now the user has to put in effort to update. Once he has managed to go through the effort, which usually carries a non-ignorable cost, the latest release has moved on. Repeat.
At some point the user gives up and accepts the bug or plasters some workaround over it ("reboot machine", "reinstall", "reinstantiate container" ...) and you can close the bug with "no feedback".
Of course I am being sarcastic here, but exactly this is happening in reality in many a project I've encountered.
A related trendy flavor of this "I the developer am the center of the universe" approach is "there was no activity on this ticket in x days, closing".
That perspective stinks. I'm no exception to it.
Posted Nov 3, 2020 10:41 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
Perhaps paradoxically, but we tell users asking for support to update much more rarely now that we're deploying daily than we did when we deployed monthly.
We've always used "are you up to date" as a way of dealing with issues that we are confident were fixed in a recent release; with daily deployment, our window for "recent release" has shrunk from 6 months to a week (something about the human brain seems to have us remembering what happened in the last 5 to 7 releases, and no more), which means that problems are much more likely to get investigated than they used to be, because there's no incorrectly remembered "recent" release with a fix for a similar problem.
Plus, because it's a rapid turnaround, we're happier to do a partial fix that solves the problem for you, and then use our time to fix up the resulting technical debt. In the past, because we had a long turnaround time, we'd prefer to avoid the technical debt at all - if it's going to take a month for the fix to get to you, we might as well aim for perfect first attempt.
Posted Nov 5, 2020 1:33 UTC (Thu)
by NYKevin (subscriber, #129325)
[Link] (2 responses)
This is less effective when you are rolling out to devices you don't control, because you'll have to tell the user "Hey, um, actually, we think you should go back to the old version for now" and users don't like that kind of churn. But it's great for servers, which is where k8s is supposed to run.
Posted Nov 5, 2020 5:49 UTC (Thu)
by jwarnica (subscriber, #27492)
[Link]
Yeah, there is some absolutely pure sense of Big O good and bad.
But most software is not built to run on a system we are building down to a cost of melted sand, or to where it is unchanging in its exactness hoping for a human to override.
Posted Nov 5, 2020 10:01 UTC (Thu)
by farnz (subscriber, #17727)
[Link]
Yeah - we do rolling deploys of our server updates, so at any time, some servers are on today's code, and others are on yesterday's code. If today's code is bad (manually noticed, or detected by automation), it gets rolled back and ignored, ready for the new version that drops later.
This does mean that we have to be careful to make upgrades easy and roll-back safe within limits, but the frequency of updates means this isn't an issue because anyone who forgets this gets burnt regularly until they learn.
Posted Nov 1, 2020 15:00 UTC (Sun)
by emorrp1 (guest, #99512)
[Link]
When ownCloud dropped out of Debian, it was an indication to me that the software wasn't yet stable enough to be relied upon. The same thing happened with VirtualBox stopping targeted security support unless you upgraded the entire thing - that led me to discover that virt-manager was a suitable replacement. Jenkins considers 3 months as LTS!
I am very glad for the growing number of ways that Debian provides to get software that compromises on some specific policies, while maintaining overall quality and making it obvious what is different about them (and I'm sure I'm missing some):
* contrib for FOSS that still requires something proprietary to make sense
Posted Oct 31, 2020 7:56 UTC (Sat)
by geuder (subscriber, #62854)
[Link] (24 responses)
From reading the article I understand there are 2 orthogonal problems
* there are 200 dependencies
The 200 dependencies are a problem as such. Debian packaging is an
It goes beyond my imagination why one would need 200 dependencies, but
When it comes to the linking of these 201 packages, with dynamic
Except that it depends how paranoid you are when building the
OBS on the other hand rebuilds every time. However, after having
Well easily, but who pays for the infrastructure? With shared
While Debian is sometimes somewhat too slow moving for my taste, is
Not something the TC can solve. Or after the free software movement we
Posted Oct 31, 2020 10:19 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (2 responses)
Unfortunately Firefox today tells us that memory IS a problem.
"Only 4 gigs? You need to upgrade your ram!" except that pretty much all today's cheap laptops come with 4GB soldered and no expansion slot :-(
Cheers,
Posted Oct 31, 2020 12:31 UTC (Sat)
by geuder (subscriber, #62854)
[Link] (1 responses)
But seriously, I do use Firefox occasionally on a 4-5 year old 2 GB laptop. No problems as long as you understand not to keep more tabs open than what you're using right now.
Obviously it's a bit slower than on a more powerful machine. But still fully usable. Maybe you can find some page on the net where it chokes, but I don't remember hitting one.
Posted Oct 31, 2020 12:38 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
But seeing as I've never had a machine without a DVD drive before (well, I have, but that was before DVDs :-), I don't fancy re-installing Windows which is the recommended fix for a slow PC. And pretty much all the recommendations for a slow Firefox are to upgrade your ram ...
Cheers,
Posted Oct 31, 2020 10:23 UTC (Sat)
by cyphar (subscriber, #110703)
[Link] (14 responses)
In openSUSE with OBS we don't package Go programs this way, we just use what upstream vendored or, if they use go.mod, we have an OBS service which can fetch the modules before build (these downloaded modules are not packaged). In either case, while OBS in theory could make managing this really nice, we don't use it that way. There are a couple of reasons why, which all boil down to how libraries are practically used in Go:
* Go libraries are commonly (though this is no longer as common as it used to be) imported such that they are pinned by commit ID. Therefore, different projects are unlikely to use the same version, meaning that you would need to package the library multiple times or otherwise maintain several versions. This is a problem you would immediately hit because all of the golang.org/x/... libraries do not have version tags at all. (OBS can and does keep around old versions of packages for stable distributions but not for devel projects or Tumbleweed -- also you would need to be able to handle packaging a new project which uses a yet-unpackaged-but-older version of a commonly-used library.)
* Because of widespread vendoring, Go libraries aren't quite as strict about API compatibility as C libraries. This means that you really can't swap out dependencies from under the project -- so the obvious solution to the previous point won't work. (Yes, go.mod and semver do in theory eliminate this problem, but in my experience this is still a very common issue.)
* Libraries may have buildtags (used to determine whether to build a file or not; they can be supplied at build time) which a downstream project sets based on some other configuration or feature detection. This means that for a single library you may have to deal with the power set of all possible buildtag combinations which could be set -- meaning you either have to build all of them or otherwise handle this situation.
* Go (as a language) doesn't really support compiling libraries separately from the downstream project which depends on them. You can do it, but the Go folks say it's not a supported setup -- especially in the context of a distribution.
Note this is not an issue that only affects Go; most modern languages (as well as oldies like Perl and Python) have their own package managers and package ecosystems. When trying to package such programs into a distribution, distributions are kind of stuck. What we have historically done is (in an automated fashion) wrap every language package needed by a project into a corresponding distribution package and hope that there aren't too many to deal with. Go managed to break this model by not having packages or package registries at all and instead relying entirely on import paths. So we ended up just using vendoring (which is what upstreams also used). Debian bucked the trend and tried to "un-vendor" all Go projects, but the net result is a massive amount of extra work done by one distribution without any other distributions helping out.
Personally I think this situation would be much nicer if distributions and languages had some way to merge their respective package ecosystems into a single package namespace, which would permit distributions to directly reference the native language packages -- making it easier for developers to have their software packaged and easier for distributions to keep track of security issues and maintenance.
But naturally -- because we're talking about free software ecosystems -- this isn't going to happen because it appears that we all deep down thrive on having fragmented communities.
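To make the buildtag point above concrete, here is a sketch using the pre-Go-1.17 constraint syntax that was current at the time (the file names, tag, and package are invented):

    // seccomp_on.go -- compiled only when both constraints hold
    // +build linux,seccomp

    package sandbox

    const seccompSupport = true

    // seccomp_off.go -- the complement: compiled when either constraint fails
    // +build !linux !seccomp

    package sandbox

    const seccompSupport = false

    $ go build -tags seccomp ./...   # the downstream project picks the tag set at build time

Since the tag set is chosen by whoever builds the final binary, a distribution cannot pre-build one canonical copy of such a library; it would need one per tag combination actually used.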
Posted Oct 31, 2020 12:19 UTC (Sat)
by geuder (subscriber, #62854)
[Link] (3 responses)
I know, I had this in mind, but then my comment already got too long. So I skipped it. It would be another argument against ever growing bloat and workload. You need to keep several versions around and patch them in case of vulnerabilities.
It seems to be a development we should not participate in.
Posted Oct 31, 2020 14:25 UTC (Sat)
by cyphar (subscriber, #110703)
[Link] (2 responses)
Posted Oct 31, 2020 15:21 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Maybe someday we'll have the ability to use Rust APIs across a shared library boundary (sure, you can technically do it today, but limiting yourself to the C ABI for inter-library calls and compiler checks is quite restrictive in the Rust world).
Posted Oct 31, 2020 15:45 UTC (Sat)
by cyphar (subscriber, #110703)
[Link]
Yeah that is definitely true for Rust -- and I would argue this is because Rust grew the idea of strong versioning very early on (before Go modules you could argue that this wasn't a first-class feature in Go for most of its lifetime). In theory Go modules might improve this situation, but to be honest I'm fairly skeptical that this will change enough Go projects that it will allow us to change how we would package the majority of them.
> Maybe someday we'll have the ability to use Rust APIs across a shared library boundary (sure, you can technically do it today, but limiting yourself to the C ABI for inter-library calls and compiler checks is quite restrictive in the Rust world).
This would make packaging more efficient in practice (only compile libraries once and reuse the compiled files as much as possible) -- and would be great to see -- but in principle the organisation of packages doesn't need to mean that library packages are actually distributed as compiled code. If you have a package for each (library) crate dependency which is used by a binary crate you can still do the actual compilation in the final binary. In this instance the issue is more about being able to represent crate dependencies using the distribution's method of representing such dependencies (BuildRequires in RPM).
Posted Oct 31, 2020 14:42 UTC (Sat)
by Conan_Kudo (subscriber, #103240)
[Link] (1 responses)
Well, the reason openSUSE doesn't have Go dependencies de-vendored like both Debian and Fedora do is because openSUSE can't handle it. There are only 10 people who can review packages to merge into openSUSE Factory, and 7 of those 10 people have done zero reviews in the past three years. The effort to de-vendor Rust in openSUSE stalled on that fact alone, too. The openSUSE review process does not scale like the Fedora package review process does.
Fedora has been de-vendoring Go dependencies for years now, and we have a high level of reuse across applications using Go dependencies. We did not do it for Kubernetes (or Docker/Moby for that matter) because historically both projects forked and modified their dependencies when they vendored them. In this case, we declare them accordingly as bundled libraries, audit the licensing accordingly, and move on.
The transition to Go modules is actually expected to improve things even more for de-vendoring, since Go modules are required to establish an API contract with their versioning. This allows us to more freely upgrade dependencies with confidence. We have tooling to trigger rebuild chains as needed, even though I would like OBS-like auto-rebuild magic to make it even simpler.
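The contract referred to here is Go's "semantic import versioning": an incompatible change requires a new major version, and that major version becomes part of the module path itself, so old and new APIs can even coexist in one build. A hypothetical example:

    // go.mod of a library making an incompatible v2 release
    module example.com/widget/v2

    go 1.15

    // consumers opt in explicitly:
    //     import "example.com/widget/v2/render"
    // code still importing "example.com/widget/render" keeps getting v1.x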
Posted Oct 31, 2020 15:48 UTC (Sat)
by cyphar (subscriber, #110703)
[Link]
> The transition to Go modules is actually expected to improve things even more for de-vendoring, since Go modules are required to establish an API contract with their versioning.
That is the idea (and a laudable one), though to be honest I'm a little pessimistic that much will change for the vast majority of projects.
Posted Nov 2, 2020 15:05 UTC (Mon)
by LtWorf (subscriber, #124958)
[Link] (4 responses)
It is a language developed at Google to run on their own internal systems. When they made it they just wanted a binary blob, so that whatever distributions they might use internally that month, it would just work.
The idea was never to make Go programs for others.
Posted Nov 2, 2020 16:58 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
The static compilation has always been a welcome feature. I remember that I was introduced to Go by HPC (High Performance Computation) folks who really LOVED that they could compile a binary locally and run it on a cluster, without caring for mismatching glibc versions.
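A sketch of that workflow (the host name is invented): with cgo disabled, the toolchain produces a binary with no libc dependency at all, so it runs on a cluster node regardless of which glibc is installed there:

    $ CGO_ENABLED=0 go build -o simulate .
    $ scp simulate cluster-node:/tmp/ && ssh cluster-node /tmp/simulate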
Posted Nov 2, 2020 20:14 UTC (Mon)
by smurf (subscriber, #17840)
[Link] (1 responses)
"It's a general purpose language" and "it's not designed for distribution" is not a contradiction.
Go binaries can be distributed like hell. Go sources? Not so much, esp. if by "distributed" one means "included in a distribution". You basically need network access, github (and whatever else the code in question and/or any of its dependencies want) needs to be up and reachable, and so on.
If you plan to work on a Go program offline, you basically need to start the compiler once, before disconnecting from the net, so it can cache all that stuff.
And that's the polar opposite of what a distribution is trying to accomplish.
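In 2020-era Go the caching step described above can at least be made explicit rather than a side effect of the first build; a minimal sketch:

    $ go mod download             # while online: fetch every module listed in go.mod into the local cache
    $ GOPROXY=off go build ./...  # later, offline: build from the cache, failing fast if anything is missing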
Posted Nov 3, 2020 0:43 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
But even module-based Go doesn't need Github access, it needs a way to resolve packages and you can use one of the many repo managers for that (like Artifactory).
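For example (the proxy URL here is hypothetical), pointing the toolchain at an internal module mirror is a one-variable change, with ",direct" as a fallback to ordinary fetches:

    $ export GOPROXY=https://modules.internal.example,direct
    $ go build ./...    # modules are now fetched from the mirror rather than from github.com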
Posted Nov 13, 2020 3:19 UTC (Fri)
by nivedita76 (guest, #121790)
[Link]
Posted Nov 6, 2020 5:46 UTC (Fri)
by jccleaver (subscriber, #127418)
[Link] (2 responses)
I mean, isn't that the crux of the matter? Perl has had CPAN since the days of RH 5, and once cpan2rpm was reliable to use in a mostly-automated fashion it made keeping that in sync for trivial things... trivial. It's only the more involved things that require much manual tweaking, but that's pretty much how it should be since that's why you have humans in the loop to begin with.
Go decided it didn't care, and more to the point the people pushing it decided they didn't care, and now we're stuck.
I can't speak for the Go or rust ecosystem, but I still have difficulty understanding why something like k8s or terraform couldn't be re-implemented in a more traditional language which doesn't force an up-ending of ecosystems that are tried-and-tested.
Posted Nov 6, 2020 5:48 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Nov 6, 2020 10:44 UTC (Fri)
by amacater (subscriber, #790)
[Link]
The comment about security auditing somewhere above in this thread: it's not great, but it's easier in a distribution than a randomly structured ecosystem like NPM / PIP.
All of that may be completely immaterial if you're Alphabet/Facebook/AWS and can afford to throw human resources at the relevant language or system for internal use: external use is merely good publicity but you don't have to guarantee users there anything. And yes, I'm coming at this from 25 years of experience with Linux so I may be completely out of touch/too old to appreciate how the real world works.
Posted Nov 2, 2020 0:32 UTC (Mon)
by ringerc (subscriber, #3071)
[Link] (5 responses)
... and in 2000 and 2010 and 2020, people have been making big money selling tools that mitigate the pain of Java memory management. External heaps, custom allocators, you name it.
Disk space (for software) isn't that different. We don't care about a few MB here or there. But when building and deploying 100 containers on a machine suddenly it starts to matter again.
Waste is still waste.
Personally it outrages me that I need 16GB of RAM to meet the voracious memory gobbling of a web browser, and still be able to code. I can do database development in less than a gig of RAM, but somehow a Modern Social Media App or two consumes a gigabyte per tab and a fair slice of the CPU if you let it too.
Efficiency is a cost. We do need to be willing to accept more bloat to save time. Nobody should be using short identifiers to save space in dynamic link tables. There's generally much less need to apply complex and fragile schemes to save a few bytes of memory here or there.
But short term efficiency gains can cause long term pain. Vendoring of libraries rapidly spirals out of control if you start modifying them in your trees. Sure you save time, but you enter a nightmare of slightly-forked upstreams to deal with and a spiral of unmanageable tech debt.
Also, small only stays small if the count is low. A PostgreSQL row header is 23 bytes. That's nothing right? Well, one database I support has about 1TB of row headers consuming extremely expensive high performance storage - plus more being eaten by alignment padding, VARLENA type headers, page headers, etc. These things add up.
Nobody's going to argue that malloc() can afford to throw around chunks of memory for developer convenience because RAM is cheap now.
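For scale, the 23-byte figure above works out roughly as follows (order-of-magnitude arithmetic only):

    1 TB of row headers ÷ 23 bytes per header ≈ 4.3 × 10^10 rows, i.e. on the order of 43 billion rows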
Posted Nov 4, 2020 14:26 UTC (Wed)
by khim (subscriber, #9252)
[Link] (1 responses)
Why would they argue if they can just use tcmalloc which does just that? It's used in web-browsers, too, BTW.
And the fact that web-browsers are using a multi-process model for security, which increases memory consumption about 2x-3x, is just a fact of life now, too…
Posted Nov 6, 2020 1:10 UTC (Fri)
by ringerc (subscriber, #3071)
[Link]
Posted Nov 5, 2020 8:41 UTC (Thu)
by ncm (guest, #165)
[Link] (2 responses)
If the browser wants to cache more stuff from recently used tabs, that should be something tunable. But gigabytes of stuff I can't even see squatting on memory I need for things I am actually doing is an abomination.
Posted Nov 5, 2020 18:03 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link]
"Dump DOM state, necessitating a page reload and possible loss of work when switching back to the tab, if the user switches to a different tab for more than X minutes" is a perfectly reasonable behaviour to want, but it is not remotely a sane default behaviour.
Posted Nov 6, 2020 2:30 UTC (Fri)
by ringerc (subscriber, #3071)
[Link]
Increasingly a browser is a (terrible and inefficient) application runtime that maintains persistent state including connections to remote hosts.
You can't easily serialize the Javascript runtime state and the DOM to disk if you want to maintain remote connections. You'd need at least a way for part of the application to remain resident to respond to server side chatter / keepalives etc, or changes to redefine client/server expectations for idle and background tabs.
What I'd really like is a way to set resource limits on webapps. Impose some pressure on webapp developers, so they stop offloading gigabytes of epic JavaScript horror onto all their users because there's no incentive for them not to.
Posted Oct 31, 2020 19:41 UTC (Sat)
by famzheng (subscriber, #121411)
[Link] (1 responses)
Posted Oct 31, 2020 20:02 UTC (Sat)
by amacater (subscriber, #790)
[Link]
Pip / NPM - the stability of concrete made with quicksand (and rock sugar rather than gravel) and about as much maintainability and longevity - and that's before any considerations of non-existent security, typosquatting and lack of reliable attribution.
Long time Debian developer - but speaking entirely from my own experience and prejudice and without any attribution to the wider Project whatsoever.
Posted Nov 1, 2020 6:30 UTC (Sun)
by jafd (subscriber, #129642)
[Link] (5 responses)
Posted Nov 1, 2020 17:45 UTC (Sun)
by nkiesel (subscriber, #11748)
[Link]
Posted Nov 2, 2020 10:44 UTC (Mon)
by jak90 (guest, #123821)
[Link] (3 responses)
https://www.vitavonni.de/blog/201503/2015031201-the-sad-s...
Posted Nov 2, 2020 15:12 UTC (Mon)
by pabs (subscriber, #43278)
[Link] (2 responses)
Posted Nov 4, 2020 14:29 UTC (Wed)
by khim (subscriber, #9252)
[Link] (1 responses)
Posted Nov 23, 2020 13:50 UTC (Mon)
by jak90 (guest, #123821)
[Link]
Posted Nov 2, 2020 1:13 UTC (Mon)
by grantma (subscriber, #5225)
[Link] (2 responses)
Why not just package Kubernetes against Debian stable as distributed, and make it available via sid and the volatile or backports repository, and not part of the stable/testing releases? The article makes it clear it's not suited for a standard release. That way you can always get the latest Debian-specific Kubernetes with bug and security fixes for your server environment.
Posted Nov 3, 2020 21:03 UTC (Tue)
by smcv (subscriber, #53363)
[Link] (1 responses)
Various people have proposed adding a new suite/repository/archive area/thing that would accept packages that are intended to be used *with* the stable release but are not suitable to be shipped *in* the stable release (a bit like the apt repositories published by upstream projects like Docker and Salt, but general rather than package-specific, and within Debian); but that isn't something that a single package's maintainer can do unilaterally.
Posted Nov 4, 2020 3:56 UTC (Wed)
by pabs (subscriber, #43278)
[Link]
Posted Nov 4, 2020 10:53 UTC (Wed)
by gdt (subscriber, #6284)
[Link]
Posted Nov 5, 2020 9:00 UTC (Thu)
by ncm (guest, #165)
[Link] (1 responses)
Suppose we just said that for any Go dependency, the package installer just does a "git clone" or "git pull" of upstream source. Then, to install any Go program the installer clones or pulls and compiles and links the program on the spot, taking the right revision from each dependency's local repository.
This is different from what we are used to for compiled programs, but a lot like what we already do for scripting languages. Go compilation is fast enough that there is not so much difference from a scripting language.
There are still a lot of dependency packages, but the amount of work to maintain them drops to near zero, as you only change them when their own dependency list changes.
This method would not work so well for Rust because the Rust compiler is so damn slow, but Rust people are not fighting with package management. Even for Rust, though, linking as part of installation could reduce problems.
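For what it's worth, something close to this already exists at the level of an individual user: in module mode the Go tool fetches the pinned sources of a program and all of its dependencies, builds them locally, and installs the result (the package and version are chosen purely as an example):

    $ GO111MODULE=on go get sigs.k8s.io/kind@v0.9.0
    # sources for kind and every dependency are downloaded at their recorded versions,
    # compiled on the spot, and the binary lands in $(go env GOPATH)/bin

Turning that into a distribution-quality mechanism is mostly a question of where the sources come from and who vouches for them.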
Posted Nov 23, 2020 17:48 UTC (Mon)
by emorrp1 (guest, #99512)
[Link]
So that the user doesn't have to know that application X is written in language Y before even being able to install it and know it'll work. Distros like Debian do have a lot of complexity to support language specific ecosystems, but that's all done on behalf of the end users to (hopefully) fit their expectations of how an OS behaves.
> to install any Go program the installer clones or pulls and compiles and links the program on the spot
Which is fine for developers, or for source based distros, but why should an end-user buy dev-spec hardware so they can install some software - one of the advantages of a binary package is the compilation can be offloaded to beefy servers.
Posted Nov 5, 2020 10:33 UTC (Thu)
by flussence (guest, #85566)
[Link] (10 responses)
In the long term I think the answer to this is going to require better acceptance that those other worlds exist, and ways to sandbox their packaging tools and extract build artifacts so they can be used “as intended” while preserving the OS's integrity.
Posted Nov 5, 2020 15:14 UTC (Thu)
by smurf (subscriber, #17840)
[Link] (9 responses)
These issues aren't going away just because you use Python or Perl or Go or JS instead of C. There have been a few big-ticket C libraries with dependency issues that the distros ran afoul of that took years to untangle; ffmpeg comes to mind.
K8s's Go package dependency hell is just the most extreme example of a program written in a language whose library infrastructure decidedly is not up to the task of providing that kind of stability. At the moment it's not packageable, period, no matter what the system which would like to package it is or is not geared for. You can find examples of that in other languages, true, but Go and JS celebrate this kind of chaos and seem to regard it as a feature.
Maybe that'll change. Maybe not. In the meantime, Debian's best solution is to package an installer and then keep out of its way.
Posted Nov 5, 2020 16:20 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
Go provides fully deterministic and replicable builds. If I check out a Go project then I'm guaranteed to get byte-for-byte identical build environment to that on the developer's machine. There are zero surprises.
It is IMPOSSIBLE to do that with classical Linux packaging methods. My environment will be subtly different to that of the package developer, unless we install everything at the same time.
So the proper question should be: "How Debian can achieve Go's stability?"
Posted Nov 5, 2020 16:46 UTC (Thu)
by pizza (subscriber, #46)
[Link] (5 responses)
So... if you take two different versions of a random Go library, you are saying that can never be API (and thus, ABI) differences between them?
> Go provides fully deterministic and replicable builds. If I check out a Go project then I'm guaranteed to get byte-for-byte identical build environment to that on the developer's machine. There are zero surprises.
And if you revision control a C-based project and all of its dependent libraries (as well as identical toolchains) you won't have any build-time surprises either.
(I mean, duh, if you change nothing, nothing will be changed)
Posted Nov 5, 2020 16:57 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
> And if you revision control a C-based project and all of its dependent libraries (as well as identical toolchains) you won't have any build-time surprises either.
> (I mean, duh, if you change nothing, nothing will be changed)
Posted Nov 5, 2020 18:04 UTC (Thu)
by pizza (subscriber, #46)
[Link] (3 responses)
Yep. But this has led to the current situation where the libraries themselves are anything but stable, so in practice updates rarely happen, bugs be damned.
> Sure. Except Go doesn't require me to commit all libraries into the same project, it uses a coherent module system for that. The project code simply stores the SHA hashes of dependent libraries.
... _and_ statically links everything into a standalone binary, so any library/dependency change necessitates a rebuild.
(And you know what? Apply the "rebuild all dependents" rule to "classic distributions" too, and your "subtle ABI differences" argument vanishes entirely)
> Sure. But there's no way to do that with classic Linux distros. You will basically always get a subtly different build environment, as packages get updated without changing their name.
If I install two systems off the same install media, and configure them identically, the results are going to be the same.
("but her updates!" you say? to which I point out that you're explicitly not updating anything in Go-land either; if it's good for Go, surely it's good for everything else too?)
Posted Nov 5, 2020 20:27 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
If anything, bugs are fixed much more readily because authors are not afraid that they might accidentally break some old software that depended on some arcane accidental behavior.
> ... _and_ statically links everything into a standalone binary, so any library/dependency change necessitates a rebuild.
> (And you know what? Apply the "rebuild all dependents" rule to "classic distributions" too, and your "subtle ABI differences" argument vanishes entirely)
Non-classic distros like Nix (and I think guix) do allow it.
> If I install two systems off the same install media, and configure them identically, the results are going to be the same.
Posted Nov 5, 2020 22:52 UTC (Thu)
by ballombe (subscriber, #9523)
[Link] (1 responses)
Posted Nov 5, 2020 23:23 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 5, 2020 18:44 UTC (Thu)
by smurf (subscriber, #17840)
[Link] (1 responses)
That is by definition impossible, and IMHO it's anything but an advantage.
And no, stability is not impossible. The package developer doesn't build the binary. The distro's build system does that, in a controlled environment.
> There are zero surprises.
There also are zero security bugfixes to any of the libraries you depend on. Thanks but no thanks.
Posted Nov 5, 2020 20:34 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> There also are zero security bugfixes to any of the libraries you depend on. Thanks but no thanks.
And having better tooling to do that would be great. Github has a proprietary thingie (Dependabot) for that, and an open-source solution to do the same would be awesome.
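The Go tool already has a rudimentary version of this built in; what is missing is the wiring into distribution or CI workflows:

    $ go list -m -u all        # list every module in the build plus any newer version available
    $ go get -u=patch ./...    # pull in patch-level updates only, then rebuild and re-test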
Posted Dec 6, 2020 13:00 UTC (Sun)
by jnxx (guest, #142729)
[Link]
Strictly in terms of the license, it is open source, as it uses the Apache License 2.0. But to build it from source, one not only has to use the Bazel build system; the Bazel build scripts also download dependencies and code from over 200 internet locations, and that code, when run during the build, downloads even more code from even more locations. It seems (to me) humanly impossible for a package maintainer to tell what all this code does, let alone to take any responsibility for it. So it is "Open Source", but not in a way that provides real control to the user and owner of the computer, which is what free software is all about.
And this is merely an extreme case of the problem that more and more tools and libraries suggest installing via something like curl | bash. Even Rust does this (which is inconceivable to me, because it tries to appeal to infrastructure people, who usually do not fancy running untrusted code).
I think that Guix might be a partial solution to these problems. It seems to work very well with Rust. Perhaps it could help not to see it as simply some competition to Debian, but as a kind of complement, because I believe that many of the principal goals of both projects are very similar if not identical. Guix can build reproducible, fast-moving software from source with ease, and it can do that on top of a really solid and stable system like Debian.
Packaging Kubernetes for Debian
* convenience -- a single package manager for all the software on your system, automated updates, etc.
* stability -- Debian developers work hard to try to make Debian a cohesive environment
Packaging Kubernetes for Debian
Wol
Packaging Kubernetes for Debian
.debs… nothing has changed there in the last quarter-century. "Upstream developers" want to build one package and distribute one package.
Left to their own devices they would provide a .sh script. Or some kind of installer. Or would solve the problem in some other way. Maybe also .debs or .rpms — but they would use the same binaries as the "main" tar or sh.
Packaging Kubernetes for Debian
“Shops” which I mentioned work just fine for that.
For some packages, the contribution by Debian is negative. E.g., for many years now, I have needed to build xpdf from source on every Debian/Ubuntu system because the Debian-crippled version only prints on letter paper (which we don't have around here); all configuration options for A4 paper (both the upstream xpdf ones as well as the Debian ones) are ignored.
Packaging Kubernetes for Debian
Wol
xpdf does not allow copying text from PDFs that have been marked accordingly, and that was documented (I can't find that documentation at the moment, though). So it's a misfeature, not a bug; however, the documentation explained that it is easy to change xpdf to ignore the flag, so if that's what Debian's maintainers want to do, I don't think they need to ignore all sources of the desired paper size to do it.
Packaging Kubernetes for Debian
Maybe you should reopen it, if setting /etc/papersize to "a4" does not work on your system.
I just tried to print an A4 PDF to a PS file using xpdf 3.04-14 on Debian testing, and the result is A4.
Works for me, but that's 19 years later ;-)
Packaging Kubernetes for Debian
split between firefighting and planning and managing huge lift-and-shift operations, not day-to-day minor changes. Or it should be.
Packaging Kubernetes for Debian
* backports for new features in leaf packages without waiting another two years
* fasttrack for well supported upstreams that haven't yet got stable LTSs
* extrepo for third party repositories with proven track record
* flatpak for ignoring packaging integration in favour of following the upstream releases
Packaging Kubernetes for Debian
My knowledge about Go is close to zero, my knowledge about Debian builders is zero; I have worked with OBS (https://openbuildservice.org/) quite a bit, but years ago.
* Go uses static linking instead of dynamic linking
Packaging one library is some overhead; doing that overhead 200 times in your free time as a single maintainer is clearly not feasible. (That applies to any distro packaging, whether it's Debian, rpm, Arch, or whatever. I have the feeling Debian could be a bit more work, but probably an experienced Debian packager will disagree.)
Obviously that's the way software works in 2020. Whether that's progress or a destructive development is a good question. At least in security-critical software (and what is not security-critical if we have real money and the internet involved?) I would claim it's just not sustainable. But let's leave that for another debate.
With dynamic linking you have runtime dependencies. With static linking you have buildtime dependencies. But every runtime dependency is also a buildtime dependency, so for building the software there is not even a difference. (I won't go into the space argument; Java already told us in 1995 that memory is not a problem with modern computers.)
The interesting part is what happens when you change dependencies. As said, I know nothing about Debian builders, but I have sometimes heard the term "mass rebuild". So my guess would be that Debian generally assumes that changing something further down the dependency chain does not require rebuilding the whole chain, because you have ABI compatibility. Only if you know that your ABI breaks do you do a (manual?) mass rebuild. Is that how it works? Obviously with static libraries every change is a breaking one; you need to rebuild everything that depends on the changed library.
OBS has a nice feature here: when it has rebuilt a direct dependency it compares the build results with the previous ones it has built for the same package. If they are equivalent, it has been proven that the change was ABI-compatible, the new results are thrown away, and the rebuild chain is cut at that step. So using OBS you could really "easily" manage 200 Go packages?
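To make the chain-cutting idea concrete, here is a minimal Go sketch of the decision; this is my own illustration rather than OBS code, and OBS's actual comparison (the build-compare tooling, as far as I understand) is smarter than raw byte equality, ignoring irrelevant differences such as embedded timestamps:

    package main

    import (
        "bytes"
        "fmt"
    )

    // artifactsEquivalent reports whether a rebuilt dependency produced the same
    // relevant output as the previous build; if so, nothing that depends on it
    // needs rebuilding and the rebuild chain can be cut at this step.
    func artifactsEquivalent(previous, current []byte) bool {
        return bytes.Equal(previous, current)
    }

    func main() {
        previous := []byte("libfoo.so.1 built from release 1.2-3")
        current := []byte("libfoo.so.1 built from release 1.2-3") // e.g. a packaging-only change
        if artifactsEquivalent(previous, current) {
            fmt.Println("equivalent results: cut the rebuild chain here")
        } else {
            fmt.Println("results differ: rebuild everything that depends on this package")
        }
    }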
With dynamic libraries the rebuild chain is typically cut after the first step; with static libraries it will never be cut anywhere. The same holds for downloads from the repo and for installation: the cost will grow everywhere.
Is this really a Debian problem? Isn't Go and static linking just an economically unsustainable development? (The environmental impact might be smaller, because the buildtime impact is probably relatively small compared to the runtime impact globally.) Maybe not for Google, but for every other, more resource-constrained organization it is. Do we need a software developers' "Fridays for Future" to show that this is nonsense? Somehow I ended up at the sustainability issue again, although I said above I would leave that for another debate. It appears to me that this theme needs more attention in software development.
Need the slim software movement? Just don't use proprietary^W bloat software???
Packaging Kubernetes for Debian
Wol
Packaging Kubernetes for Debian
Wol
Packaging Kubernetes for Debian
Why should somebody bend over backwards to make stuff easier for Linux distros?
Packaging Kubernetes for Debian
Looking from a distance at OpenStack / k8s: there's still a lot of magic, blessed GitHub repositories, or whatever around this if you don't go with a single-vendor stack (Red Hat/Canonical are effectively vendoring their commercial offerings and depend on you having paid for support).
Packaging Kubernetes for Debian
> Nobody's going to argue that malloc() can afford to throw around chunks of memory for developer convenience because RAM is cheap now.
Packaging Kubernetes for Debian
Bootstrapping C is one of the core issues of the Bootstrappable project, but so far they have set up a pretty straightforward path of hex dumper -> machine monitor -> assembler -> Scheme interpreter -> naive C compiler -> simple C compiler (like TinyCC) -> GCC 4.7 -> modern compilers written in C++.
With Java, they started out with a Java 5 compiler written in C++ and gradually bootstrapped a JVM and classpath on top that understood enough Java 6 features to build OpenJDK 6 from it.
https://bootstrappable.org/projects/java.html
As far as I have read, there's still a multi-stage approach to bootstrap the Mono compiler from C, and Microsoft compiled their "native" C# compilers using their own C++ compiler until switching to Roslyn.
https://news.ycombinator.com/item?id=7528264
Packaging Kubernetes for Debian
I violently disagree with that. I have NEVER had an issue with Go's libraries breaking the ABI, while I've had more than one issue fighting with subtly broken C-based ABIs.
Packaging Kubernetes for Debian
No. I'm saying that a project won't accidentally be broken when one library gets updated. Breakages happen only when a project explicitly updates its dependencies.
Sure. Except Go doesn't require me to commit all libraries into the same project, it uses a coherent module system for that. The project code simply stores the SHA hashes of dependent libraries.
Sure. But there's no way to do that with classic Linux distros. You will basically always get a subtly different build environment, as packages get updated without changing their name.
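For readers who have not used Go modules, a minimal illustration of the scheme described above; the module path, dependency, and version are made up for the example, not taken from Kubernetes. A project's go.mod pins exact dependency versions:

    // go.mod (illustrative)
    module example.com/myservice
    go 1.15
    require github.com/sirupsen/logrus v1.7.0

and the companion go.sum records a cryptographic hash for every pinned module, so a later build fails if the downloaded source no longer matches what was originally recorded (hash values elided here):

    github.com/sirupsen/logrus v1.7.0 h1:...
    github.com/sirupsen/logrus v1.7.0/go.mod h1:...

The effect is the one claimed above: the dependency set only changes when someone edits these files, not when a library's author publishes a new release.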
Packaging Kubernetes for Debian
I've been developing mainly in Go for the last 3 years, and honestly this hasn't happened once. In my experience bugs are fixed rapidly and have never resulted in API breaks.
Not quite. You can have dynamic libraries, it's just that you're allowed to install multiple versions of them.
But classic distros don't allow you to do that easily. I know, I tried. You have to do stuff like spinning up your own mirror with snapshots of package lists.
Now do that with Debian or Fedora net install. You will get a different image if the code is updated between installations. And there are no real ways to tell: "Debian Unstable as it looked at 2020-03-31 21:12:41 UTC".
Packaging Kubernetes for Debian
Sure there is, see https://snapshot.debian.org
Packaging Kubernetes for Debian
One of the major points of assembling a distribution is to have exactly one version of any library on the system, so that if there is an issue with any one of these libraries you update this exact library and majickally fix the issue for every program using them. Instead of, say, rebuilding and re-deploying ten Go programs.
Packaging Kubernetes for Debian
Distros do almost exactly zero QA, though. So if anything is broken, it'll be discovered by users. This results in very slow-moving "stable" distros.
Well, Go is fairly safe in itself so security fixes are fairly rare. When they do happen, you'll have to update dependent applications, it's true.
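Concretely, in Go-module terms, picking up such a fix in a dependent application means bumping the pin in that application's go.mod and rebuilding it; with a hypothetical patched release of an illustrative dependency, the edit is just:

    require github.com/sirupsen/logrus v1.7.1   // was v1.7.0; hypothetical patched release

followed by a fresh build, since the old code stays baked into every binary produced before the bump. The same edit-and-rebuild step has to be repeated in every application that pins or vendors the vulnerable version.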
Packaging Kubernetes for Debian