* Posts by containerizer

38 publicly visible posts • joined 23 Jun 2023

Fedora Asahi Remix 41 for Apple Macs is out

containerizer

Re: Mmmmmm

the cores may or may not be the same, but it probably does not matter given that they are implemented to comply with an ARM spec.

I believe the other hardware inside the SoC does change around. All the audio, video, network/wifi drivers, graphics etc are on-chip and I imagine they're continuously updated.

Public developer spats put bcachefs at risk in Linux

containerizer

Re: Are we reaching a monolithic limit?

> But they are trying to do something no other Unix project has ever done: the aim is that you can stick a HAMMER2 volume on a shared connection and multiple independent OS instances can all mount it at once.

It sounds like you're describing a cluster filesystem. This has been done many times in the UNIX world - Veritas, GFS[2], GlusterFS etc.

Fedora 41: A vast assortment, but there's something for everyone

containerizer

The IBM POWER workstation/server line (formerly known as RS/6000) would be my guess ..

The US government wants developers to stop using C and C++

containerizer

Re: It's not the language, it's just the way it's "talking"

> The fact that rust has this "unsafe" directive (or what ever it is called) means that the language designers absolutely know that the language cannot do what people want to do with in in a memory-safe manner.

I can think of very few cases where this kind of operation would be necessary. The one obvious one is where you have to manipulate memory that isn't really memory, such as when you're accessing memory-mapped hardware within a kernel, or certain other low-level operations where something else is handling memory for you.

This does not mean the language is bad. At the very least, it shows that unusual memory usage which cannot be tracked by the compiler is explicitly marked and therefore auditable, and the compiler can emit diagnostic warnings when such blocks are used (or forbid their use entirely). In C there is no standard way to do this.

You cannot build a programming language which is foolproof in all scenarios. You can, however, build one which minimises bugs caused by human error. I think that's arguably a win over a language which pointedly does not.
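A minimal sketch of the auditability point (illustrative values, not from any particular codebase): the only place a raw pointer can be dereferenced is inside an explicitly marked `unsafe` block, which can be grepped for in review or forbidden outright with a lint.

```rust
// Illustrative only: raw-pointer access must be wrapped in an `unsafe`
// block, making it easy to audit, or to forbid entirely with the
// crate-level attribute #![deny(unsafe_code)].

fn main() {
    let words: [u32; 4] = [0xDEAD_BEEF, 1, 2, 3];
    let p = words.as_ptr(); // taking a raw pointer is safe...

    // ...but dereferencing it is not: the compiler forces this marker.
    // (read_volatile is the same primitive you'd use for memory-mapped
    // hardware registers in a kernel.)
    let first = unsafe { std::ptr::read_volatile(p) };
    assert_eq!(first, 0xDEAD_BEEF);

    // Outside an `unsafe` block, `*p` simply will not compile:
    // error[E0133]: dereference of raw pointer is unsafe
    println!("{first:#x}");
}
```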

containerizer

Re: It's not the language, it's just the way it's "talking"

> The precise issue is that C the language is not "memory unsafe" because it doesn't do memory management, the libraries and apps do that.

I am sorry, but you are very wrong.

Managing the stack is a form of memory management and the C language does this. If you declare a variable or an array etc on the stack, or take a pointer to it or dereference it, the compiler generates code for this purpose. Off-by-one errors and other stack-related bugs are a very common cause of instability or security bugs. That's a fundamental feature of C.

The use of malloc() and free() is discussed in the original K&R book, and both are defined in the ISO C99 spec (it is safe to assume they are in ANSI C89 and in subsequent ISO specs too). They may not be operations that are directly converted into machine code by the compiler, but that doesn't mean you can get away with saying they are not part of the language.

If you want to talk about losing credibility, scoring pedantic points by suggesting that the language and the support library which forms part of its specification are not intrinsically linked seems like a good way to do it.

It's about time Intel, AMD dropped x86 games and turned to the real threat

containerizer

It would take several pages to explain in detail all the different ways where you are utterly wrong. But in summary :

- I really do not know where to start with the notion that only CISC CPUs can perform "complex mathematical and algebraic operations" with "extreme precision and efficiency". SPARC, MIPS and PowerPC have a lengthy track record here, being used for CGI in films, by the oil & gas industry, the financial services sector etc etc.

- A RISC CPU is not inherently a "low power part for small form factor devices". The earliest RISC CPUs were used to build servers, workstations and mainframes. IBM dominated enterprise computing with the RS/6000 workstation, and its S/390 CPUs were a CISC ISA running on a version of its POWER ISA RISC platform.

- CISC does not mean "able to do complicated things". CISC means "I have a complicated instruction set whose instructions may take several clock cycles to execute and which you may never use".

- I have no idea why you think the inability to emulate another instruction set at full speed rules out an architecture as being viable.

- the Motorola 68K and Itanium are not RISC architectures. 68K is "dead" because it can't run Windows, and Itanium was simply a poor design.

I remember life 25-30 years ago. Nobody in their right mind would have deployed x86 in the enterprise server space, it simply was not done. Every RISC CPU wiped the floor with x86 at the time. They lost because x86 was cheaper and could run Windows, and Intel were eventually able to hotrod their rubbish architecture to make it run fast.

These days, the CISC vs RISC thing does not matter. It was important in the 1980s/90s when chip real estate was at a premium, and RISC could use the space vacated by complex instructions to make simple instructions run much faster. Nowadays, everything including x86 is implemented on a RISC core with the higher level CISC instructions microcoded.

Upgrading Linux with Rust looks like a new challenge. It's one of our oldest

containerizer

Re: Why a new language?

I don't recognise the idea that memory safety only recently became important. People have been banging on about C's limitations in this regard since the language first became available.

It's not like the industry has been frozen in aspic. Outside of specialist fields (kernels/device drivers, low-latency software) C/C++ have been replaced with Java, C-Sharp and Python, and Go is making some inroads in the systems programming side.

The remaining software which continues to be in C is that which cannot be easily migrated to any of these languages. For most cases, the cost of continuing to use C is lower than the cost of replacing codebases with Rust. The workarounds for C - static analysis, stack smashing protection tools etc - are deemed "good enough" most of the time.

I can well understand why some kernel developers might see that all of this is a solution in search of a problem. And the Rust crew aren't the first to make this case - there have been attempts to push C++ on the kernel in past years too, and the same kind of reaction when it was blocked - that the devs are a bunch of luddites trying to hold back advancement. It's not that simple.

If the world really is right, and the kernel devs are wrong, then this problem will solve itself in a different way : people will write a from-scratch, Rust-only kernel which is compatible with Linux at the system call layer and can therefore be dropped in to existing distributions. If that idea seems too far-fetched, then it's telling us something about the cost/benefit of using Rust.

containerizer

Re: Why a new language?

The whole point of using C is the compactness and efficiency of the code. If you start bolting stuff like this on, it defeats the purpose. And unit testing can't give you what compile-time memory tracking gives you.
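To illustrate the compile-time point (a hypothetical sketch, not taken from the article): the borrow checker rejects a dangling reference before the program ever runs, which no unit test of the compiled binary could catch.

```rust
// Sketch of compile-time memory tracking: the borrow checker statically
// rejects references that outlive the data they point at.

fn main() {
    // This compiles: the reference never outlives the vector.
    let v = vec![10, 20, 30];
    let first = &v[0];
    assert_eq!(*first, 10);

    // The dangling version is a *compile-time* error, not a runtime crash:
    //
    //     let dangling = {
    //         let temp = vec![1, 2, 3];
    //         &temp[0]   // error[E0597]: `temp` does not live long enough
    //     };
    //
    // A unit test can only exercise code that already compiled; this class
    // of bug never reaches the test suite at all.
    println!("first = {first}");
}
```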

containerizer

Re: Why a new language?

> In my experience, Rust code is rarely shorter or easier to write than its C counterpart. It takes a lot of discipline to write anything sufficiently worthwhile in it.

This statement is probably not true if the definition of "sufficiently worthwhile" includes a memory safety guarantee.

containerizer

Re: Why a new language?

> Now had the issue of memory safety, etc, in C (or the lack thereof) been addressed by creating a sub-set of C and/or a few features to mitigate some issues like use-after-free then it would have been fairly trivial to do this

"Come on boffins, get off your backsides and make C safe!"

Google says replacing C/C++ in firmware with Rust is easy

containerizer

Re: Wanna give some examples?

> There is no downside to it other than having to learn Rust, some toolchain issues in some embedded environments, and of course shaking your cane at everyone on your lawn.

But two of the three mentioned are pretty big issues.

C/C++ compilers have had decades of refinement behind them, work everywhere, and loads of people know how to program in them competently. Even if you managed to persuade people of the technical merits of Rust, the inertia is always going to be there.

With respect, saying "some toolchain issues" is a bit flippant. It looks like only x86 and ARM-64 are well supported ("tier 1"). Many embedded platforms won't have that hardware - I'd expect ARM32 to remain popular for a while yet. Other RISC architectures seem to be withering on the vine - MIPS seems to be dead, although I'd expect there's still a lot of PowerPC in telecoms-focussed SoCs. And SPARC/s390x remain small, but important in key enterprise markets.

A lot of that problem would go away if they'd switch the focus to adding a GCC frontend, where almost every other major language and target architecture is supported.

Raspberry Pi 4 bugs throw wrench in the works for Fedora 41

containerizer

Re: WTF ?

> And the reason for that is - HomeAssistant *hates* any other OS

Even Home Assistant OS ?

Red Hat middleware takes a back seat in strategic shuffle

containerizer

Most of this is .. actually quite sensible

Don't want to be the "nothing to see here" guy, and layoffs are never good. But some of this actually makes sense.

RH have a long history of maintaining their own builds of things, for example the JDK or Spring, where they add little in the way of extra capability. This made sense back in the days of yore, when community projects tended to be less concerned with LTS builds. RH added value by maintaining such builds.

OSS projects seem, in my perception, to have become much more mature of late, often maintaining their own LTS builds (corporate sponsorship plays its role here). Inevitably, rolling one's own build therefore accomplishes less.

I think it was a couple of years ago that RH announced that the Temurin JDK build would be fully supported on Openshift, for example. There's a win-win thing here; RH backs, and presumably helps fund, a community-led build, so they don't need to have a separate team themselves. The community gets the sponsorship.

Debian preps ground to drop 32-bit x86 as separate edition

containerizer

Re: Good thing too

I'm sure there are embedded projects crazy enough to (a) use an x86 and (b) use a Debian distro for their platform, but I'm going to guess they're probably few and far between!

containerizer

Re: It's our gift to you this Xmas

it's not the hard leap that's the problem, it's confusing people with newly invented terminology.

Anyway, I'm off to get a life now. Happy Christmas.

containerizer

"x86-32" huh ?

I can find no reference, anywhere, for the term x86-32. Did you guys just make it up ?

The 32-bit variant is pretty much universally known as simply x86, or sometimes ia32.

The 64-bit version has been variously branded amd64, intel 64, x86-64 or x64.

There's also a thing called x32, which was an attempt to have a hybrid. It runs on amd64 but limits itself to 32-bit pointers.

Introducing yet more nomenclature is not at all helpful.

Will anybody save Linux on Itanium? Absolutely not

containerizer

Re: Branch seems resasonable

You don't even need to do any branching. Just use kernel 6.6, which is going to be supported in LTS form for another 3+ years. Then you can branch it.

But of course branching should not be a problem, as these folks will presumably already have branched their own compilers and distributions all of which dropped support for this arch long before the kernel did ..

OpenELA flips Red Hat the bird with public release of Enterprise Linux source

containerizer

Re: Mean while Alma

> There isn't (now?) an arm version of RHEL9.

I doubt there ever will be. I think I read that some distributions have announced deprecation of 32-bit x86, never mind ARM.

containerizer

Re: Does not compute!

> Oracle and SUSE want customers who are willing to pay for support. The $$$$$$$ focus is behind this move.

The problem that Red Hat had, and which this new group will have, is the cost of building and maintaining the thing used by all the people who are not willing to pay for support.

containerizer

I don't think they will, any more than the Americans were unhappy when the Soviet Union invaded Afghanistan. If this group are serious, it means they're going to burn a lot of cash building a stable enterprise OS that they will then give away to people who are completely opposed to having to pay for it. They can't do that forever any more than Red Hat could.

If I were Red Hat, I'd be quite pleased. Go ahead, knock yourselves out burning cash and giving stuff to people who won't pay.

Oracle pours fuel all over Red Hat source code drama

containerizer

When you say "the community" who do you mean ? Taking the Linux kernel, from what I can tell it's pretty much other large corporations - AMD, Intel, Google, alongside IBM/RH. Strip out contributions made by those who are on their employer's clock and what have you left ? I'm not saying this to diminish the core contributions made by volunteers and enthusiasts, but proportionately, most of the work is done by corporate sponsorship.

I'll bet that "the community" who are really concerned with running a CentOS-like OS for free are almost exclusively businesses who are annoyed at the idea they might have to pay for something that is of value to them (noting that academia is being caught in the crossfire). I'm sure there are exceptions to this, but why would the sort of enthusiasts who contribute to OSS routinely want to run a boring, enterprise focussed OS that is basically out of date on the day it is released ? I appreciate that for many people there is a principle at stake here, but how many enthusiasts and volunteers are really affected by this ?

Corporations around the world, many of them not directly involved in IT services, are making money off the back of open source contributors all the time. They contribute nothing back to the community - they're not asked to do so - but sell their products and services using open source frameworks. Across the world, businesses use tools such as the Linux OS, databases, editors, compilers and drivers to generate what must be $trillions in revenue without anyone ever complaining that they're making money on the backs of volunteers. Then Red Hat come along, contribute a ton of stuff upstream and ask for payment for their stabilized, boring downstream distribution and suddenly they're worse than Genghis Khan.

Go figure, as they say.

containerizer

Re: Opensolaris anyone? @containerizer

wasn't the "cumbersome LPAR system" basically a port of the long-established virtualization tech from their mainframe line ? I can see why it would have made sense if their enterprise customers loved it (which they did/do).

containerizer

Re: Opensolaris anyone?

I disagree. The market killed SPARC, along with MIPS and POWER. I think that leaves IBM as the only vendor selling enterprise IT platforms based off a proprietary architecture (which I understand is basically a heavily modified POWER arch under the hood?). We'll see how long that lasts ..

containerizer

Re: OpenSolaris had a better X11

Hmm. "it worked well with my graphics card" is great and all, but I rather suspect that Linux at that time worked with a far wider range of hardware!

Much as I'd be happy to assign blame to Oracle, it was all over before they came along. The Solaris engineering team stiffly resisted licensing the code under the GPL, forcing them to come up with this CDDL daftness. I also don't think it's Oracle's fault that Sun screwed up the x86 side of things. As for X12, I am sure it would have ended up looking something like Wayland; if the issue is community leadership I am not sure Sun's involvement would have improved matters much ..

containerizer

Re: Opensolaris anyone?

Can't dispute the truth of any of that, it's a top notch Enterprise OS. In my first job we used Solaris on SPARC - it was utterly unbreakable, but expensive. A big part of that, alongside the very sound design, was tight integration between the hardware and the software. But when we got to the point where Linux distributions were being certified on x86 servers, it was all over.

Agree completely with your second paragraph. They could have saved both Solaris and SPARC by supporting x86 as a gateway drug.

containerizer

No, Red Hat taking away customer licenses and support isn't a restriction. They can't stop you redistributing the source that you possess.

containerizer

Re: Opensolaris anyone?

Gave up after a couple of minutes of over-excited yelling making my head hurt. Aware of cool stuff like ZFS and dtrace, but what else is there ?

containerizer

Re: Opensolaris anyone?

Because once Linux became mainstream, all of the commercial UNIXes became redundant. Nothing to do with MBAs; if anything it was the power of open source.

Bosch goes all-in on hydrogen with €2.5B investment by 2026

containerizer

Re: I was expecting

Er, the government have been going pretty hard on backing hydrogen lately.

I don't understand why anyone would think that a hydrogen boiler is better than a heat pump. If the goal is to end dependency on gas and produce clean energy, the only way to produce hydrogen right now is via electrolysis of water. If you've already got electricity available, why not use it directly instead of losing energy by converting it to a dangerous, invisible, explosive gas ?
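A back-of-the-envelope sketch of the losses (the efficiency figures below are assumed round numbers for illustration, not measured data): sending electricity through electrolysis and a boiler throws most of it away, while a heat pump multiplies it.

```rust
// Rough illustration of the round-trip energy comparison. All efficiency
// figures are assumed round numbers, not measured values.

fn main() {
    let electricity_kwh = 1.0; // one unit of clean electricity

    // Route 1: electrolyse water, distribute the H2, burn it in a boiler.
    let electrolysis_eff = 0.70; // assumed
    let boiler_eff = 0.90;       // assumed
    let heat_via_h2 = electricity_kwh * electrolysis_eff * boiler_eff;

    // Route 2: drive a heat pump directly with the same electricity.
    let heat_pump_cop = 3.0; // assumed coefficient of performance
    let heat_via_pump = electricity_kwh * heat_pump_cop;

    // ~0.63 kWh of heat via hydrogen vs ~3.0 kWh via the heat pump.
    assert!(heat_via_pump > 4.0 * heat_via_h2);
    println!("hydrogen boiler: {heat_via_h2:.2} kWh, heat pump: {heat_via_pump:.2} kWh");
}
```

With these assumptions the heat pump delivers over four times the heat per kWh of electricity; the exact ratio depends on the real efficiencies, but the direction of the comparison is robust.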

Rocky Linux details the loopholes that will help its RHEL rebuild live on

containerizer

Re: making them Red Hat customers, at least briefly

That's unambiguously illegal under the GPL. Anyone who gets the binary must be allowed to get the source.

Fedora Project mulls 'privacy preserving' usage telemetry

containerizer

Re: "Fedora Workstation the premier developer platform for cloud software development."

Declaration: I'm probably a critical RHEL fanboy - I think they do a lot more good than harm.

What's the prob with Fedora ? Been using it in anger a couple of years now. It's come a hell of a long way since I first tried it - back in the days of Fedora Core 8, or 9 (can't remember, must've been about 12/13 years ago). Just a few months back, I installed F37 on my Thinkpad X1. Runs buttery smooth, not a single hitch beyond a little fiddling needed with bluetooth. All of the usual handy dev tools are pre-installed, or are a dnf install away. It's an excellent platform for software development, noticeably slicker than working with Windows on the same hardware. I was particularly impressed when I did an online upgrade to F38. Not a single beat skipped.

I've messed with Ubuntu - not very recently but a few years back. I think it's intended to make Linux accessible to novices and is fairly successful in that. But it doesn't feel like it is intended for power users; it tries to hide too much from you. It's possible that opinion is out of date.

As to the decision to stop maintaining LibreOffice - can't say I noticed. Last time I tried it, it was rubbish. I use Google docs for all my personal stuff - limited, but typically sufficient. For work, MS Office is a staple. Maybe there's a big, active community of LibreOffice users out there I've not heard of ..

Red Hat's open source rot took root when IBM walked in

containerizer

Re: not paying Red Hat for RHEL, but getting the majority of the value of RHEL for free.

"This is where the Red Hat defenders are getting it wrong IMO. The majority of the value of what RH provides exists in terms of professional support. You don't get that using a clone. "

It is clear that the Red Hat distribution has value by itself, otherwise people wouldn't bother cloning it, and you wouldn't have these howls of outrage when they try to stifle the clone builders.

containerizer

This just isn't true. Red Hat is managed as a separate entity. It is controlled by IBM, sure, but it hasn't been absorbed the way other acquisitions (eg Rational) were.

You won't find an IBM logo anywhere on Red Hat's website except where IBM branded products are being integrated or distributed.

containerizer

The IBM bashing that's going on misses out on a few important details from the history.

20 years ago, we all cheered when IBM deployed its formidable legal department to stop a severely misguided but dangerous effort by SCO (with Microsoft skulking in the background) to essentially racketeer users of the Linux kernel, threatening to sue anyone using it without a license. IBM said "not on our watch" and prevailed. IBM had correctly calculated, before Sun, Microsoft and others did, that the Linux kernel would become the standard compute kernel in the enterprise and that it would be better to embrace it rather than try to compete with it. Along with others, they poured resources into it, among other things porting it to their mainframe line.

I don't say this to suggest that any supposed IBM transgressions should be set aside; instead I think IBM's historic support for open source is something that needs to be considered when evaluating their motivations.

The difficult reality as I see it here is that the relationship between commercial interests and volunteer contributors within open source is a symbiotic one. It doesn't always work as it should (hello OpenSSL) but I think it's a fact that Linux and the suite of applications around it would have nothing like the stability and quality they have without heavy commercial input. At the same time, the commercial interests which benefit from open source would be worse off without the community. I think everyone involved should try to find a pragmatic compromise.

Red Hat's focus on trying to make rebuilding their distribution more difficult did not, as the article notes, start with the IBM acquisition. They bought out CentOS, and then changed the way they released their patches. CentOS is not really the target here - competing vendors are. Consider Oracle : a large, wealthy and highly successful and profitable corporation, taking a distribution built - entirely legally - on someone else's investment and selling it through their own channels, while refusing to support, maintain and open source their own enterprise-class operating system.

I think there is a case to be made here that Red Hat have perhaps mishandled the communication of this. But at the end of the day, the people who are complaining are people who want the stability of a tested and certified Linux distribution without contributing to the considerable costs of that testing and certifying. If you're running a mission critical workload without some sort of support, you're an idiot. If you're smart enough not to need support, then you're a Debian guy and you don't need Red Hat.

Red Hat strikes a crushing blow against RHEL downstreams

containerizer

Re: RHEL is history who cares?

"show me you don't know anyone who uses Linux in a serious production environment without telling me you don't know anyone who uses Linux in a serious production environment"

containerizer

What is there to sue for ? The kernel source is freely available from the community at kernel.org, always has been, always will be.

The GPL (v2) requires that the person who provides a binary based on a "derived work" of GPL-licensed software must also provide the corresponding source code.