* Posts by bazza

3712 publicly visible posts • joined 23 Apr 2008

Christmas 1984: The last hurrah for 8-bit home computers

bazza Silver badge

Re: "if I couldn't write a mouse driver, I didn't deserve a mouse."

"If I couldn't write a mouse driver, I didn't deserve a mouse".

I took the opposite approach. Change the electronics, instead of writing a device driver.

I came across a surplus Royal Navy track ball - a giant yellow plastic ball with considerable inertia, but a really nice and slick action. Mechanically, it was superb, which is why I wanted to use it. BTW the size was due to the need for it to be usable by a crew member wearing bulky anti-flash protective gear.

However, to get it working with a PC I decided to tinker with the electronics inside to make it compatible with something that already existed (an early MS serial mouse if memory serves), rather than write software to make it work as-is. Worked a treat!

The Automattic vs WP Engine WordPress wars are getting really annoying

bazza Silver badge

Re: re: clearly OSS has a free loading problem

Thing is, a lot of folks' interpretations of various OSS licenses have little in common with what the licenses actually say.

For example, those banks and their use of OSS software, with no give back. The only valid expression of the software authors' intentions is the license. If that license says "you can use that software for free", there is no other possible reading of it. And whilst many others may assume that the authors are looking for some sort of quid pro quo, some sort of give back, that's not actually what the license requests. It's not for third parties to say what the software authors meant to say instead of what they actually did say in the license they stuck on the front of their code.

One cannot be both obligated and free of obligation at the same time. And, the established reality of the Western world's capitalist democracies is that if a company is not obligated to do something, the shareholders are allowed to get properly grumpy if the company then burns shareholder money on something it doesn't have to do. Arguably, releasing software for free into such an environment in the hope that major corporations will shower your project foundation with funds or code donations is somewhat naive. And where it gets very complex is that the companies "free loading" may well be part owned by the pension scheme the software author is a member of...

There are - amazingly - companies that'd rather pay for software (and get support) than use free software. The problem these days is that, in quite a few fields, there is no commercial option; it's OSS or nothing.

There are other business cultures. In Japan companies exist as much to be socially useful as to be profitable. This is why Japanese company execs are on TV bowing deeply in apology when the company screws up; it's a personal and social matter for them (and the share price crash is a mere secondary consideration).

Fear of Foxconn reportedly driving possible Nissan, Honda and Mitsubishi merger

bazza Silver badge

Re: Not just Foxconn

#3: even if you have a spot to park a car in the first place, it’s far from guaranteed to be on your own property with the option of a hook-up to your own meter.

Plus public transport is so much more dominant there. Replacing all the ICE cars with EVs won’t really change all that much.

Microsoft won't let customers opt out of passkey push

bazza Silver badge

Re: That's not a problem with passkeys

The ability to safely transfer security assets such as PassKeys from one device to another seems to me to be an essential component of any security management tool. What's irritated me about things like the popular OTP apps on smartphones is that - basically - you cannot do so. You have to rely on the OS and back end cloud backing up your device and restoring to another. That's not fine, because then you're locked in.

So it seems pointless building PassKeys into an operating system as core functionality, because that's simply going to make life for users extremely difficult at some point in their lives. Unless there is also a standardised means of transferring PassKeys around devices that all can agree to and is also safe, the software vendors should not be allowed to build them in to their OSes. It becomes another form of device lock in.

KeePass and its derivatives are the best for storing security assets, simply because one can then move one's KeePass file from device to PC to Mac, etc.

Linux 6.12 is the new long term supported kernel

bazza Silver badge

The inclusion of PREEMPT_RT is excellent news. I salute all those who have worked to make this happen!

You can do some really cool things with PREEMPT_RT, if you code to make use of it and need something more real time than simply playing back audio or video. It does mean one is likely coding in C or C++, which is becoming less popular. I’ve had excellent results with it, and now that it’s mainstream it’s easy to get. It’ll trickle down to Android, and things like in-car entertainment GUIs could become super slick.
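For anyone wondering what "coding to make use of it" looks like, here's a minimal sketch in Rust (assuming Linux and the libc crate; the priority of 80 is purely illustrative) of the usual first steps - locking memory and asking for SCHED_FIFO:

    // Minimal sketch, not production code: lock memory and switch the calling
    // thread to SCHED_FIFO, the usual first steps on a PREEMPT_RT kernel.
    fn make_realtime() -> Result<(), std::io::Error> {
        unsafe {
            // Keep all pages resident, so page faults can't add latency spikes.
            if libc::mlockall(libc::MCL_CURRENT | libc::MCL_FUTURE) != 0 {
                return Err(std::io::Error::last_os_error());
            }
            // Ask for the real-time FIFO scheduling class for this thread.
            let param = libc::sched_param { sched_priority: 80 };
            if libc::sched_setscheduler(0, libc::SCHED_FIFO, &param) != 0 {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(())
    }

    fn main() {
        make_realtime().expect("needs CAP_SYS_NICE or root");
        // ... the latency-sensitive loop goes here ...
    }

You'd normally do this only for the threads that actually need it, and leave the housekeeping on ordinary scheduling.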

It should put a dent into RedHat. They did (do?) a spin of RHEL called MRG, the R being a PREEMPT_RT kernel. They charged a ridiculously large fee for this. Now it’s a free thing.

Fission impossible? Meta wants up to 4GW of American atomic power for AI

bazza Silver badge

Re: "They just don't happen to be used in the US, "

CANDU's use of heavy water was (is?) a bit of a bonus for the ISIS particle accelerator at the UK's Rutherford Appleton Lab. Heavy water being disposed of by the Canadians was really good for target cooling there.

Trouble was, after a tour in a CANDU reactor, the heavy water would be lousy with tritium (T2O, or DTO, or THO). So, the ISIS facility had to install a load of Tritium detectors in case of leaks (you really don't want to ingest Tritium), and because (in theory) leaks didn't happen these were on something of a hair trigger.

The trouble was that, at the time, next door, there was the UKAEA and its reactors, and because they had reactors of a particular sort they were prone to guffing off vast clouds of tritium such as you'd never believe (and were licensed to do so). And it'd waft over the fence, blow in through gaps in windows, etc. and set off the tritium detectors in ISIS. Every single time there'd have to be a check, a search, etc, just in case, but of course nothing was ever found to be amiss.

This was way back in - I guess - the late 1980s, very early 1990s. I don't know if they use heavy water today in ISIS; they're not using uranium targets like they used to, so perhaps the D2O has gone too.

bazza Silver badge

Is it just me, or does anyone else think that the expenditure of 4GW on "AI" (for any purpose) is a complete and utter ****ing waste of resources?

To put that into context, so far as I can tell a ton of H2O produced from seawater using reverse osmosis requires 2.98kWh (see this presentation, slide 19). 4GWh could produce 1.3 million tons of fresh water, every hour. That would be quite a lot of irrigation water for farmers, or an awful lot of clean water for houses.
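(For anyone checking the arithmetic: 4GW for one hour is 4GWh, i.e. 4,000,000 kWh, and 4,000,000 / 2.98 ≈ 1.34 million tons per hour.)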

Fresh releases of Xfce, Mint, Cinnamon desktops out in time for the holidays

bazza Silver badge

They're getting harder to avoid, as is Fedora's equivalent.

The fragmentation in Linux of "How to Distribute Software" is the biggest barrier to adoption outside the limited world of enthusiasts / experts. Windows and Mac just sorted it out, once, and haven't had to think about it in years and years and years.

Both KDE and GNOME to offer official distros

bazza Silver badge

Re: RedHat Dominance

Possibly. Though I'm not entirely convinced that they won't make SystemD and Gnome co-dependent... That might sound absurd, but then there's a lot that is absurd around already...

bazza Silver badge

RedHat Dominance

This sounds like the beginning of the end of other distros having competitive use of Gnome. If RedHat can make GnomeOS = Fedora, and start making it messy for other distros to incorporate Gnome, then they start hoovering up what few desktop users there are out there.

'Alarming' security bugs lay low in Linux's needrestart utility for 10 years

bazza Silver badge

Ah, the perils of having security critical code written in a scripting language. This is not the interpreter you were looking for, enjoy the experience!

bazza Silver badge

Re: Nothing valuable appears to be lost doing this.

Most of the times I’ve updated my Ubuntu server installations there’s been a fresh shiny kernel to reboot into, so a manual reboot is required anyway. That does rather negate the value of having a tool that tries to work out whether anything else needs a reboot.

Rust haters, unite! Fil-C aims to Make C Great Again

bazza Silver badge

The non-portability of assembler is certainly a severe down-tick for it, for large projects!

Portability of Optimisation

One aspect of the Asm-vs-C debate was "you can never write a compiler that produces code as optimised as a skilled asm programmer can". Of course, CPUs (their pipelines, caches, etc) became so complex that optimising by hand became really hard, and C compilers could cope with that and - for most purposes - won. The only place where compilers failed was on Itanium, but that's not their fault!

There is still a place for hand written asm in, say, DSP applications; the best fft() I ever came across was handwritten asm for PowerPC/AltiVec, proprietary, crafted by a company that had a collection of devs who really, really understood PowerPC, its pipelines and caches. It was 30% quicker than the next best thing (FFTW). It gave the company (who also did hardware) an enormous advantage in the market because applications built using their math library required 30% less hardware than competitors'. When you're selling into the military aviation market, a 30% reduction in hardware requirement for an application is worth $billions.

Future Aspects of Functional Portability, if Hardware Changes Radically

C and Rust are on a par regarding portability, for the moment. As I've highlighted in other posts, Rust's "knowledge" of data ownership does mean that it could in theory be evolved to a point where function calls / threads are automatically mapped on to Communicating Sequential Processes (CSP) processes (akin to Go's goroutines). So if one did build a Rust compiler for CSP hardware (i.e. not today's SMP hardware, riddled with Meltdown / Spectre bugs), one's Rust source code could be automatically compiled for it and benefit from it. That'd never be possible with C.

This possibility in Rust is hinted at in a blog post. Under the hood, Rust could build code for non-SMP hardware just as easily as it can for SMP hardware. It doesn't at the moment because there's (effectively) no such hardware, but there's no theoretical reason why not.

Actually, it's not quite true that there's no such hardware. Super Computer clusters are not SMP environments.

bazza Silver badge

Re: 1.5x slower....

Yep, and there's some CS types who have called for the abandonment of SMP. Attempting to fake an SMP environment on top of today's chips is what's led to such silicon flaws.

Rust is interesting because - with its total knowledge of what is accessing what memory - it is very well suited to CSP environments. Those, if implemented in hardware (remember Transputers?), are far less likely to have flaws like Meltdown and Spectre. With Rust, passing data from function to function whilst having knowledge of ownership could be used to transfer data over CSP channels from CSP node to CSP node. I don't know if Rust actually does that at present, but it could. I know there are CSP implementations for Rust, just like Go.

So Rust might be a half-way house between SMP code and CSP code, in that it "feels" like you're writing for SMP, but actually underneath it could all be running on CSP hardware, and you'd never know the difference.

bazza Silver badge

Which indeed is Rust, or another.

Thing is, C-like languages - to be rendered "safe", or at least "known behaviour" - require quite extensive runtimes, particularly if you've got separate threads sharing data. And that's how you end up with things like C#, Java, etc. I've not looked at either TopC or Fil-C, but I find it difficult to believe that they achieve complete memory safety when there's threads involved (unless they too have a heavy weight see-everything runtime). Taking away 90% of mistakes is a start, but I fear that the remaining 10% would be the really tough bugs to find, and also the most important ones to find.

I can see another advantage. C to Rust conversion is said to be tricky, I suspect because first you have to fix all the mistakes in the C/C++ before it's in a Rust-like shape. C/C++ compilers that - at run time - can give you error output akin to "ah well, you've got this wrong here" could serve a useful purpose in allowing code to be bashed into better shape in a language which is familiar, rather than doing so whilst also translating to a language that is not.

General Dilemma

Rust has certainly thrown in a curve ball. For the first time in decades, there is a truly plausible and arguably superior alternative to C / C++. Unless Rust actually dies, the arguments against migrating to it are going to get thinner and thinner. For large C/C++ projects, it's a horrid choice between sticking with what's known for an easy life, risking project death (as C/C++ gets abandoned) but hoping for the best, or biting the bullet now and cracking on with converting, or (even harder) both.

This was always going to happen at some point or other in the history of computing. Indeed, it's already happened; once, there were assembler programmers convinced that assembler would live on forever in, say, OS development and hot apps. C soon wiped that out. If we're to draw a lesson from history at all, it's that C / C++ is going to lose, and probably faster than anyone thinks.

bazza Silver badge

Or just try another OS / C allocator pairing.

I've seen code (GNU Radio) run just fine on Linux, but the same thing compiled and run on FreeBSD segfaulted with a memory leak. No one was ever going to find that bug on Linux...

Ideally, OSes / libc's would offer two modes: hyper-optimised (minimum OS interaction, fastest possible allocation), or super-defensive (lots of space between allocations, all freed memory immediately unmapped and returned to the OS).

Airbus A380 flew for 300 hours with metre-long tool left inside engine

bazza Silver badge

Yep, and it's very simple concept. You count the tools out, reason says you should have the same number back when you're done. Tools do not evaporate, or melt, or get spirited away by the little tool pixies.

It's absolutely horrifying that Qantas fooled themselves into acting as they did. Clearly they considered the "ah, Barney must've taken it home" type of scenario to be far more likely than the "we've left it somewhere it oughtn't be" scenario. If they were prepared to consider that it was possible for a tool to go missing in a "safe way", then their whole mindset was wrong. That in turn means that in fact they never had a tool counting culture in the first place - not a real one - and that they have been operating dangerously for a long time. This time, it actually went wrong, and they're just lucky it wasn't a disaster.

If one has a safety measure - such as tool counting - you have to exercise the safety measure. What they need is an inspector who - during the course of work - nicks a tool and then times how long it is until the maintenance crew notice. If they change the work shift without noticing, fail. Depending on the tool, if they don't notice within a few minutes - fail.

bazza Silver badge

Re: "Qantas personnel in Sydney even requested removal of the report"

Shows how easily the pressures of conducting business can override safety.

That aircraft should not have been allowed to move 1 inch until that tool had been accounted for. That it did and flew 34 cycles speaks volumes about the motivations in play.

It also speaks volumes about the thoroughness of "walk arounds". That location where the tool was found is visible between the fan blades, if one makes the effort to look. I doubt they took the fan off to take this photograph! If one enlarges it one can clearly see that we're looking through the engine and out the other end of the bypass duct, with the tool stuck at the bottom of it. It means that, in the walk arounds before each one of those 34 flights, no one really looked "in" the engines from the front. Most likely they simply looked "at" the engines, which to be frank is ****ing pointless. I also think that it'd have been visible from the back end of the engine too, had anyone looked forwards through the bypass duct.

Granted, the engines on an A380 are pretty high up and you'd need a ladder to have a proper look. However, considering there was a tool missing you'd think that someone would have made the effort.

Qantas likes to pride itself on never having had a crash. Well, they came far too close to having one on this.

Fortunately, the risk of this being ingested by the engine core (where it could cause real damage) was pretty low. The flow of air through there tends to move things outwards. To have reached the engine core, the tool would have to have migrated inwards against the flow of air. One of the issues facing engine designers is actually getting air to flow into the engine core in the first place, when it doesn't want to. It's largely why the fan blades themselves have a twist - starting nearly straight at the root to let the air reach the inlet for the core itself, and twisting further out to have a shape that'll actually generate thrust.

What would have happened eventually is that the tool would have disintegrated, probably on take off (peak thrust), blowing lumps of nylon all over the runway, probably without harming the A380 in any noticeable way (unless there are any delicate sensors jutting out into the bypass duct further back). However, the next aircraft to take off could be hitting that debris, maybe suffer a bunch of tyre blow outs, and have itself a take off crash. That is the risk I think Qantas ran.

The US government wants developers to stop using C and C++

bazza Silver badge

Re: Why?

>Can someone give me a summary of just what about Rust is causing the translation problems?

I gather that one issue is that it won't let you get away with things that are commonly done in C/C++ but are actually quite dangerous. I think this can mean that there's a lot of re-work to do before the code is translatable.

bazza Silver badge

Re: Why?

There is no doubt some element of that. The difficulty this time is that Rust might not go away in five or six years. It's certainly the strongest challenger there's ever been.

I think the strength of Rust is causing a lot of understandable angst. If one has decades of C/C++ behind one's career - like I do - Rust is threatening to reduce us all back to square one. That is threatening, as one can no longer say that one has 20+ years more experience than, say, a kid fresh out of college. However, it's largely a matter of attitude. Learning it is achievable, and if achieved, one is ready to go. And, there's an awful lot more to programming experience than just "what language one talks".

We have been here before. Once, there were a lot of assembler programmers convinced they'd got jobs for life. That didn't last...

For projects, it's a real problem. Staying in C, now, could mean having a dead project in 5 to 6 years if the world decides to go for Rust. Yet, converting to Rust now might be a wasted effort. Some cautious transitioning (given that it's achievable) is probably a good idea, just to get a toe-hold in the Rust world. Linus seems to think along those lines, though not everyone in the LKP seems convinced.

I think that the real indicator would be if Rust got standardised. Once there is ISO Rust (or, similar), then it's something that becomes very hard to say is temporary. In fact, I'd not wait for it to become standardised; I'd start when it became clear that ISO were starting to standardise it.

bazza Silver badge

Re: Why?

If it comes to that, neither C nor C++ supports dynamic linking. Dynamic linking is really an OS thing, and whether or not there are library functions related to tapping that OS functionality.

There's another way of doing inheritance. Rust - thanks to its strong knowledge of ownership - is an ideal language in which to implement CSP (Golang has also implemented CSP). And, there are CSP implementations for Rust. CSP is interesting because it opens up a whole different view of Object Orientated Programming, Encapsulation, Inheritance, etc.

Encapsulation is a CSP process with its own internal data. That data is not directly accessible to another CSP process (like it would be from another thread in the same process in an SMP environment). A process is an opaque "Object" with interfaces (channels). A method is invoked by sending messages to the process via a CSP channel, causing the process to do something. A result may be returned down another channel. Inheritance is easy; a process incorporates another process, and passes any messages it receives that it doesn't recognise down into that inner process. Multiple inheritance is easy - a process incorporates two or more other processes.
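A minimal sketch of that idea in Rust, using nothing more than std::sync::mpsc channels (the Counter "object" and its message enum are invented purely for illustration):

    use std::sync::mpsc;
    use std::thread;

    // "Methods" become messages sent down a channel.
    enum Msg {
        Increment,
        Get(mpsc::Sender<u64>), // carries a reply channel for the result
    }

    // The "object": a process owning its own private state.
    fn spawn_counter() -> mpsc::Sender<Msg> {
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            let mut count: u64 = 0; // encapsulated - nothing else can touch it
            for msg in rx {
                match msg {
                    Msg::Increment => count += 1,
                    Msg::Get(reply) => { let _ = reply.send(count); }
                }
            }
        });
        tx
    }

    fn main() {
        let counter = spawn_counter();
        counter.send(Msg::Increment).unwrap();
        counter.send(Msg::Increment).unwrap();

        let (reply_tx, reply_rx) = mpsc::channel();
        counter.send(Msg::Get(reply_tx)).unwrap();
        println!("count = {}", reply_rx.recv().unwrap()); // prints 2
    }

The state lives entirely inside the spawned process; the only way to touch it is to send a message, which is exactly the encapsulation described above.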

This way of looking at CSP is quite useful for helping those with a long and deep experience in object orientated programming to switch over to CSP. It's also very useful in that - with a collection of CSP processes instead of a collection of C++ objects - the deployment of those processes suddenly becomes very "agile". Go and Rust do not extend CSP channels over network connections, but it would be very nice if they did (and there's already enough in the language to allow them to work that out for themselves if that's what they wanted to do). Erlang does (I think).

A closely related idea - Actor model programming - is more easily network distributed. Libraries such as ZeroMQ make that veeeery easy indeed.

The interesting thing to consider is whether - in this day and age - ideas like CSP / Actor model are any slower than conventional code written for SMP environment (i.e. memory shared between processes / threads, guarded by locks, etc). In today's multi-core multi-CPU machines, data is not shared; to be "shared", it's got to be copied into all the relevant caches, and there's a lot of cache-coherency traffic between cores to make sure they all have the same "version" of the data. Whereas with CSP / Actor model programming, you just copy it from A to B. There is no cache-coherency to guarantee. So down at the microelectronic level, there can be precious little difference. There's quite a few computer scientists these days who think we should go the whole hog and abandon SMP, simply to get rid of defects like MELTDOWN and SPECTRE forever.

C++ can never be provably safer than Rust. They'd have to deprecate most of the language to achieve that. It could equal Rust, but I can't see how it could ever be safer (unless it becomes Rust without the unsafe keyword). As others have said, Rust does support dynamic linking. It would be nice if there were an ISO Rust, but it's

bazza Silver badge

Re: Stop with the useless A better than B crap

I remember reading an article in a journal from the UK's IEE (now, IET), back in the days when the IEE really was a hotbed of engineering study and standards setting. The article was the results of a study and analysis of real safety critical systems in actual use. This was a very long time ago - early 1990s I think - and I'm afraid I can't remember the title or any reference, and it's pre-Internet.

The systems were mixed - air traffic control, nuclear reactor control, flight control, etc. All stuff that mattered. A variety of languages had been used - C, Ada, even Fortran I think. All had cost a large amount of money, and all had good operational reputations. The systems were studied against two categories - semantic errors per 1000 lines of code (i.e. code that compiled, but was wrong against the specification), and operational errors per 1000 hours of operation.

They were all pretty good, but none were perfect. The interesting thing was that the very best was an air traffic control system, written by IBM, in C. The worst was some system written in Ada (I can't remember what it was).

The analysis considered this, and determined that, for the IBM ATC system, IBM had rolled out its very best, most experienced A-Team developers; the dangerous-chainsaw view of C was referenced (if you have a chainsaw with zero safety features, blatantly dangerous to use, you may very well take a lot of care in using it!). The Ada system did less well, because they'd had a less experienced team. The IBM ATC system was (I think) also the most expensive...

Modern World

One of the challenges these days is getting anyone to do any coding in anything like a rigorous manner. For a lot of systems that get built these days, there's little expectation of producing a high quality result, and the end users have little faith in getting one. Bugs and general system cruddiness have become far too "normal".

bazza Silver badge

Re: Which language do you think is used to implement all those memory-safe languages?

>.NET has been written in C# for some years now.

>Prior to that Delphi was written in Delphi from the start.

>(mentioned becuse it had the same creator as C#)

>And Go is written in Go.

Er, apologies for the pedantry, but if I take even the merest of casual glances around https://github.com/dotnet/runtime, I seem to find an awful lot of C and C++ files. There is C# code there too, but it's clearly a long long way from being entirely written in C#. I gather that the C# compiler is written in C#, but the .NET runtime is (at least in large parts) not. But, more on that later.

However, I firmly agree in general terms. Once a language and/or environment is bootstrapped into itself, you're up and running. One might have got to a running Go compiler by writing one in C++ first, but indeed, so what? That's not the compiler that matters; it's the one that's in use (the one written in Go) that matters.

Runtimes

Runtimes are interesting, because of what they have to do. For example, a C program cannot print anything on a terminal without using functions in (for example) glibc, and those functions work only because they've been written to load up the appropriate registers with the appropriate data and generate an interrupt to initiate a kernel / system call. In theory you don't need glibc for C, because if you really wanted to, your own program could make the system call itself.

Such things are not options for all languages. There's many languages that have no concept of registers and interrupts within them, and don't include any means of inlining assembler to do it either. So, one cannot write their runtimes (that have to interact with an operating system) purely in that language; there has to be a language to native transition somewhere within the runtime. Presently, that means having to use C, C++ or assembler (or other language that lets one mess around with actual registers and interrupts) for the runtime.

Rust is interesting because, whilst it shares much of the look and feel of a high level language, it is possible in Rust to make operating system calls directly from within Rust. So, all of Rust (including its runtime) can be written in Rust, with no trace of C / C++ at all (unless one starts considering the implementation of the syscall inside the OS itself!). So, one could perhaps "improve" languages / platforms like C#, Java, by re-writing their runtimes in Rust.
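As a (contrived) sketch of what making a system call directly from Rust looks like, here's a write() done with inline assembly and no libc at all - assuming x86-64 Linux, where syscall number 1 is write:

    use std::arch::asm;

    fn main() {
        let msg = b"written via a raw syscall\n";
        let ret: isize;
        unsafe {
            asm!(
                "syscall",
                inout("rax") 1isize => ret,  // syscall number 1 = write on x86-64 Linux
                in("rdi") 1usize,            // fd 1 = stdout
                in("rsi") msg.as_ptr(),      // buffer
                in("rdx") msg.len(),         // length
                out("rcx") _,                // rcx and r11 are clobbered by `syscall`
                out("r11") _,
                options(nostack),
            );
        }
        assert_eq!(ret, msg.len() as isize); // write() returns the byte count
    }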

Re-writing those runtimes in Rust would eliminate the "Which language do you think is used to implement all those memory safe languages?" question altogether.

Has anyone done a Python implementation in Rust yet?!?!?!? Oh yes!

System Calls from High Level Languages?

C# is interesting, because of C++/CLI. I've often used C++/CLI as a means to integrate calls to native code into .NET, because it makes it so trivial to do so. C# can do it, but there's a lot of marshalling and P/Invoking going on. With C++/CLI, you just call it (and, I guess, it's hidden all the marshalling and P/Invoking from you).

What I've not explored is whether or not one could make a system call in C++/CLI. One doesn't need to (or indeed, know how to) on Windows (that's what the Win32 API DLLs are there for), and I don't know if C++/CLI can run in .NET Core on Linux (I think it might be from .NET Core 3.1). However, my point is that there is a .NET language that can (or very nearly can) make system calls directly from within the language. That suggests that without too much extension or difficulty, a high level language like C# could be given the means to make system calls, and then both it and its runtime could be written entirely in the language itself.

bazza Silver badge

Re: It's not the language, it's just the way it's "talking"

>I do however like the idea that F-35s etc may in future be powered by Rust, if they're not already.

Not yet - Rust is too new. Parts of the F-35 rely on INTEGRITY - the OS from Green Hills - and that's written in C (using Green Hills' own C/C++ compiler and C/C++ runtime libraries). If you want compilers, libraries and an OS that has strong assurances of being correct - look no further.

But it does smell like the US Gov (which will include the DoD) will start getting quite insistent on Rust being used, and I can see why.

What Else?

Rust is one component of it. Adopt it, forbid the "unsafe" keyword, and in theory you end up with code far less prone to memory mis-use errors.

However, when one looks at today's hardware, MELTDOWN / SPECTRE and similar are all about memory misuse / mishandling within CPUs. And it's interesting to consider what can be done about that. There have been articles here on El Reg on the topic of the need to get rid of C in the hardware sense too. C / C++ and today's libraries for them all assume that code is running on a Symmetric Multi Processing hardware environment (for multicore hardware). But, the hardware hasn't actually looked like that for decades; SMP is a synthetic hardware environment built on top of things like QPI or HyperTransport (or newer equivalents), and these cache-coherency networks are what causes the MELTDOWN / SPECTRE faults which the CPU designers are seemingly powerless to fix. Apple's own silicon has recently been found to have such faults - they're unfixable in M1, M2, and they've not disabled the miscreant feature in M3 even though they can.

So, it looks like we should be getting rid of SMP. That would leave us with - NUMA.

We've had such systems before - Transputers are one such example. The Cell processor in PS3 was a bit NUMAry also (in how one used the SPEs). Super Computer clusters are like this too (no direct addressability of data held on another node). Various researchers are getting re-enthused about such architectures, pointing out that even 7 year old kids can be taught how to program for them.

Of course, such a hardware switch devastates existing SMP-centric code bases, like Linux.

What Does This Have to do with Rust? I hear you ask. Well, Rust has (in theory) perfect knowledge of what owns what data and when. You pass ownership of a piece of data from one thread to another, and it knows you've done this. An object cannot be mutable in two places at once in Rust. It is almost completely ideal for conversion from running on an SMP environment to running on a purely NUMA environment. Whereas passing ownership at present is simply used to determine what code can use some memory, it could also serve to prod the runtime that data needs to be moved from one NUMA node to another.
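A trivial sketch of what "it knows you've done this" means in practice - once a buffer has been sent to another thread, the sending side can no longer touch it, and the compiler enforces that at build time, which is precisely the sort of information a NUMA-aware runtime could exploit:

    use std::thread;

    fn main() {
        let buffer = vec![0u8; 1024];        // owned by the main thread

        let worker = thread::spawn(move || { // ownership moves into the new thread
            buffer.iter().map(|&b| b as u64).sum::<u64>()
        });

        // println!("{}", buffer.len());     // compile error: `buffer` was moved
        println!("sum = {}", worker.join().unwrap());
    }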

In other words, Rust is a pretty good candidate as a language that suits both SMP and NUMA architectures.

Golang is another - in fact, Golang makes no bones about being a recreation of the Transputer's CSP architecture. Golang is quite hilarious / ironic in the sense that it implements CSP, and has to do so in a faked SMP hardware environment, where most of today's hardware has more in common with the Transputer than with actual SMP hardware of the 1980s / 1990s.

Python multiprocessing is another. Copy data from process to process - don't share it.

This then opens up the possibility that the US Gov - having "forced" Rust on to the software world, got a Rust OS - might then start requiring hardware architectures to drop SMP too.

The Future

That hardware shift is some way off, and a bit of a long shot. However, if it does come, persisting with C / C++ code bases for whatever reason could become an even bigger liability in the future than anyone is thinking of at the moment. Not only might it become hard to find developers willing to write in it, or customers willing to accept code written in it, it may become difficult to find hardware to run it on.

That ought to worry the likes of Linux more than it appears to.

To be certain that today's SMP environments will survive and will be able to keep running C/C++, these projects need the hardware manufacturers to fix cache coherency / hardware memory faults once and for all. Though there seems little prospect of that.

Shared Memory is, Today, no Different to Copied Memory

The classic "don't copy data, send a pointer to the data if you want it to be fast" is a maxim that should have died decades ago. It was only ever true in actual SMP environments, like Intel's NetBurst era of the early 2000s.

Today, for one core to access data in memory attached to a different core, pretty much the same microelectronic transactions have to take place as would be required to simply copy the data. If code on Core 0 accesses an address actually in memory attached to Core 1, then a copy of that data somehow has to find its way into Core 0's L1 cache before the code can actually do anything with it. But, that is a copy. The problem today is that - because this is an SMP environment and Core 1 has also (probably) accessed that address recently - there has to be a lot of traffic between Core 0's memory subsystem and Core 1's, to make sure that all the caches everywhere have the current content of that address.

If you look at any Intel CPU today, you may end up with copies of the data in CPU0 / Core0's L1 and L2 caches, and in CPU1 / Core2's L1 and L2 caches, and probably also in one of the L3 caches somewhere as well as in some DDR5 memory chips. That's six copies of the data, all of which need to be kept in sync with each other.

Just think how much easier the hardware would be, if such sharing were entirely up to the software to resolve, and how sweet that'd be if it was all resolved because the programming language's syntax made it very clear where the data needed to physically be?

That's how important Rust could end up being, giving us a bridge from yesteryear's outdated and difficult hardware architectures to tomorrow's.

bazza Silver badge

Re: It's not the language, it's just the way it's "talking"

Just because the means to allocate memory is a function in a library does not mean that the language itself is memory safe. You can declare a pointer, assign some random address to it, and dereference it, all without any external libraries. The code may even work - sorta - on some platforms - depending - but quite a lot of the time it's either going to crash, or cause havoc, or both.

Such code would never pass any reasonable programmer's definition of "memory safe code". C is not an inherently "safe" language. Anyone claiming that it is has a rather too blinkered view of the language.

C++ is even worse, as new is a keyword and part of the language from day one.

bazza Silver badge

Re: No, of course I've no idea if this remotely resembles the actual syntax used...

The Ariane 5 first launch failure was primarily a management failure. The developers had made a mistake that was assessed and accepted for Ariane 4, which was very successful. Management didn’t allow a reassessment of this for Ariane 5!

bazza Silver badge

Re: What is old is new again

One of the problems with Ada was that it featured parallelism (threads) in an era when most OSes had no such concept. It was only when Green Hills did an Ada compiler that targeted VxWorks and used OS threads (tasks in that OS) for Ada threads that Ada became a practical (for some measure of practical) language.

Prior to then Ada compilers had had to implement their own mini OS / scheduler runtime to be parcelled up with the program. And that didn’t work very well. In that era I worked for a bunch who’d bet a whole project on Ada but had so much of it that the program couldn’t be compiled…

Thanks, Linus. Torvalds patch improves Linux performance by 2.6%

bazza Silver badge
Pint

Given that that 2.6% gets rolled out across a vast number of machines across the planet, he's probably personally responsible for a few power stations getting switched off (in aggregate!).

No-Nvidias networking club convenes in search of open GPU interconnect

bazza Silver badge

Interconnect Wars

Ah, a return to the good old interconnect wars. How refreshing.

The logical conclusion is preordained. It’s now very expensive to develop a new interconnection technology, more or less on a par with a whole new CPU design + matching new silicon fab. It’s now likely $10billion plus, and no one can afford that unless there is a mass market to sell to.

What this means is, ultimately, something called Ethernet will win. It may be very different tech to today’s Ethernet, but to thrive commercially it’ll have to fill the same use cases as Ethernet does. And as optical Ethernet is already touching 400Gbit, and certain specialised applications of 100Gbit are already running in non-IP based memory-to-memory multilane set ups in open standards systems with VITA in their names, I’d say that this new consortium has got its work cut out.

Boeing launches funding round to stave off credit downgrade

bazza Silver badge

Re: Two bald men sharing a wig

The CFM RISE is a fairly significant compromise. It "works", largely because of the weight reduction in not having a fan cowling, blade-off containment ring, etc. It doesn't work in the sense that the losses at the blade tip are notable.

Thing is, if you can get regulatory approval for an open fan, you could get regulatory approval for a conventional cowled fan that omits the heavy blade-off containment ring. Consequently, as a much lighter weight structure, a cowl can still provide the optimised tip aerodynamics of a closed fan, without much weight penalty. Likely that conventional architecture - much lightened - still wins. Throw in other advantages - like keeping delicate blades away from luggage trucks.

bazza Silver badge

Gets a bit tricky if POTUS ever boarded an RAF flight...

bazza Silver badge

Re: Funding round ? For Boeing ?

It's not in the US Gov's gift to save Boeing, except by fully acquiring, owning and running the company for itself.

A "saved" Boeing to most people's way of thinking is the company restored to both engineering and financial health, operating as a business on the world market. However, unless they get a whole lot better very soon, the world market is going to have moved away from Boeing. Both suppliers and airlines are desperately trying to do so already, in effect pleading with Airbus to make more aircraft than they already are. The suppliers and airlines simply cannot afford not to deliver parts or not to take delivery of completed aircraft. They have to seek alternative supply. Airbus is reportedly slowly caving in to demand, but if it sets up another A320 FAL that's it.

And in case one thinks that that is a far off distant possibility, think again. During this strike, Boeing's suppliers have already been asking Airbus for work. All it takes is for one of those - a critical one - to decide that they've had enough of working with Boeing and have decided to accept regular work and paid invoices from Airbus instead, and Boeing comes back from this strike finding that it can't complete aircraft any longer. Then what?

And, Embraer too has been joining in, talking to airlines about a 737 replacement of their own. They're more than capable of building it, and if they got a move on they could probably start delivery inside 5, 6 years. If Boeing isn't fixed well before then, Boeing could find that they're getting a lot of cancellations, orders going to Embraer instead.

And then there's the FAA. Boeing's order book is largely meaningless, unless the FAA's reputation abroad is trusted by its peer regulators. Boeing cannot sell aircraft outside of the USA unless the likes of EASA, CAA, CAAC, etc. are content with the FAA's oversight. But we have Elon Musk seemingly challenging the FAA's power to impose fines in the US Supreme Court. If he wins, the FAA could be seen to no longer be an effective regulator of US aerospace in the eyes of overseas regulators. If that happens, Boeing are toast.

In short, there are many reasons why there may not be a Boeing left for the US government to save. If they were going to intervene, the ideal time was 20 years ago, except they denuded the FAA of resources and made it hard for the regulator to control the company. Now, or any time in the next year or so, is probably their last chance to influence the company in a way that leaves it as a commercial business.

There's also the geopolitics; the US Gov simply handing Boeing $60billion to clear their debt would amount to the largest ever subsidy. That's not going to look very good on the world stage, having spent years complaining about Airbus's loans from European governments.

One also has to look at it in other ways. Boeing is not popular in DoD - quality has impacted them badly too. Overall, the US gov is probably more interested in maintaining capability in the USA than in maintaining a specific company. That could be, Airbus grows in Alabama, becomes a bit more American.

bazza Silver badge

Re: Funding round ? For Boeing ?

They're not too big to fail. Arguably, they already have failed in that they're not a profitable, healthy company operating in a way that is useful to the US gov and beneficial to the US economy. They're dragging lots of other companies down with them whilst they fail too - suppliers, airlines, all over the world.

And the way things are going, there won't be anything left to save.

bazza Silver badge

Yep. Only a few years ago they were busily engaging in stock buy backs. The irony...

It's a text book example of a management mortgaging the future to enrich the shareholders of today, including themselves.

Boffins explore cell signals as potential GPS alternative

bazza Silver badge

Agreed. This is not a robust resilient alternative to GPS. As you point out, it cannot be unless they re-work how cell stations do frequency control.

It's a non-trivial problem though. Whilst one could fit the base stations with high quality low phase noise reference oscillators, that in itself is not a long term solution. It would tide the cell network over short GPS outages, but after a while the cells would have diverged from an agreed timing and the network would stop operating. They need that common external reference to keep the whole network in sync, which is why they picked up on GPS in the first place. Using the cell network as a back up to GPS positioning doesn't work, because anything likely to take down GPS in a serious way is going to mean no GPS services for years. There's no way that even the finest low phase noise oscillators in base stations could keep the networks in sync for that long.

There are alternatives. Radio clocks - e.g. Rugby / MSF. Run off the UK's atomic clock resources, it's always right, but I don't know if one can sync an oscillator to it sufficiently accurately for the purposes of a cell network; the signalling method for radio clocks is something like 60kHz, and a very accurate 60kHz, but the time of day at the receiver is good to only 1millisecond (GPS receivers can do far better).

Other technologies are also likely limited in a similar way, e.g. eLoran. That too can carry a timing signal, but is also a low frequency narrow bandwidth signal (which is what limits the timing resolution that can be achieved). That's not a reason to not (re)build eLoran, as that in itself offers a pretty good location service. It'd be far better to put resource into (re)building the eLoran transmitter networks than it would into bodging up a location receiver using cell networks.

Better Long Term Solution

Probably what they will have to do is to redesign the cell networks so that they can distribute time themselves. It would require all the cell stations to be able to hear at least one other cell station in the network, but if that were arranged then they could build a self synchronising network. That fits a number of things nicely:

First, there's pressure from the UK gov for companies to share cell sites, stop the mad dash for prime sites amongst the industry players. The ultimate conclusion is that there's one single physical cell network, with all the current "operators" becoming virtual networks on it just as the smaller providers like Virgin and Giff Gaff are today. Merged into one physical network, it's more likely that all the cell base stations are within reach of at least one other cell base station.

Secondly, resiliently self-synced like this, then the cell network does indeed become a solid, reliable, multiply redundant source of time and position services. In fact, they could re-engineer the lower layers of the 5G stack specifically to provide time and position services, rather than such services having to be synthesised (badly) from cell emissions as signals of opportunity.

Won't Happen unless kicked

Thing is, this won't happen spontaneously. A self syncing network is going to cost operators money in one way or other, either through the use of bandwidth for synchronisation, added base stations to provide the synchronisation grid, etc. To make this happen, several major governments around the world are going to have to intervene in the market and pass laws requiring that such things are required by the licensing regulations for cell networks. That's going to take political cooperation between some pretty major governments currently holding adversarial positions against one another...

The Western world could go it alone in this regard. It's going to cost money, and it'd drive a wedge into the existing global standardisation process. That might fit various tech repatriation agendas some governments have.

Apple quietly admits 8GB isn't enough in 2024, M4 iMac to ship with 16GB as standard

bazza Silver badge

8GB wasn't enough when they first launched their own silicon, years ago. 16GB is an absolute bare minimum.

Packaging the RAM with the CPU and GPU is one way to make things trot along quickly, but it's clearly far too limiting. I'm sure the only reason why they offered 8GB in the first place was that they've had to use some pretty fancy RAM modules to get 8GB on the package in the first place. And as RAM hasn't really progressed in development like CPUs have, they're likely doomed to find their silicon falling away in terms of achievable RAM capacity. For example, if I want to, I can put 128GB into this several years old laptop if I so wished. So far as I'm aware, Apple no longer has any machine that can be fitted with that much memory, and would likely need to resort to memory DIMMs (or a soldered-in equivalent) to do so. That in turn would increase their power consumption, thermal challenge, cache design/sizes, etc.

UK sleep experts say it's time to kill daylight saving for good

bazza Silver badge

>Stuff like CCTV I don't even bother with the changes.

You have to be quite careful about timestamps on CCTV, if you want to be able to gain anything (by way of admissible evidence) from having the CCTV. If you cannot show that the time stamp on the recording is accurate, it's difficult to persuade a court to allow it to form part of the evidence in a court case. The reason is that a crime is committed by a person, in a place, at a time. If the "time" can be cast into doubt, it's difficult to say that the defendant's alibi is false. Especially if the alibi is backed up by evidence that is correctly timestamped (e.g. CCTV from a pub).

Getting this right can be as simple as ensuring that it has a connection to an NTP server, updates, and that setting the time is logged. If you have a log file indicating that the time is being set according to reputable NTP servers, it's pretty difficult to dispute your CCTV coverage.
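By way of illustration - assuming the recorder (or the NVR feeding it) is a Linux box running chrony - a handful of lines in chrony.conf gets you both the syncing and the audit trail (server names and paths here are just examples):

    # /etc/chrony/chrony.conf - illustrative only
    # Sync against public NTP servers
    pool pool.ntp.org iburst
    # Remember the local clock's drift rate across restarts
    driftfile /var/lib/chrony/drift
    # Step the clock if it's badly wrong during the first few updates
    makestep 1.0 3
    # Keep a written record of measurements and adjustments
    logdir /var/log/chrony
    log tracking measurements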

bazza Silver badge

International Atomic Time

Really, all IT, technology and software should be working from TAI (International Atomic Time - with a French accent). That follows the time rules that are commonly used in software (365 days a year, 1 extra day in leap years, no leap seconds, 7 days a week), and was aligned to what GMT was at some point back in the 1970s.

All representational time - i.e. "computer" time presented to a human for consumption - should be converted (if required) into the local time zone, and if it really matters converted with the appropriate number of leap seconds.
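Linux already exposes a TAI clock, as it happens. A small sketch using the libc crate (and assuming the kernel has been told the current TAI-UTC offset by ntpd / chrony - otherwise CLOCK_TAI just mirrors CLOCK_REALTIME):

    fn main() {
        // CLOCK_TAI ticks in SI seconds with no leap-second discontinuities.
        let mut ts: libc::timespec = unsafe { std::mem::zeroed() };
        let rc = unsafe { libc::clock_gettime(libc::CLOCK_TAI, &mut ts) };
        assert_eq!(rc, 0);
        println!("TAI seconds since the epoch: {}", ts.tv_sec);
    }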

This solution would make computer time far simpler to handle in software, far more reliable for record keeping, the lot. Instead, we have a mishmash of what "time" is in software, with most being mere approximations to UTC, and most dealing badly with leap seconds. The mishmash came to a head a couple of years ago when it was realised that the world's "bodges" for dealing with things like leap seconds had finally gone irretrievably wrong, as the IERS raised the prospect of an anti-leapsecond (a negative leap second). Yes, the earth's rotation has got faster.

This has been a clumsy and ill-handled issue in the field of operating systems and software development for decades, and it's faintly ridiculous that it continues to be so.

Linus Torvalds affirms expulsion of Russian maintainers

bazza Silver badge

Re: Free Software should be neutral

Software doesn’t have a personality to want to do anything. It depends on what the project owners want.

The difficult choices come when a project decides that it cares more than trivially so about the motivations of developers.

It’s bad enough with just well-meaning developers, and there are vast suites of software to give control over what developers do and remove the need to trust (their abilities) absolutely. This software is called git and the associated services and tools. So OSS is already comfortable with the notion that one cannot absolutely trust all contributors to never make errors, and the need for strong controls being available to project owners.

Not trusting contributors’ motivations is also normal. There’s tons of ordinary criminal hackers out there to ward off, and no one thinks such warding off is a bad thing.

Obviously the idea that, because something is good and free it would always be universally venerated and left untainted by absolutely everyone, is very naive. Geopolitics is simply another unavoidable aspect of that.

bazza Silver badge

Re: How many western governments use Linux?

Quite possibly.

It's highly unfortunate that OSS is getting caught up in the geopolitics, but it was probably always inevitable. A really successful OSS project is always going to be one that's found widespread use - economically important use. And, whilst there's trouble in the world, its economic role is always going to make it of great interest to good and bad actors.

Or to put it more bluntly, there's nowt that catches crap as much as a brand new hat. It's pretty much true of any good thing.

It's difficult things like that that I really wish universities taught, or taught more thoroughly. For engineers, developers, etc to successfully find their way through the world, they need to be made thoroughly aware of the crap that'll come their way and how to deal with it (law, money, geopolitics, patent laws and their rights under those laws, etc). Engineers / developers who are less naive to the world's perils likely end up better off as a result.

bazza Silver badge

Re: How many western governments use Linux?

It's more significantly phrased as "how many western economies are dependent on Linux?". I'm thinking banks, businesses, etc.

All of them, basically, one way or other.

Remember the attempted attack earlier this year on OpenSSH, via a social engineering attack on a dependency (the xz compression library) that inserted a back door? That ought to have been a bit of a wake up call. OSS projects are especially vulnerable to such attacks, but proprietary code isn't immune. Moves such as this one by Linus are, to some extent, "security theatre", in that it's not a foolproof means of achieving the desired effect. If there are enough layers of "security theatre", it does become an impediment; just look at airport / aviation security...

Ultimately, code review - excruciatingly detailed code review (and review of the build system for the code) by a trusted person / team - is what does give strong assurance, accompanied by strong signing of the reviewed code to prove it's not been changed since. Though as everyone knows, that's precisely what very few people want to do.

It's also important to remember that - qualitatively, motivationally - there can be no difference between an exploitable bug and a deliberate back door. It's important with that in mind to maintain a balanced view as to who is and is not a bad actor (most bugs are just innocent bugs, probably)!

There are ways of addressing such things by changing choices of technology. There's been a plethora of attacks / vulnerabilities due to interfaces not validating data properly. HEARTBLEED springs to mind. Yet, we have plenty of technologies where interfaces can be wholly schema defined, machine built, with input / output validation automatically included. And then there's language; writing in Rust is clearly better (from a bug point of view) than writing in C/C++. OSS projects could make better tech choices to give themselves a chance of automatically avoiding mistakes, meaning that whatever review effort is available can be more efficiently used.

Boeing strike continues as union rejects contract, scuttling CEO's recovery plan

bazza Silver badge

They don’t need to buy Boeing. Embraer are perfectly capable of designing and building their own models and indeed are chatting to US airlines about a 737 replacement.

The problem with buying Boeing is that fixing it costs more than doing your own thing from scratch.

The problem for the US gov is that Embraer can do this, and Airbus can flex their muscles too, and the US exits the airline building business. The gov needs a plan to rescue or replace Boeing inside the USA that’s faster than Embraer and/or Airbus can act, and faster than the market will make those companies act. That is difficult to achieve.

Spectre flaws continue to haunt Intel and AMD as researchers find fresh attack method

bazza Silver badge

The proposition that there should be security cores, and that these are somehow carved off from the rest of the system, doesn't really work. As this article says, the technique used was happily able to access arbitrary kernel memory. Kernel memory is definitely something to protect (especially as some of it is what defines a process's run time privileges). But, you need tight coupling between applications and the kernel, because the kernel does so much for applications. All the IO, all the memory allocation, all the services provided by an Operating System. If the kernel is kept stuck on some sort of remote-ish or slow-ish CPU, application and system performance as a whole is going to be terrible.

The separation of cores is the way to go; it's just that we need to wean ourselves off SMP, and move to architectures more like, well, Transputers and Communicating Sequential Processes. Languages like Go implement this anyway (on top of SMP). That clear, physical separation of different cores and their memory from other cores and their memory is far easier to make "secure". Data is exchanged only by consent of the software running (and not simply by the CPU / caches / memory system because another process somewhere else has decided to try accessing that data). It's not perfect - what does one do about multiple processes on the same CPU? Transputers had an interesting hardware scheduler (not an OS scheduler), so maybe that's the approach to take (because it'd be deterministic, real time, and not influenced by software). It's likely an awful lot better than today's SMP. Unfortunately, it's a complete re-write of all software and OSes, and starting again on CPU architectures.

bazza Silver badge

Re: Isolation is hard

Er, the whole point of Spectre and Meltdown is that, yes, guests can contrive to get access beyond their bounds. This whole article is about just such a process gaining access to arbitrary kernel memory, which is pretty terrifying.

Tesla FSD faces yet another probe after fatal low-visibility crash

bazza Silver badge

Re: Gimme The Sensors!

>If a few are good, more must be better, and too many should be just about enough...

Not necessarily.

The trouble comes in the weightings one gives to each sensor: something like a LIDAR might, on the face of it, be really reliable, a RADAR less so. But none of the sensors is totally reliable, and a "good" one that gets fooled can outweigh the "less good" ones that are screaming "stop!!!".

There are a lot of similarities with biometrics: a mix of sensors of varying performance, and a conflict of requirements. A biometric system wants to be good at recognising the right person, and good at rejecting the wrong person. With classical, non-AI weighted combinations of sensor outputs, the maths lets you optimise for one requirement or the other, but there's always a trade-off between the two. And as all an AI system is really doing is adding things up with weights, it's conflicted in just the same way. If it comes to that, humans suffer the same problem.

And with FSD there are also conflicting requirements. The car has got to go when it can, but must stop when it should. You can't make it too keen to go or it'll be ignoring red lights, and you can't make it too hesitant either.
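To make the weighting point concrete, here's a toy sketch (the sensors, weights and threshold are entirely invented for illustration - no real FSD stack works off three numbers): one heavily trusted sensor that gets fooled drags the fused obstacle confidence below the "stop" threshold, so the decision comes out as "go" even though the other two sensors are objecting.

```rust
// Toy weighted sensor fusion -- the sensor names, weights and threshold are
// invented purely for illustration.
struct Sensor {
    name: &'static str,
    weight: f64,        // how much we trust this sensor
    obstacle_conf: f64, // 0.0 = "road clear", 1.0 = "definitely an obstacle"
}

/// Returns true if the weighted vote says it is safe to proceed.
fn safe_to_go(sensors: &[Sensor], stop_threshold: f64) -> bool {
    let total_weight: f64 = sensors.iter().map(|s| s.weight).sum();
    let fused_conf: f64 = sensors
        .iter()
        .map(|s| s.weight * s.obstacle_conf)
        .sum::<f64>()
        / total_weight;
    fused_conf < stop_threshold
}

fn main() {
    // The heavily trusted camera is fooled by glare; radar and ultrasound
    // are both fairly sure something is there.
    let sensors = [
        Sensor { name: "camera", weight: 3.0, obstacle_conf: 0.05 },
        Sensor { name: "radar", weight: 1.0, obstacle_conf: 0.80 },
        Sensor { name: "ultrasound", weight: 1.0, obstacle_conf: 0.70 },
    ];

    for s in &sensors {
        println!("{:<10} weight {:.1} obstacle confidence {:.2}", s.name, s.weight, s.obstacle_conf);
    }

    // Fused confidence = (3*0.05 + 1*0.8 + 1*0.7) / 5 = 0.33, below a 0.4
    // threshold -- so the decision is "go", despite two sensors objecting.
    println!("safe to go? {}", safe_to_go(&sensors, 0.4));
}
```

Tune the threshold down and the thing refuses to pull out of the driveway; tune it up and it sails through the scenario above. That's the keen-versus-hesitant conflict expressed in a single number.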

Tesla's FSD

The proof, as ever, is in the results. And I think it fair to say that Tesla's results are sub-par, and nowhere near being true FSD. Adding a LIDAR would probably help, but as Waymo in Phoenix know, that's not going to magically make everything work properly either.

Open source LLM tool primed to sniff out Python zero-days

bazza Silver badge

This vs Fuzzing

It's going to be interesting to see how this approach pans out compared to, say, fuzzing. Only the results that emerge will tell us that, so it's notable that they already have results with this setup.
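For contrast, a fuzzer knows nothing about what the code means; it just hammers an interface with semi-random input and watches for crashes. A hand-rolled toy of the idea is below (real projects would reach for cargo-fuzz, libFuzzer or AFL, which add coverage feedback and corpus management; the buggy parser here is invented purely as a target):

```rust
use std::process;

// The "target": a deliberately buggy parser that mishandles one input shape.
fn parse_header(input: &[u8]) -> usize {
    if input.first() == Some(&0x7F) {
        // Bug: trusts an index byte without checking it against the buffer.
        return input[input[1] as usize] as usize; // may panic: out of bounds
    }
    input.len()
}

fn main() {
    // Toy fuzz loop: feed pseudo-random bytes and see what falls over.
    let mut seed: u64 = 0x1234_5678;
    for iteration in 0..1_000_000u64 {
        // Cheap xorshift PRNG so the example has no external dependencies.
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;

        let input: Vec<u8> = seed.to_le_bytes().to_vec();
        let result = std::panic::catch_unwind(|| parse_header(&input));
        if result.is_err() {
            println!("crash found after {iteration} iterations: {input:?}");
            process::exit(1);
        }
    }
    println!("no crashes found (which proves very little)");
}
```

That brute-force blindness is fuzzing's strength and weakness: it finds bugs the author never imagined, but only in code paths the random inputs happen to reach, which is exactly the gap an LLM that can read the source is being pitched at.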

Microsoft crafts Rust hypervisor to power Azure workloads

bazza Silver badge

Are They Out To Eat Broadcom's Lunch?

Just wondering if this has the potential to grow into a viable alternative to VMware. There's not that much difference between a Type 1 and a Type 2 hypervisor, and it's not like MS hasn't got the right sort of resources to build a Type 1 on the back of this. And they've open sourced this one (thus far).

It could appeal to quite a few folk.

Critical default credential in Kubernetes Image Builder allows SSH root access

bazza Silver badge

Re: FFS

The thing that worries me about this kind of gross error is that it's highly unlikely to be the only such error. For something that should have been spotted in even the most cursory of reviews to have slipped through suggests there was no review at all.

Boeing again delays the 777X – the plane that's supposed to turn things around

bazza Silver badge

Re: Monopoly?

The US government may not have much choice in the matter.

Trust in Regulators

There are other problems for Boeing besides late delivery. The relationship between the FAA and the rest of the world's regulators took a serious knock over the 737 MAX crashes. Whilst the FAA has evidently been working hard to repair that international relationship, Boeing then proceeded not to fix its problems, and then let a door fall off an Alaska Airlines MAX. Since then, the FAA has been heavily involved in how Boeing is run. Certification flights for the 777X are the kind of thing that are supposed to commence only when everything else has been thoroughly gone over.

However, the thrust links broke, risking the engines departing the airframe. Technically, that counts as a "near miss", because these aircraft have flown to and done demo flights at air shows all over the world, all while the engines were not as thoroughly attached to the airframe as they should have been. That's basic stuff, something that should have been worked out and pummelled on the ground long before a prototype ever flew. And this, too, has happened on the FAA's watch.

FAA Could Lose Its Teeth

And now Elon Musk is talking about getting the Supreme Court to rein in the FAA and prevent it from fining companies. If he achieves that, the FAA is in effect no longer a regulator; it cannot force any company to do anything, as there are no actual sanctions it can apply. For example, if the FAA withdrew Boeing's manufacturing certificate, Boeing could say "we're carrying on anyway", and the FAA would have no means of making that choice painful for Boeing to pursue.

Dilemma for Overseas Regulators

So the question overseas regulators are still having to ask themselves is, is the combination of the FAA and Boeing capable of delivering safe aircraft? The answer is most definitely not an unequivocal "yes". That then puts the regulators like EASA, CAA, CAAC, in an awkward and difficult position.

To allow Boeings to continue flying over their territories, the regulators' staff are themselves taking a personal risk. The buck really does stop with them, and them alone. If they're accepting FAA certifications when there's clear evidence that Boeings are not up to standard, there's the prospect of a negligence case being brought against them. One way forward would be for, say, the EASA to insist that Boeing certify its aircraft through the EASA directly. That would kill Boeing, because the certification costs of its aircraft would suddenly be doubled.

Impact on Boeing's Order Book

So the scene is set for Boeing to lose its international market, if the company / FAA continue to let slip-ups through to flying aircraft. If that were to happen, the majority of Boeing's order book evaporates. And there's no way the US gov could keep the civil part of BCA operating whilst being able to fulfil only US domestic orders.

There's nothing the US gov can do about that. If the EASA says "the game's up" to the FAA, the US gov cannot order, oblige, instruct, coerce or influence the EASA to change its mind. The EASA is an EU agency, and there are no real diplomatic means of applying pressure to it.

Political Failings

This is all the result of US politicians of both parties having, over the decades, denuded the FAA of the resources necessary to keep control of a company run by MBAs who have no inkling that regulatory compliance is a market expander, not a cost to be minimised. It wouldn't have been a problem had the motivations of the company management been benign and naturally compliant. But they weren't.

The warning signs have been there for at least 2 decades, but the politicians didn't listen to advice.

If the US government wants a guarantee that the USA will remain in the airliner design and manufacturing business, it either needed to have taken control of Boeing 15 or 20 years ago, or it could today ask Airbus to become a global monopoly with a large presence in the USA.

Industry Will Not Wait for Government

Even here the US government has comparatively little influence. The types of company that supply both Boeing and Airbus are already talking to Airbus, and it's no secret that some airlines would order even more Airbuses if there were even the remotest chance of Airbus being able to build them. They're doing this because, at the moment, Boeing aren't ordering anything (whilst the strike is on), and the suppliers are desperate to sell product. Airbus could be driven into being a monopoly by the rest of the market.

In these circumstances, the US gov would have to hope that some of Airbus ended up remaining in Mobile, Alabama.

You're right not to rush into running AMD, Intel's new manycore monster CPUs

bazza Silver badge

Re: Many cores on power-limited package = poor single-thread performance?

These very high core count CPUs have become possible simply because the silicon process used to manufacture them lays down very power-efficient transistors. The result is a lot of cores that can all run at once, somewhere near (or at) full bore, while producing only around 200 watts of heat.

It's also allowed more things - memory controllers, caches - to be integrated on the same die(s) to help keep the cores fed.

Both Intel and AMD have been pretty successful at judging a good balance between thermals, core count, cache spec, memory bandwidth, etc for the "average" compute workload, with AMD benefitting significantly in this quest for balance thanks to TSMC's very good silicon process.

It's a good question: isn't this what GPUs are for? Well, there's the already-given answer that GPUs are good at vector processing (and so not well suited to general-purpose compute). But CPU cores these days are also pretty well equipped with their own vector (SIMD) units, with extensions like AVX-512. It's not clear cut that GPUs always win on vector processing.
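As a rough illustration (a toy, not a benchmark, and the build flags mentioned are only an example): a plain element-wise loop like the one below is exactly the shape compilers auto-vectorise onto AVX2/AVX-512 registers when built for a suitable target CPU, so "vector processing" no longer automatically means reaching for a GPU.

```rust
// Toy CPU vector workload (a classic "saxpy": y = a*x + y). Loops of this
// shape are what auto-vectorisation targets; built with something like
// `RUSTFLAGS="-C target-cpu=native" cargo run --release`, the compiler can
// map it onto wide SIMD units -- no GPU, no PCIe copy, the data never leaves
// host RAM.
fn saxpy(a: f32, x: &[f32], y: &mut [f32]) {
    for (yi, xi) in y.iter_mut().zip(x.iter()) {
        *yi = a * *xi + *yi;
    }
}

fn main() {
    let n = 1 << 20;
    let x = vec![1.5_f32; n];
    let mut y = vec![2.0_f32; n];
    saxpy(0.5, &x, &mut y);
    // 0.5 * 1.5 + 2.0 = 2.75 for every element.
    println!("y[0] = {}, y[{}] = {}", y[0], n - 1, y[n - 1]);
}
```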

CPUs are very well suited to stream processing. GPUs typically have to be loaded with data transferred from CPU RAM via PCIe, the GPU then does its number crunching (in the blink of an eye), and then the result has to be DMA'ed back to CPU RAM in order for the application to deal with the result. The load / unload time is quite a penalty. Whereas one can DMA data into a CPU's RAM whilst the CPU is busily processing data elsewhere in its RAM. Provided the overall memory pressure fits within the RAM's bandwidth, the CPU can be kept busy all the time. This quite often means that the GPU isn't the "fastest" way of processing data.
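A sketch of that overlap (a plain thread stands in for the DMA engine or NIC, and the block sizes are invented): the ingest side keeps the next blocks arriving while the compute side chews through the previous one, so the cores rarely sit idle waiting for data.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of stream processing with compute overlapped against "ingest".
// In a real system the producer would be a DMA engine, NIC or io_uring
// completion queue; a thread generating blocks stands in for it here.
fn main() {
    // Bounded channel: at most two blocks in flight, i.e. double buffering.
    let (tx, rx) = mpsc::sync_channel::<Vec<f64>>(2);

    let producer = thread::spawn(move || {
        for block_no in 0..16 {
            // Pretend this block arrived from disk or the network.
            let block = vec![block_no as f64; 1 << 16];
            tx.send(block).expect("consumer hung up");
        }
        // Dropping tx closes the stream.
    });

    // The "compute" side processes block N while block N+1 is being fetched.
    let mut grand_total = 0.0;
    for block in rx {
        grand_total += block.iter().sum::<f64>();
    }
    producer.join().unwrap();

    println!("processed stream, grand total = {grand_total}");
}
```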

A good example is the world's supercomputers: machines such as Fugaku and the K computer, which are purely CPU based, often achieve sustained compute performance close to their benchmark scores; they cope well with data streams. The GPU-based supers are also good, but only for problems where you can load the data and then do an awful lot of sums on it before moving on through the input data set.

This is why NVidia have NVLink, to help join networks of GPUs together without total reliance on CPU hosts doing the data transfers for them.