* Posts by thames

1410 publicly visible posts • joined 4 Sep 2014


Atomic Britain: UK plans regulatory reset to boost nuclear power

thames Silver badge

Re: "Planning"

One common example in the US is that the main train station in New York City would not meet the radiation limits required for a nuclear power plant. The building is made of granite and so is naturally radioactive to a degree higher than allowed for a nuclear power plant. This is apparently OK for a public building that millions of people go through, but not for a power plant.

There are probably plenty of historic buildings in the UK in a similar situation. Just look for anything made of granite.

thames Silver badge

Re: Half Right

Going by the timing, I suspect this is part of the list of requirements for rolling out the planned Rolls Royce Small Modular Reactors. There's no point in designing reactors which can be built and installed more quickly if the planning and consent process still drags on for years.

Nanny state discovers Linux, demands it check kids' IDs before booting

thames Silver badge

This is already part of Linux and has been for years. It's called OARS, the Open Age Rating Service. It's used by Gnome Software, KDE Discover, and Flathub. It just doesn't become active until you install parental controls (provided by libmalcontent and malcontent-control). The information is stored in /var/lib/AccountsService/users/${user}.

I haven't tried it myself.

thames Silver badge

What is the difference from existing Parental Controls?

"Linux" already has this in the form of Gnome Parental Controls. I'm not sure what is needed beyond perhaps changing the installer to ask if you want to include this optional package in the installation instead of you adding it later.

I've never tried it, but once you install it, it adds options to the user account setup for all the age-related things a computer can realistically do.

So the parent installs Linux and is root and then sets up user accounts for each child as appropriate. Job done.

As for applying age related restrictions to someone with root access, good luck with that one. By definition, root can do anything, including changing or replacing parts of the OS.

RSS dulls the pain of the modern web

thames Silver badge

Re: How to find the Register's RSS feeds?

Perhaps what The Register ought to be doing is to put the RSS feed symbol beside the social media options like a lot of web sites do. Considering the target audience of this site, hiding the RSS feed amounts to shooting yourself in the foot when it comes to attracting and maintaining readership.

thames Silver badge

Re: BBC repetition

I'm not sure how the BBC does it, but the CBC will re-post a story if it has been updated.

The CBC also offer multiple feeds categorized by subject, and the same story can appear in several of them, though only once per feed (it's possible that my feed reader is handling this).

My feed reader (Liferea) allows me to put related stories in folders, and at the folder level only one copy of the story will be shown even if it is in multiple feeds. When I mark it as read, it is automatically marked in all of the feeds in that folder and so disappears with one click.

You may wish to try looking at different feed readers to see how they handle this situation. A lot of feed readers seem to only be intended to handle a small number of feeds. If you are using Linux, I can strongly recommend Liferea. There's really no alternative that I have seen which has the same depth of capability.

thames Silver badge

Re: Thanks to El Reg for the RSS

I've been using Liferea on Ubuntu for as long as I can remember. I can recommend it very strongly. I dedicate one workspace to it and run it full screen all the time. I cannot imagine doing without it.

TerraPower gets permission to build, not operate, sodium-cooled nuclear reactor

thames Silver badge

Re: Natrium????

It's just product branding. They can't trademark the word "sodium" in connection with nuclear reactors as it has been used in connection with reactors for decades. They had to come up with something else, and this was it.

thames Silver badge

Re: 60%

Maybe TerraPower should offer to buy it.

thames Silver badge

They've been held back by the lack of fuel. The only commercial source of the special fuel was Russia, and that ended up off the table due to the Ukraine war. As a result, they needed to develop a source in the US, and that has taken a long time.

The requirement for special fuel is one of the major drawbacks of their design.

Meanwhile competitors based on conventional designs are already under construction and will use standard commercial fuel and cost less to build and have multiple utilities in several countries signed up as customers.

TerraPower meanwhile has plans for a reactor in Wyoming and another for Facebook, an unproven design, and a proprietary fuel. It's hard to see just what attraction TerraPower's design really offers to utilities.

I suspect there is a very strong chance that the reactor will never be completed.

Brit dual nationals grounded by border digitization drive

thames Silver badge

UK voters complain about immigration, so the UK government respond by making it more difficult for British citizens to enter the country. That's a rather novel way of doing things.

thames Silver badge

There are well over half a million UK citizens in Canada, and it's only just hit the news here. The UK have done virtually nothing to publicize this so far as I can see. It has been an utter shambles.

Apparently the UK will now (it's just been announced) allow New Zealand-UK dual citizens to get a life-long stamp in their New Zealand passports which covers all of this. So it's not like there is any sort of technical reason for their decision. If they had simply done this for countries which have a lot of UK dual citizens (e.g. Canada, Australia) who visit the UK regularly as tourists, and publicized it adequately, then the size of the problem would be far smaller.

Google Antigravity falls to Earth under OpenClaw-fueled compute load

thames Silver badge

When the shoe is on the other foot.

Let's see, AI companies acting as "agents" accessing other services on behalf of their clients is a good thing and is the future of the world.

But when other people want to have software act as agents to access AI services on behalf of their clients or users, then that's "malicious activity".

It's fascinating how the whole AI gambit revolves around people not doing to the AI companies what the AI companies say they have a right to do to everyone else.

This is yet another sign that the whole industry as it exists now is built on sand.

6,000 execs struggle to find the AI productivity boom

thames Silver badge

1.4 percent is statistical noise.

There was a famous experiment (the Hawthorne lighting studies of the 1920s) in which an industrial engineer was trying to measure the effects of changing factory lighting on worker productivity. However, he could not find an optimal lighting level. There seemed to be no pattern to how the inputs affected the outputs.

Eventually he figured out that any change produced a temporary increase in productivity for psychological reasons. However, that increase faded away and returned to its original level after a period of time as people became used to the new conditions. Within certain reasonable limits, the actual lighting level had no effect.

This is an example of how productivity measurements need to have a scientific basis or they're worthless. The exception is when the productivity increases are so large and unmistakable that there is no question there has been an improvement. We're not seeing that with the published AI data. It's down in the statistical noise.

I have a long background in factory automation and every productivity project that I've worked on had a clear and measurable goal measured in terms of actual money, and success was based on meeting targets. If productivity increased, but not by enough to meet the minimum return on investment criteria, then the project was considered a money-losing failure.
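That pass/fail gate is simple enough to sketch in a few lines. The numbers and the 25% threshold here are made up for illustration; the point is that a project only succeeds if the measured saving clears a minimum-return bar, not merely because productivity went up at all.

```python
# Hypothetical ROI gate: a productivity project "succeeds" only if the
# measured annual saving clears a minimum return-on-investment threshold.

def project_passes(annual_saving: float, project_cost: float,
                   min_annual_return: float = 0.25) -> bool:
    """True if the project meets the minimum return-on-investment bar."""
    return annual_saving / project_cost >= min_annual_return

# A process that got 1.4% faster but cost a lot to automate fails the gate:
print(project_passes(annual_saving=14_000, project_cost=1_000_000))   # fails
print(project_passes(annual_saving=400_000, project_cost=1_000_000))  # passes
```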

I haven't been seeing this rigorous measurement with AI. People are throwing AI at their offices with no clear idea of what they hope to achieve or any rigorous method of measuring it. It's farcical.

thames Silver badge

Re: remember six sigma ?

Or more likely, the 10 percent of businesses that say they had an increase in productivity don't have a good way of measuring productivity gains from AI, and if they could measure them they would find that they weren't seeing any more improvement than the other 90 percent.

The first thing you have to do before you can measure an increase in productivity is to be able to measure current productivity. Most businesses can't do that with any degree of accuracy when it comes to white collar office jobs. A lot of companies seem to be relying on surveys of employees to ask them if they "feel" more productive. That is such an unscientific measure that it's laughable.

A 1.4% increase in productivity for office jobs is quite frankly statistical noise.

If there were any genuine success stories the management consultancies would be publishing loads of case studies showing double digit productivity increases year after year as AI reaches ever further into businesses. We're not seeing them. The only scientifically based studies that I have seen are showing no or even negative productivity effects.

The reason that LLM AI is being called a bubble is not because the technology doesn't work at all. It's being called a bubble because the useful results do not justify the amount of money being sunk into it. The mountain has groaned and brought forth a mouse.

Amazon-backed X-Energy gets green light for mini reactor fuel production

thames Silver badge

Re: hydrogen for helium gas

Gas cooled reactors typically use helium because it is non-corrosive and has good heat transfer properties. High temperature hydrogen would be fairly corrosive I would imagine.

The UK were the first country to commercialize gas cooled reactors on a big scale with the Magnox (the first commercial nuclear reactors to supply power to the grid) and AGR designs. These used CO2 because at the time the US had a monopoly on helium production and it was seen as being too risky to rely on them for a supply. There are of course more commercial sources of helium now so that is less of an issue.

thames Silver badge

Not quite

El Reg said: " Amazon became one of the first to embrace the yet unproven tech in late 2024 ..."

The US are probably a couple of decades behind the Chinese in terms of commercializing this technology.

X-Energy's specific design may be unproven, but the basic technology is far from new. X-Energy's reactor is based on 1960s era German technology. Two different pebble bed reactors were built in Germany and operated from the 1960s to the 1980s. The first was a 15 MW prototype, the second was a 300 MW commercial demonstration reactor. We would call this an SMR now; power plants were to be built from multiple small reactors. The 300 MW design used a uranium-thorium fuel mixture by the way.

At the end of the 1980s Germany pulled the plug on development (largely due to anti-nuclear sentiment) and the tech was licensed to separate teams in South Africa and China.

Eventually South Africa pulled the plug. X-Energy then hired the development team and so the US reactor is based on the South African derivative of the German technology.

Meanwhile development proceeded in China, initially with help from the German developers, and they built their first prototype in the 1990s. They continued work on this and in the 2010s started commercialization. There is now a power plant in commercial operation in China based on this technology. This is the SMR which people refer to as being one of the first SMRs to go into operation anywhere. It uses two 100 MW reactors driving a common 210 MW steam turbine.

The Chinese plan to use this design to replace coal fired power plants in the interior. For plants powering the major cities on the coast however, they plan to stick to conventional PWR based designs. I suspect this split is either related to availability of cooling water or simply reflects the relative size of electric power demand in different areas.

So what X-Energy are trying to do is not exactly new. The big questions revolve around whether it can compete with water based reactors given its higher costs for its complex fuel.

There is also the issue of where they will get the uranium from. They use uranium enriched to 15.5%, as opposed to less than 5% for normal commercial fuel or 8.5% as used in the Chinese SMRs. The US had been importing this from Russia, and efforts to start production in the US have not been going well with a huge backlog in orders. X-Energy may be able to build the reactors, but deployments may be limited by availability and cost of their proprietary fuel.

How AI could eat itself: Competitors can probe models to steal their secrets and clone them

thames Silver badge

Re: Pot, kettle, all black

The entire Internet is having to implement measures to try to stop the American AI companies from stealing all their stuff and grinding their servers into the dust in the process, but that's OK.

But if someone does the same to them, then that is bad, really, really bad and it must be stopped.

I have zero sympathy for them.

US is moving ahead with colocated nukes and datacenters

thames Silver badge

Re: 50% efficiency

It outputs 200 MW of heat, which it converts to 60 MW of electricity. This is 30% efficiency, which is below that of typical reactors, which are in the mid thirties range. This is dictated by the operating temperature of the steam cycle. You need much higher temperatures to get more efficiency. I think the maximum possible efficiency of a Rankine steam cycle is somewhere in the low to mid forties, and you need very high temperature steam for that. Fuel costs are not a big proportion of typical nuclear plant operating costs, so there isn't a lot of incentive to push the engineering limits here.

It also outputs 60 MW of cooling. However, they are very vague about this, so I would hesitate to speculate on what they may mean here, as they don't show any direct connection between the refrigeration plant and the reactor or steam system.

Overall though, the basic concept is fairly conservative. It's a scaled down PWR using normal commercial fuel enriched below 5%. The notable features are its size and the long time between refuelling (about 6 years).

With SMRs there is a balance between economies of scale and economies of replication. That is, there are savings to be made from greater size, but these must be balanced against savings to be made from serial factory production of smaller units. Most engineering and economic studies suggest the trade off point is at about 300 to 500 MW or so.

The problem these data centre specific reactors may face is they may be too small to hit the minimum operating cost point and so face higher electricity costs than data centres which locate in places where they can buy electricity from the grid.

Of course, once the AI bubble pops all of this will be a moot point.

AI agent seemingly tries to shame open source developer for rejected pull request

thames Silver badge

I'll bet it's a public relations gimmick

I bet the real story is that there is a human behind it and this is some sort of publicity gimmick intended to promote someone's AI product. Push some slop at a project, get rejected, and then direct the AI bot to respond with abuse towards the maintainer in order to give the illusion that an AI bot is just like a real person. Just look at the name - "crabby rathburn" - they're not even trying to hide it.

The intent is probably to create a narrative that "AI bots are real people" in order to sell people on the idea that they are ready to fill the same roles as real people.

It's all too easy for someone to direct their bot to do something that is normally unacceptable and then to say "but it wasn't me it was the AI bot that did it".

If a bot does something unacceptable, then ban the bot from your projects forever. If you can find out who is behind the bot, then ban all of the bots associated with those people forever, and ban the people themselves for as long as you feel justified. People are only going to behave if they face consequences for their actions.

Anthropic's Claude Opus 4.6 spends $20K trying to write a C compiler

thames Silver badge

Yes, the actual blog post mentions that they relied upon having a very extensive and high quality set of tests developed by other people. They also used GCC extensively so that when the output of their own compiler didn't work they could compile most of it with GCC and a small part of it with the AI compiler and do this repeatedly to narrow down where the problem was.

So in order to use this methodology, you need to have a very complete set of known good high quality tests and a high quality known good compiler. Once you have those you can then create a poor quality compiler using AI.

The author said he was doing it to try to push Claude Code to its limits to see where it would fail.

In terms of developing practical applications however, it's fairly useless unless your goal is to clone a project which has very comprehensive and high quality tests and which allows you to slowly substitute the AI's work in place of the original so you can incrementally find the failure points.

There's not a lot of software out there that meets those criteria and it's of no help at all in terms of creating genuinely new software. It's basically a tool for cloning existing software under a new license while possibly skirting around copyright law.

Britain courts private cash to fund 'golden age' of nuclear-powered AI

thames Silver badge

AI companies will talk about using nuclear, but actual SMR projects getting built so far are mainly being driven by net-zero targets. Utilities have been running the numbers, and the only viable way of reaching those targets is with nuclear powered base load.

There are four 300 MW SMRs currently under construction just east of Toronto, and the first one will be in service by the end of the decade. This is utility investment to meet general load growth.

Next-gen nuclear reactors safe enough to skip full environmental reviews, says Trump admin

thames Silver badge

Re: Highly enriched fuel

Civil grade uranium is limited to less than 20% enrichment. At 20% and above, this is considered to be military grade material and is more highly restricted. An actual bomb needs somewhere above 90%, the exact figures being a bit hard to come by.

This is why the highest enrichment level you see on civilian reactors, except for a handful of small, older research reactors, is less than 20% (e.g. typically 19.75%).

Typical light water moderated commercial power reactors use 3% to 5% for economic reasons. The third most common reactor type after PWR and BWR uses natural uranium at 0.7%. You don't need to enrich the uranium if you use a very efficient reactor design with a good moderator.

Plutonium has similar limits based on the different isotopes. The plutonium in spent fuel from commercial reactors is not suitable for making weapons as it does not have a high enough level of Pu-239 and unlike uranium, it is not practical to enrich plutonium. It can however be used as reactor fuel and there are a number of different fuel recycling technologies, depending on the reactor type it is to be used in. Weapons material is made in military reactors designed for the purpose. The first few Magnox reactors were actually military reactors which produced electric power as a byproduct.

thames Silver badge

Re: Dammit, I hope they choose the right location...

They are to be built at US nuclear research sites, which were mainly involved in the development and testing of nuclear weapons.

They will still undergo review. There are three categories of review, and "categorical exclusion" is one of them. Essentially, the Department of Energy will review the design and if they determine that the risk of the release of hazardous materials is small enough, then they don't have to go through the same review process used when it is assumed that hazardous materials may be released.

thames Silver badge

Re: Highly enriched fuel

That is nonsense. The first Magnox natural uranium reactors output 60 MW of electrical power. This is in micro-reactor range, never mind SMR (the Rolls Royce SMR is close to 500 MW of output) and definitely not "large".

The first Canadian power generating reactor was the NPD-2, built as a demonstration plant. It used natural uranium and generated 20 MW of power. This was the prototype for the CANDU series of natural uranium reactors.

French nuclear submarines use uranium enriched to 7%, which is not far above commercial power reactors. American submarines may use highly enriched uranium, but there is no technical reason relating to reactor size requiring it.

Most SMR designs use normal commercial fuel such as is used in larger reactor today.

Some SMRs use uranium enriched to just below 20% (which is still far below bomb grade). This is generally done in order to make the reactor smaller as the more concentrated fuel will output the same power in a smaller volume. This is typically done so they can transport the reactor in one piece within normal shipping dimensions.

GitHub ponders kill switch for pull requests to stop AI slop

thames Silver badge

AI Slop Jockeys versus reality

On the one hand we have the AI slop jockeys telling us that this very thing is successfully used in proprietary code bases where we can't see the results.

On the other hand, when the evidence is openly and transparently visible, 90% of what AI does fails to meet even minimal standards and people are asking for ways to stop being inundated with AI slop.

Something is not adding up, and I suspect it's the people who are doing things behind closed doors who are the ones who have something to hide.

Sword of Damocles hangs over UK military’s Ajax as minister says back it or scrap it

thames Silver badge

Re: Additional reading

Whole Fleet Management is not working out well with other vehicles in terms of getting enough crew training, but has little to do with the problems which AJAX is having. Blaming H&S problems and manufacturing defects on WFM is quite frankly grasping at straws.

So WFM should go and crews should get assigned vehicles, but that won't do anything with respect to the problems with AJAX.

thames Silver badge

Re: Additional reading

The crews aren't spending a lot of time in their vehicles because of the constant problems with them. If you buy a brand new car and it's spent most of the time since then in the shop for repair, the problem isn't that you aren't driving it enough.

thames Silver badge

Re: Putting lipstick on a pig

AJAX is based on ASCOD and is built in Spain, with the interior being fitted out in Wales. So the ASCOD option is what is not working.

The US are in the process of replacing their M3 Bradley, with the decision being either the German Lynx, or what amounts to AJAX with a bigger gun. It would be very expensive and pointless to buy in on a platform which is nearing end of life. If you decide to just buy whatever the US decide to buy, that may end up being basically AJAX if they pick the US company (GD) over the German one (Rheinmetall).

The only realistic solutions are as stated in the article, CV90, Lynx, or a version of Boxer. The Germans already have a reconnaissance module (body) for Boxer, so that's off the shelf.

Future of UK's multibillion Ajax armored vehicle program looks shaky

thames Silver badge

Re: old boy network rides again

The "old boy network" decision would have been to buy the CV90, owned by well established British defence contractor BAE and built in their plant in Sweden. However, there was a perception within the MoD that too much work was going to one company so for AJAX the attitude, according to insiders, was "anyone but BAE".

As a result it went to General Dynamics, an American company without a strong foothold in the British armoured vehicle defence market, and who would build the vehicle in their plant in Spain.

This fiasco is very much the product of "the new boy" in the British defence market.

thames Silver badge

Re: The end is near for the AJAX armoured vehicle

AJAX is built in Spain by an American defence company, General Dynamics. It's given a final fitting out of the interior in a GD owned plant in Wales, but the problems are believed to be related to poor design engineering, appalling quality control of the hull and mechanical bits, and foot dragging inaction by GD before the vehicles reach the UK. There's not a lot of "UK defence manufacturing industry" involved in this.

AI hasn't delivered the profits it was hyped for, says Deloitte

thames Silver badge

Re: 40% of the workforce are paying attention & give a damn

I talked to someone in the banking industry recently. The bank uses AI to help compose emails. The AI pulls some templates on things the bank wants to sell customers at that time and knits them together, she modifies the results to suit what she thinks that particular customer should see, and then she sends the email. I imagine she's pretty typical of what people are using AI for in business. As to whether she's part of the 60% who use it "daily" or part of the 40% who use it less than daily isn't something that we got around to discussing. It's nice to have, but it doesn't really make a great deal of difference to what she does.

She (the banker) said by the way that the current market is in an AI bubble and people are going to get burned very badly if they don't recognize that.

I can remember when we first got access to Internet email at work. It made a huge difference to our ability to communicate with suppliers and partners around the world and increased the whole pace of business. Everyone was all over it as soon as we were allowed access to it and it was a real source of contention over how long it was taking the company to implement it (they had foolishly locked themselves into a proprietary service that they had to get out of).

I'm seeing nothing like that with AI in any of the companies that I have talked to about it. It's all a case of people saying they use it, but it doesn't make any real difference in terms of getting useful work done any faster or better.

thames Silver badge

I find it very difficult to believe that a manufacturing company with 7,000 suppliers didn't have an ERP system that did this already. Monitor stock levels and re-order when they fall below projected thresholds. This has been around for decades.
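The reorder-point logic in question really is decades old and fits in a few lines. A minimal sketch, with hypothetical quantities and a made-up "order up to" policy; a real ERP system would fold in lead times, demand forecasts, and safety stock per supplier.

```python
# Classic reorder-point logic: order when projected stock (on hand plus
# already on order) falls to or below a threshold, topping up to a target.

def reorder_qty(on_hand: int, on_order: int, reorder_point: int,
                order_up_to: int) -> int:
    """Quantity to order now, or 0 if projected stock covers the threshold."""
    projected = on_hand + on_order
    return max(0, order_up_to - projected) if projected <= reorder_point else 0

print(reorder_qty(on_hand=40, on_order=0, reorder_point=50, order_up_to=200))    # 160
print(reorder_qty(on_hand=120, on_order=50, reorder_point=50, order_up_to=200))  # 0
```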

And out of all the possible customer uses, this is the one they seek to highlight as demonstrating the awe inspiring power of AI? This is beyond parody.

Nvidia leans on emulation to squeeze more HPC oomph from AI chips in race against AMD

thames Silver badge

I'm not really sure what your point is. As the article states, IEEE floating point exceptions are designed to work a very specific way, and common algorithms are often designed with those methods in mind. Many times those algorithms rely on floating point errors flowing through to the end rather than checking after every operation. You can count on 1.0 + NaN producing a result of NaN, so you can check for NaN later rather than beforehand if that gives better performance.

If an emulation of floating point math works differently however, such that 1.0 + NaN produces a result of something other than NaN, then many well proven math libraries and algorithms may not work correctly with it in terms of error handling.

If you have an algorithm which requires checking at certain steps in the process, then that's part of that algorithm. However, you will still have the problem that if the emulation system doesn't behave the way you expected then you can get errors that your error checking doesn't know how to look for.

Hence my conclusion Ozaki will only be useful in very specific hand coded libraries handling very specific algorithms for very specific applications.

thames Silver badge

Re: "By the 1980s, FPUs were becoming commonplace"

Intel used to sell a separate math coprocessor floating point accelerator chip which you could buy and install in your PC. So the 8086/8088 for example had a corresponding 8087, and the 80286 and 80386 had corresponding 80287 and 80387 math chips.

The big market for the 8087 was people running Lotus 1-2-3. Applications had to be written specifically to recognize that the 8087 was present and to make use of it, and Lotus 1-2-3 was one of the few which did. Since Lotus 1-2-3 completely dominated the business spreadsheet market, and since spreadsheets were one of the main uses for PCs, the Lotus market was closely associated with 8087 sales. If I recall correctly, you could buy a package which included both Lotus 1-2-3 and an 8087 chip together. I don't know if that was direct from Lotus however, or if it was something that distributors put together.

I was under the impression though that the 80486 had floating point math built in as standard. The 486SX actually disabled the on chip floating point unit so they could sell it at a lower price without affecting sales of the higher priced standard 80486. If you then bought a 487SX and installed it later, it actually was a full 486 chip which disabled the 486SX and took over all of the CPU duties.

thames Silver badge

Being compliant with things like NaN (Not a Number) and +/- infinity is actually pretty important with floating point. I have done a fair bit of work with floating point SIMD (CPU based, not GPU) on large arrays of data and the "proper" way to deal with errors in most cases is to let them flow through to the end and check for them then rather than to check as you go along. NaN and infinity handling is designed so that once you get one of them as a result it continues to propagate through the math. Doing the check at the end results in insignificant error checking overhead, while doing it as you go along results in a lot of overhead and a significant performance hit.
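The check-at-the-end pattern can be sketched even without SIMD, since NaN propagation works the same way for scalar IEEE 754 math. The functions here are illustrative, not from any real library:

```python
# NaN propagates through IEEE 754 arithmetic, so one scan after the whole
# pipeline replaces a validity check at every intermediate step.
import math

def normalize(values):
    """Divide each element by the total. A NaN anywhere in the input makes
    the total NaN, which then flows through every division."""
    total = math.fsum(values)
    return [v / total if total != 0.0 else float("nan") for v in values]

def pipeline(values):
    scaled = [v * 2.0 for v in normalize(values)]
    shifted = [v + 1.0 for v in scaled]  # 1.0 + NaN is still NaN
    return shifted

result = pipeline([1.0, float("nan"), 3.0])
# One check at the end catches an error introduced anywhere upstream:
print(any(math.isnan(v) for v in result))  # True
```

The same one-pass final check is what makes the SIMD version cheap: `isnan` over the output array once, instead of branching on every element of every intermediate array.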

What this means is that if you emulate floating point, if you don't handle NaN and infinity the same way as is "normal", people may have to come up with entirely new algorithms at the application level. It also means that well proven math libraries may work most of the time under emulation, but give incorrect results for edge cases. Figuring out what those edge cases are is non-trivial once you are dealing with applications rather than benchmarks.

The performance advantages of having errors flow through in a predictable manner are so great that as I understand it, some of the hardware people at CPU companies are talking about introducing similar features for integer math to deal with overflow, although I have no idea how this would work. Traditional simple overflow trapping apparently is somewhat problematic with instruction pipelines, instruction re-ordering, and SIMD.

As for this emulation system, I can't realistically see it being used outside of very specific hand coded libraries handling very specific algorithms for very specific applications. They're really competing against SIMD, and the latter has been getting better as well.

Hyperscalers, vendors funding trillion dollar AI spree, but users will have to pay up long term

thames Silver badge

Re: "I have another 20 years to monetize that customer,"

You're talking about hardware costs. He is talking about service contract lock-in. He is assuming that LLM AI will increase vendor lock-in. Salesforce hope to be able to increase prices at will for locked-in customers, who will find it very difficult to duplicate those custom features with another vendor.

Have a look at VMWare's business strategy, which is squeeze locked-in customers until their pips squeak. That's the plan for the AI business model.

How CP/M-86's delay handed Microsoft the keys to the kingdom

thames Silver badge

Re: Siemens S5-DOS/MT

Siemens was and is a big company. They made everything from mainframes to nuclear reactors. One of their biggest divisions was and is industrial controls, where they were the world leader.

There are various sources of information on line, but I'll stick to the known safe ones (as opposed to ones offering who knows what in terms of downloads) such as Wikipedia.

Here's an article on the Simatic product line, which was Siemens' name for their programmable industrial controls.

https://en.wikipedia.org/wiki/Simatic

Here's a photo of the PG 675 computer which is better than the PG 685 used in the article, even if it is a bit grubby. "PG" was Siemens' term for the programming computer.

https://commons.wikimedia.org/wiki/File:Siemens_Simatic_S5_PG_675.jpg

In the main article, scroll down to the section on "Step 5". It mentions there that the OS for the PG 630 was CP/M.

If you scroll down to "History of STEP5", there is a table which shows that from v1.0 to v1.4 it ran on an unspecified version of CP/M. From 2.0 to 3.2 it ran on CP/M-86. Version 6.3 ran on MS-DOS on the PG750. I think the table is not complete, so there may be other version numbers.

I know however that you could also run STEP5 on an ordinary laptop under MS-DOS (or Windows 95). I also recall that the earlier versions ran on some sort of CP/M emulator in order to run on MS-DOS, but I don't know the details. The very late versions (I think somewhere around version 6 or 7) were ported directly to run natively on MS-DOS.

As for why Siemens used CP/M, when they first came out with the software MS-DOS didn't exist yet. CP/M on the other hand was the de facto standard operating system for business microcomputers.

As for the PGs themselves, you will notice in the photo that they have an extra row of function keys below the standard F1 to F8. These have special symbols which are used by the STEP5 programming software. You will also notice a deep socket to the left of the floppy drives labelled "module". There is a corresponding socket on the photos of the S5-95U and S5-103U CPUs. This was for a ROM module which you could burn so you didn't need a backup battery to keep the program in RAM. Most people didn't bother with the ROM module and just changed the battery every year.

The PDF that you linked in the story for Siemens S5-DOS/MT is actually for the MS-DOS version of the programming software. I don't know if at this point it was running under an emulator or was a native MS-DOS port, but the host OS in that manual is definitely MS-DOS. If you go to chapter 5 it talks about the included utilities for reading and writing "PCP/M" floppy disks so you could exchange data between older PGs (which used CP/M) and newer ones running MS-DOS. I suspect these are licensed third party utilities. The manual by the way has a very good explanation of how to optimize memory management for MS-DOS. Siemens manuals from that era were excellent in terms of providing technical detail even if they did have a tendency to use their own names for things.

As for whether Siemens also sold FlexOS, I wouldn't dispute that. There are lots of different applications in industry, and you can outfit pretty much your entire plant with just Siemens control kit. That however would have been used in some sort of dedicated application rather than the type of programming PC such as I was describing.

They currently sell their own industrial Linux distro, based on Debian, to fill that niche.

https://support.industry.siemens.com/cs/document/109988870/simatic-industrial-os-4-2-%E2%80%93-the-operating-system-for-applications-in-the-industrial-environment?dti=0&lc=en-CA

thames Silver badge

Siemens S5-DOS/MT

The article mentions Siemens S5-DOS/MT. Industry ran on Programmable Logic Controllers (PLCs), and Siemens was the industry leader with the biggest market share. At that time their main product line was the S5 series, which covered the full range from small to large with different models and configurations. I did a lot of work writing programs for equipment controlled by various S5 models.

The programming software (development software) was STEP-5. This was what amounted to the editor and compiler all rolled into a single highly specialized graphical IDE if you were thinking in computer terms. STEP-5 was apparently written for CP/M. They sold portable computers intended specifically for programming the S5 series, including an interface card to connect to the PLC and a socket which you could use to burn their ROM modules (a big orange block which you inserted into the PLC).

For customers who wanted to just load the software onto their own computer instead of using a dedicated Siemens programming system, they offered a version of STEP-5 which ran on MS-DOS under some sort of CP/M emulator. I don't know the details of that as they were never very specific about it, so I don't know if it was something they created themselves or whether it was a third party product. Siemens were very big on relabelling licensed third party stuff with their own brand name.

Very late in its life they came out with a native MS-DOS version.

The successor to the S5 series was the S7, and the programming software for that was native Windows software. By that time Siemens were huge Microsoft fans and they were always pushing the latest fad from Microsoft into their products across the board, only to watch that feature become obsolete and replaced by something else. As a result there was a lot of product churn. We were able to mostly avoid that as we had a rather jaundiced eye when it came to using proprietary features in systems that were supposed to last a couple of decades.

Zuck forms Meta Compute to pave the planet with 'hundreds of gigawatts' of AI datacenters

thames Silver badge

Not much to show for it

El Reg said :"However, compared to its competitors, Meta doesn't have much to show for all of its spending."

Eh? None of the companies involved in the AI bubble have much to show for the amount of money they have sunk into this so far.

Furthermore, none have made any convincing arguments for how they are going to make a profit out of it in the future either. The capital and operating costs are simply too high for the rather meagre results the technology is capable of.

I suspect that before too long success will be measured in terms of who failed quickly enough to avoid sinking too much money into it.

Imagine there's no AI. It's easy if you try

thames Silver badge

Re: That's not survival. It is an unnecessary nightmare.

I could see structural batteries in things like high end phones and tablets to make them a tiny bit smaller or have a slightly larger battery. Phone cases aren't really that strong anyway.

It doesn't make much sense in cars though. The structural parts of a car are designed for strength and the trend there is for stronger steels to reduce weight. You aren't likely to be able to create a battery material which can match high strength steel when it comes to strength and durability. You also have the problem of getting electric power from all over the frame to the motor power pack. A car is big enough that you can always find places to put the battery.

Cars are big objects so it's always going to be worth while having the frame be the best possible frame and the battery be the best possible battery and so have these as separate components.

It's when you get to small objects like phones where small size is important that combining functions makes sense.

Microsoft wants to replace its entire C and C++ codebase, perhaps by 2030

thames Silver badge

One more thing

This is one more thing to add to your list of "things that are not going to happen".

Microsoft have yet to demonstrate replacing even one significant program that they derive revenue from by using AI to rewrite something that was in C++ to something written entirely in Rust. They have re-written bits of programs in Rust, but that's it.

One programmer is not going to produce 1 million lines of finished code every month, AI or no AI. Of course they may get an AI to submit variations on the same 10 lines of code 100,000 times in an effort to hit a numerical target, but that isn't going to get them to an all-Rust program.

What they are doing sounds more like a desperate effort to promote customer use of AI than a serious goal for themselves.

US punishes China’s ‘dominance’ of legacy chips with zero percent tariffs

thames Silver badge

Re: The result of an investigation into China’s semiconductor industry

There are several things going on. One is that the main market for these older chips is in low cost electronic goods, of which China is the biggest manufacturer. The chip plants are located where their customers are. There's a huge market for this stuff because it's cheap and good enough.

The other thing is that the US has spent a great deal of effort in preventing Western companies from selling the latest chip making equipment to Chinese companies, only allowing stuff for making older generation chips of the type we are talking about here. So, Chinese chip companies had little choice except to stick to making lower end chips based on older chip technology. Washington patted themselves on the back for this brilliant success.

Then the US observed that Chinese companies were making lots of older generation chips. The American government studied this carefully and decided that this focus on older generation chip technology instead of buying the latest Western equipment was due to some deeply laid plan which the Chinese government have cooked up.

I'm sure this US strategy makes sense to someone.

All aglow about DCs, investors launch $300M at microreactor startup

thames Silver badge

It's not really an SMR, it's what is known as a "micro-reactor". If you go to their web site they say that it is intended to replace diesel generators which currently power remote communities and mines. This is why it's built as a single container-sized unit.

There's a whole separate market for micro-reactors. They are competing against diesel generators in terms of cost, and for diesel the big cost is often that of shipping the fuel in. In many places these shipments take place once per year.

Numerous attempts at using wind and solar in these applications have generally been unsuccessful. Due to their intermittency, they can't actually replace diesel, just supplement it. Due to their rapid variability (fluctuations over minutes or even seconds) they can only provide a small percentage of the total power (10 to 15 percent is typical in various projects in Canada, even with storage). There is no huge grid to absorb variability, and diesels are limited in their ramp rates without damaging them, so they can only compensate for the variability of wind and solar to a very small degree, which limits how much wind or solar you can have. You also can't have diesels run at idle for extended periods of time without carboning up inside. You need to put a minimum load on them.

Also, the fuel efficiency of diesels drops with reduced load, so your fuel savings are much less than the amount of power generated by wind or solar. On top of that, saving a few percent of fuel may not save you any money because much of the cost is shipping, and you have to have that same barge or ice road shipment come each year regardless of how much it delivers.

The ideal solution is hydro electric, but that is site specific, and building very long transmission lines is not practical if there isn't a suitable site nearby. This is why it isn't used much for small communities.

What micro-reactors offer is something that can completely replace diesels, something that wind and solar have not been able to do despite very large sums of money having been invested in numerous projects for this. There are several hundred communities and mines in Canada alone which depend on diesel and which are a potential market for micro-reactors.

The company in this story have contracts to supply the US military with micro-reactors to power remote bases. For many remote US bases, the main logistics burden is shipping diesel.

As to whether this company's product works as well as they hope it will, nobody knows yet. They hope to build a fuel demonstration project next year. That apparently won't produce electric power, just test whether their proprietary fuel design works like the simulations say it will.

As for using these reactors to power huge AI data centres, I don't see it being practical. They operate on a completely different scale. I suspect the AI industry interest in this sort of technology is a form of green washing, just like with that study that claims that wind can power data centres. When you dig into the latter it turns out that the plan is actually to have a big gas turbine on site plus a promise to buy some wind power from offshore wind farms located somewhere. With these micro reactors, if the AI money is still around when they are ready for market, then I suspect they will buy a couple and issue press releases about them while actually getting most of their power from gas turbines.

So as far as this reactor is concerned, it looks interesting if used for what its designers intended it for. It's not meant for the AI industry however.

FreeBSD 15 trims legacy fat and revamps how OS is built

thames Silver badge

Tried it

I installed FreeBSD 15.0 on Tuesday and have had zero problems with it. I run it in a KVM VM on Ubuntu 24.04. I run it as a server without a GUI and gave it 1 GB of RAM and it seems quite happy with that running my usual test routines (I use it to run automated software tests).

The one big surprise was that it is still on Python 3.11 when every other major distro seems to have 3.12 or later (the latest version of Python is 3.14). Python 3.11 is more than halfway through its support life, no longer receives bug fixes, and drops out of all support, including security fixes, in two years.

Distrowatch says that 15.0 ships with Python 3.12, but my test scripts are clearly reporting 3.11.13. I have to wonder if something went wrong here.
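For what it's worth, a quick generic way to confirm which interpreter a script is actually running under (just a sketch, not the poster's test scripts):

```python
import platform
import sys

# Report the interpreter version and the binary actually being used,
# which is handy when a distro ships more than one Python.
print(platform.python_version())  # e.g. "3.11.13"
print(sys.executable)
```

Running this on the box in question would settle whether the discrepancy is in Distrowatch's listing or in which interpreter the test scripts are picking up.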

US Navy scuttles Constellation frigate program for being too slow for tomorrow's threats

thames Silver badge

Re: This isn't to speed up delivery to the fleet

All or nearly all new frigates and destroyers are powered by either a combination of diesel and gas turbine, or just diesel. The ships cruise on diesel for economical operation and gas turbines are fired up for high speed operation. The newest ones tend to have an electric drive system, where the diesels and gas turbines drive generators and the generators feed electric motors which turn the propellers. Any engine or combination of them can feed any electric motor. On the slightly older ones the gas turbines may be coupled directly to the propeller shafts instead of going through the electric drive system, but they are only used when high speed is needed.

The replacements for the Burke class destroyers will look fairly similar to the Constellation class ships, but much bigger and with more vertical launch cells for missiles.

The differences between ships these days are mainly in the details. These include things like damage control and fire fighting arrangements, size of magazines, and the electronics fitted. The combat systems can cost more than the rest of the ship put together.

The Burke replacement will apparently displace about 14,500 tons and have 96 vertical launch cells, as compared to half that displacement and a third of the number of vertical launch cells for the Constellation class.

The Constellations were supposed to be an off the shelf quick fix for the failed LCS program. However, the US ended up making so many changes that only about 15 percent of the original FREMM design was left by the time they were done with it. This sort of defeats the whole purpose.

Because the Burkes (and planned replacements) are so large and expensive, the US wanted a smaller frigate to give them the numbers to cover places in the world where attention is needed but the threat level is lower. They also wanted them quickly to make up for the time lost pursuing the LCS dead end.

It's hard to say where things are going now, unless someone has a replacement already lined up which is ready to build. Given how that sort of thing tends to leak in the US, I would be very surprised if that were the case however.

thames Silver badge

Re: You know what they really need...

Pretty much all modern naval vessels of intermediate size can take containerized mission systems. The main armament goes in permanent mounts, but the specialist stuff that a ship only needs occasionally goes in containers.

thames Silver badge

Re: This isn't to speed up delivery to the fleet

Canada chose the Type 26 over the FREMM because it wanted the best there was and was willing to pay for it. It was what the RCN wanted from the beginning, and they were willing to write the rules to allow the Type 26 to be considered "off the shelf" even though construction hadn't started yet in the UK; they had that much confidence in the UK's ability to design ships. All of the ship designs looked at by Canada were European by the way; the US had absolutely nothing that was worth considering. The US have fallen quite badly behind in terms of naval ship design and construction methods.

The US chose the FREMM because they wanted something cheap and off the shelf to put into production immediately to fill the yawning void in their fleet. They needed something cheap enough to be built in numbers that could be sent to secondary areas, as the Burke class and its planned successor were seen as too expensive to be used anywhere except as part of their front line fleet.

The British Type 31 with an Mk41 launch system and ESSM missiles instead of Sea Ceptor would be pretty much what the US were originally looking for before they decided to change everything.

US naval shipbuilding is an utter shambles, with major problems in their frigate, icebreaker, and submarine programs. They recently bought an icebreaker design from Canada and Finland to reboot their disastrous icebreaker program, we'll have to see if they completely stuff that up by redesigning everything as well. Australia's plans to buy some second hand US nuclear submarines to fill in the gap until the AUKUS subs hit the water are in severe doubt as the US cannot currently build submarines fast enough to replace the ones that they have to retire due to age, so they may have none to spare when the time comes.

By cancelling the Constellation class ships the US are simply digging themselves deeper into a hole they are already shoulder deep in, as they have nothing ready to replace it with.

Britain plots atomic reboot as datacenter demand surges

thames Silver badge

Re: Hardly makes us meatbags feel better ...

UK interest in reviving nuclear power predates the AI bubble. It's based on the goal of the total electrification of society in order to meet environmental goals.

What happened is that reality sunk in and people realized that there is no path to "net zero" which involves wind turbines. Wind turbines are joined at the hip to fossil fuels and will be forever. Solar panels are the same.

If you look at which countries in Europe have low carbon emissions in their electricity sector, it isn't the ones which depend on wind/gas. It's the ones which depend on hydro-electric and/or nuclear.

So if the UK genuinely desires to save the environment, then it either needs to find a continental scale high mountainous plateau somewhere in say Norfolk and build hydro-electric plants there, or else build enough nuclear power plants to power Britain. The latter sounds like the more realistic plan.

thames Silver badge

Re: Good but ...

The fundamental issue is the structure of the electricity market. To start with, it isn't a natural market. It's an artificial construct which is intended to emulate a real market but is in reality a very convoluted system of regulation intended to produce a pre-determined outcome.

What it does is optimize for short term profits rather than long term low cost or for stability or reliability. Since there is no long term security, investors optimize for short term profits. This also means that long term investments have to pay higher interest costs because they have no guaranteed market. This is the real reason why the UK has built gas turbines / wind farms (the two are joined at the hip) rather than nuclear power plants. The real money is in the subsidies and offsets, producing reliable supplies of electricity at the lowest possible cost is a mug's game.

What the UK needs to do is to admit that the "deregulated" electricity market experiment has failed and move on from it.

Page: