* Posts by thames

1192 publicly visible posts • joined 4 Sep 2014


How datacenters use water – and why kicking the habit is nearly impossible

thames

Re: just a thought

In Toronto the district heating system also provides cold water for air conditioning in summer. The municipal water system draws water from several kilometres offshore deep in Lake Ontario where the water stays cool (4 C) year round. This water is then run through heat exchangers to cool the water in the district air conditioning loops. This then provides air conditioning to several hundred buildings, saving about 75 per cent of the electricity which would otherwise be required for air conditioning. The water being used for cooling is already being drawn for municipal water supply anyway, so there's no additional water being used.

Toronto also gets the majority of its electricity from nuclear power, with more plants being built close by. The nuclear power plants also draw their cooling water from either Lake Ontario or Lake Huron. The Great Lakes are large enough that the amount of heat being discharged into them is insignificant compared to their size. In winter only a very small area outside of the cooling discharge outlet doesn't freeze, so it's easy to see that the amount of heat involved may be large on a human scale but is very small on a geographic scale.

If data centres need lots of electricity and cooling water then they need to start locating in places where these resources are abundant and stop building in places where they are lacking. As can be seen with Toronto (natural cooling water and nuclear power), there are solutions and they are practical even if they are foreign to Silicon Valley.

Boffins carve up C so code can be converted to Rust

thames

Re: “Minimal adjustments”

The HACL conversion to Mini-C required only minimal code changes, and the EverParse conversion required no source code changes at all. Just because the C language has some problematic features doesn't mean that your C program uses any of them.

So, hypothetically we could create a "Mini-C" compiler and add things like "fat pointers" and run-time bounds checks to it. Then see if our C program compiles with it and passes its tests. If it does, we're done without using Rust and with no source code changes, just a recompile.
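To make that concrete, here is a rough sketch, written out in plain C with invented names, of what a compiler-generated "fat pointer" and its bounds check might look like. A real Mini-C style compiler would emit the check automatically at every access rather than relying on a hand-written helper like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical illustration only: a "fat pointer" carries its base
     * address and length, so every access can be bounds checked. */
    typedef struct {
        int    *base;   /* start of the underlying allocation */
        size_t  len;    /* number of elements in the allocation */
    } fat_ptr;

    /* Bounds-checked read; a Mini-C compiler would inline this check
     * at each dereference instead of calling a helper function. */
    int fat_read(fat_ptr p, size_t i)
    {
        if (i >= p.len) {
            fprintf(stderr, "bounds violation: index %zu, len %zu\n", i, p.len);
            abort();    /* trap instead of reading out of bounds */
        }
        return p.base[i];
    }

    int main(void)
    {
        int data[4] = {1, 2, 3, 4};
        fat_ptr p = {data, 4};
        printf("%d\n", fat_read(p, 2));   /* prints 3 */
        fat_read(p, 7);                   /* traps cleanly at run time */
        return 0;
    }

The cost is a larger pointer plus a compare-and-branch per access, which is much the same trade-off Rust makes for checked slice indexing.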

If there are a few spots in the program which need features that Mini-C doesn't have, then factor those out as separate regular C files, and include them in the project, just like Rust does with "unsafe" code. We're then done without using Rust and with limited source code changes.

Perhaps Rust would handle a few corner cases better than a hypothetical Mini-C, but if a C variant can do 90 per cent of what Rust provides, it would be more likely to be widely adopted sooner and so provide more security in a practical sense when looking at the industry as a whole.

If just replacing existing code bases with a new language was simple, easy, and cheap, all that COBOL code out there would have long ago been turned into Java. That hasn't happened, and I similarly suspect that C will be around for a very long time to come and lots of new C code will be written.

Fission impossible? Meta wants up to 4GW of American atomic power for AI

thames

Re: SMRs are a scam

There are economies of scale to most electrical generating plants, regardless of the technology used. For nuclear reactors many of these are associated with the civil works such as site prep, roads, cooling water channels, grid connections, etc. This is why so many power plants have multiple units.

New "conventional" SMR designs have optimized their layouts to have reduced footprints in order to require smaller civil works, and also to use less concrete by making the reactor building more compact (so there is less to enclose).

Existing heavy water moderated reactors, such as the CANDU, which are in commercial operation around the world, can use thorium as fuel. India uses thorium as part of their fuel load in their CANDU derivatives. India do this mainly because they have lots of thorium but much less uranium, and they want to reduce their dependence on non-domestic supplies.

For most countries though, uranium is so cheap that it simply isn't economic to use the more complex and expensive thorium fuel. Reactors fuelled entirely with thorium will not "self-start". They need either enriched uranium or plutonium (reactor grade, not the special isotopes used in bombs) to start the reaction and create U-233 from thorium to run the main reaction.

At this time, it is cheaper to simply use abundant supplies of uranium in a once through fuel cycle than to set up the chemical reprocessing facilities to recycle spent fuel into mixed-oxide (MOX) uranium or thorium based fuel. France do it (for uranium based MOX), but that was a decision based on national security, to reduce reliance on uranium imports.

If uranium prices rise high enough, then there are already reactor designs that can use uranium or thorium based MOX fuels, and these reactors have been in large scale commercial operation in multiple countries around the world for decades. They just don't happen to be used in the US, which is why the US-centric media don't talk about them much.

thames

Re: Perhaps there's a ready solution.

The design of the 300 MW Hitachi SMRs which are being built in Canada at this time is also approved by the US NRC. The Canadian Nuclear Safety Commission (CNSC) and the US NRC shared information during the safety design review process. The US plan on building the same SMR themselves, and there are MOUs between the two countries regarding the sharing of Canadian experience with this design.

thames

Re: There's SMRs, and then there's SMRs

It's notable that Rolls Royce's SMR is a 470 MW design, about the size of a typical late 1970s reactor, and it uses conventional fuel from commercial suppliers. Again, the emphasis was on the "modular" rather than the "small" aspect of SMR. The design is centred around how to assemble large factory-built components with the minimum of time and work on site. It is an entirely different beast from the unconventional micro-SMR designs.

thames

There's SMRs, and then there's SMRs

There are four SMRs already under construction just east of Toronto. These are 300 MW Hitachi designs, being constructed by a long established Canadian nuclear component supplier for a major utility. They are about the same size as the first generation of commercial reactors which were built in Canada (near to these ones). They will use standard fuel from commercial suppliers. The economics and technology are not really an issue.

What is questionable is the very small SMRs being promoted by companies with no track record in the nuclear industry, no manufacturing facilities of their own, and which use specialized highly enriched fuel not available from existing commercial suppliers. These very small SMRs (100 MW or even smaller) have very questionable economics, as there are significant economies of scale in most power plants. They are simply too small and use very expensive non-conventional fuel.

If Meta want 1,000 to 4,000 MW, then three to a dozen or so 300 MW reactors could do the job for probably much less cost than the very small SMRs being promoted by some companies.

The whole point of a "small modular reactor" is the modularity, the "small" is just a way to get there. The idea is to build as much as possible in a factory and do the least amount of assembly on site. The company which can do that with the largest reactor will have the lowest operating costs.

What Meta should be doing is building their data centres in places which already have a good supply of electricity, instead of putting them in places with no electricity and then looking for someone to build power plants to serve them.

I expect that most of these micro-SMR companies will go out of business without having ever built an actual commercial reactor, while the companies offering ones in 300 MW and up sizes will be selling plenty to utilities.

Cryptocurrency policy under Trump: Lots of promises, few concrete plans

thames

To expand on what you said, "orphaned wells" are oil and gas wells where the previous owner went bankrupt and the bankruptcy trustees couldn't find another company anywhere who were willing to touch them because they're worthless, and indeed an overall liability. The wells that are actually worth operating get bought up by another oil company.

In Canada orphaned wells end up in the hands of the provincial government and the cost of sealing them is paid by a fund which all oil companies must pay into (in Alberta that fund is grossly underfunded in terms of future liabilities).

The idea of a crypto-mining company somehow succeeding at operating oil and gas wells where the entire industry of actual oil companies looked at it and decided to pass is a bit odd, to say the least.

I find it utterly implausible that anyone promoting this idea actually believes what they are saying about it. The fact that someone is promoting it tells me what should be obvious to anyone, it's a scam.

Trump tariffs transform into bigger threats for Mexico, Canada than China

thames

Canada and the US have a long standing treaty which allows either country to send illegal immigrants back to the other. There was a loophole which allowed people to claim refugee status at official border crossings, but this was closed a couple of years ago with a new treaty. Now refugee claimants can get tossed back for the original party to deal with.

The new treaty was signed when Biden was going to visit Canada and was told that the number one issue the Canadian press were going to ask him about was the flood of illegal immigrants from the US crossing into Canada (it's mainly from the US to Canada) through the refugee loophole. Suddenly, a treaty which the US (including Trump) had for many years insisted was not possible for the US to sign became possible, and it got signed and approved in short order, in time for Biden's visit.

Prior to that the biggest issue that Canada had with the US was illegal immigrants coming from the US to Canada, while the Americans (including Trump) claimed they could do nothing about it. Canadian opposition politicians were demanding that Canada build a wall to stop it, although nobody was suggesting that the Americans pay for this one.

Most of the border is actually not that easy to cross illegally. Most of it is either in very remote areas, or lakes and rivers and mountains. Both are monitored closely. The border is where it is because it was a defensible line dating from a series of UK-French, and later US-Canada, wars. Canada faced a long running invasion, insurgency, and terrorism threat from the US through most of the 19th century.

There is still a smuggling problem, with most of it being drugs and guns from the US being smuggled into Canada in shipments of goods. The US import drugs from places such as South America and Asia, and organized crime gangs arrange for them, as well as US-made illegal guns (mainly pistols), to be smuggled into Canada.

The biggest single problem is probably an Indian Reservation which straddles the border south of Montreal. It's technically two separate reservations, but the residents have special treatment from both countries which allows them to travel freely between them without passing through customs and immigration. Native organized crime gangs have heavily infiltrated local governance and police (they have their own police forces) on both sides of the border, and smuggling of everything from cigarettes, to drugs, to illegal immigrants is a major industry. Their proximity to Montreal and New York means they have very good transportation links to distribute their goods everywhere. Doing something about it means making coordinated changes to treaty arrangements by both countries with both reservations, but that is a hugely sensitive historic political issue so nobody has done much about it. Trump completely ignored it the last time around, so I doubt he'll do anything this time either.

thames

Re: Bring it on

Canada produces about 3.5 million barrels per day of bitumen from the oil sands, and this number is prior to the new export pipeline to Pacific markets which started up this past spring. Prior to that production was limited by pipeline capacity. Canada also produces conventional crude from the prairies and off shore on the east coast, but two thirds of overall oil production (about 5.4 million barrels per day of all types) is bitumen from the oil sands of northern Alberta.

The bitumen is diluted with lighter oil (diluent) to help it flow through the pipeline. Smaller pipelines flow in the opposite direction to return the diluent from the ends of the pipelines back to the start so it can be reused.

There are also some plants which convert the heavy bitumen into lighter synthetic crude oil, but it's generally just cheaper to modify the receiving oil refineries to be able to use the bitumen as is.

Major US oil refineries on the coast of the Gulf of Mexico were designed to use very heavy oil from Venezuela. At about the time the latter's oil industry started circling the drain several decades ago, oil sands production technology in Canada progressed to the point where large scale production was profitable. As Venezuelan production fell, Canadian production replaced it in US markets. These refineries are designed around this particular type of oil and there aren't many alternative sources, so US tariffs against Canada will feed directly into higher prices for consumers in the US.

The new pipeline to the Pacific was built by the federal government specifically to try to diversify oil exports away from the US to reduce the economic and strategic risks of being too dependent on trade with the US. Due to the earth being a sphere, Japan, Korea, and China are reasonably close to BC in terms of shipping across the north Pacific. The first shipment of oil from the new pipeline went to a refinery in China, so the market is there.

thames

Trump 2.0

This is just Trump looking for an excuse to try to use tariff threats as a negotiating lever again. The previous time around he declared Canada and Mexico to be "threats to US national security" and slapped massive tariffs on imports from the two.

However both responded with tariffs of their own, carefully targeted against the districts and states of politicians whose support Trump needed, and Trump was forced to cave in and back away with his tail between his legs. I suspect it will go the same way this time, but only after extensive damage to all three economies.

Important Republican party members are already saying that they're not going to let Trump do whatever he wants on this. The biggest trade item for all three countries is autos and auto parts. The industry is so closely integrated in all three countries that the US auto industry would collapse if Trump were allowed to go ahead with it. The Chinese would be falling off their chairs laughing at the US self destructing on this.

You would think that Trump would have learned from his previous mistakes, but he's evidently learned nothing and forgotten nothing.

And in case anyone imagines that Biden was somehow a paragon, he was just as protectionist as Trump, he was just a lot less stupid and self destructive in going about it.

This is the direction the US are going in regardless of who is in power, and it's why both Canada and Mexico have ongoing efforts to diversify trade away from the US. The US are not the future so far as Canada and Mexico are concerned, and it's things like this which is why.

Datacenters line up for 750MW of Oklo's nuclear-waste-powered small reactors

thames

Re: "At 300 MW, these SMRs are not drastically bigger than Pickering's 500MW reactors "

Yes thank you, it should have said that the new Hitachi SMRs are not drastically smaller than the reactors at Pickering.

As for build time, they did site prep work for all 4 units this past autumn. Construction of the nuclear works will start in early 2025, and the first unit is expected to be in commercial operation by 2029. This is fairly quick, so the concrete work doesn't appear to be a serious bottleneck.

As for the type of steam turbines used, I don't think anyone really cares. Ontario gets the majority of its electric power from nuclear energy and has for many years, so the steam turbines seem to work just fine as is.

The UK AGRs (Advanced Gas-cooled Reactors) were designed around the idea of being able to use high temperature steam turbines and achieve higher thermal efficiencies, but the practical benefits of this were much less than the problems associated with higher temperatures. The AGR design proved to be a technological dead end, as has every other high temperature reactor.

Oklo's reactor is a liquid metal cooled fast neutron reactor. A number of countries have built these types of reactors over the decades and found them too complicated, expensive, and impractical. I haven't seen anything which would lead me to believe that Oklo's design will be any better.

The SMRs being built at Darlington are based on well proven technology. However, Ontario are planning to build large nuclear reactors as well, again based on existing technology.

thames

Re: "to develop new fuel recycling technologies."

France already recycle nuclear fuel, and other countries do as well. It's called MOX (mixed-oxide plutonium-uranium) fuel, and they use it extensively. This is reactor grade plutonium, which is a different isotope mixture than bomb grade plutonium (which has to be specially made for bombs in specialized reactors).

The reason it isn't done more widely is that uranium is currently so cheap that by most estimates it is cheaper to use a once-through fuel cycle and just store the spent fuel until such time as uranium prices rise far enough to make it profitable to recycle the fuel. The French recycle fuel for reasons of energy security, so they don't have to import as much fresh uranium. They estimate that the costs of recycling are about the same as a once-through cycle plus long term storage. Most of the long lived radioactive elements in spent fuel are isotopes of plutonium, which of course the French recycle back into fuel to be burned up in the reactor.

Canada has used MOX fuel made from surplus ex-Soviet nuclear weapons. However, this was more expensive to make than standard fuel (Canada uses natural non-enriched uranium fuel, which is cheaper than the enriched uranium which many countries use). It was only done in this case as part of an international agreement to dispose of surplus nuclear weapons left over from the collapse of the Soviet Union.

However, research has been done on recycling spent fuel from PWR reactors (a very common reactor style) and using it in Canadian designed CANDU reactors and derivatives (CANDUs are used in a number of countries). One method developed in South Korea involves simply chopping the spent fuel rods from a PWR into CANDU compatible lengths and welding the ends shut and feeding them into CANDU reactors. The fuel may be spent (used up) from the perspective of a PWR, but to a CANDU which runs on non-enriched uranium, this is high grade fuel.

The other method (developed by Canada) involves crushing the spent oxide fuel pellets from a PWR and blending them and mixing in fresh uranium to get a more consistent fuel mixture before reforming them into pellets and fuel rods and bundles. This is a dry process which is simpler and produces less waste than the conventional chemical reprocessing system used by France to produce their MOX fuel.

However, neither of these processes has been commercialized, again because uranium is currently so cheap and abundant that it's not economic to do so.

There are other fuel recycling methods as well, but they all run into the same issue of there being no market for it.

At this time the only thing that might make it worthwhile, aside from the national security reasons used by France, is if PWR reactor operators paid companies to recycle the fuel in order to take the waste off their hands. Using up the left over plutonium by recycling it gets rid of most of the long lived waste which would otherwise have to be stored.

thames

Hitachi in Canada

Hitachi (they go by various names) already have a customer building one of their SMRs in Canada, just east of Toronto. This is a 300MW reactor, and the plan is to build 3 more units alongside it for a total output of 1,200 MW.

The first reactor is already under construction, and it is scheduled to be connected to the grid in 2027, and delivering electric power within 2 years after that. This is being built next door to an existing nuclear power plant (Darlington) and is being operated by OPG (owned by the province of Ontario), who run a number of other nuclear power plants as well.

The actual fabrication of the reactor is being done by a company in Canada which has many years of experience building major components for nuclear power plants.

Hitachi are one of the shortlisted companies by the UK for SMR deployment in the UK. I understand their proposal is to basically clone the Canadian plant for the UK.

At 300MW this reactor is at the large end of the scale for SMRs, but the economics of very small SMRs is questionable. There are civil works (roads, cooling water handling, grid connections, etc.) which have inherent economies of scale, and these costs can be quite significant for very small reactors. At 300 MW, these SMRs are not drastically bigger than Pickering's 500MW reactors (also just east of Toronto), which date from the early days of nuclear power in Ontario. The "modular" (ability to build standard reactors in a factory and ship already assembled to the site) aspect is far more important to the SMR concept than the "small" aspect.

As for data centre companies building their own nuclear reactor, it would make far more sense for them to do what other companies that need lots of cheap electricity (e.g. aluminum smelters) do, which is to locate operations in countries which have an abundant supply of electricity.

Broadcom makes VMware Workstation and Fusion free for everyone

thames

Re: Long Time VirtualBox User

I switched to KVM from VirtualBox mainly because VirtualBox VMs were randomly hanging. The final straw was that after an update VirtualBox wouldn't run at all and it took several weeks for another update to fix that. I haven't had any problems at all with KVM so far. However, I had been using VirtualBox for years and my experience with KVM has been a matter of months so far.

For what I'm using it for (software testing) both offer equivalent functionality. However, the virtual network connections for KVM were much easier to use than was the case for VB.

thames

Re: Long Time VirtualBox User

If you are running it on Linux, then you may wish to have a look at KVM as well. I recently switched from VirtualBox to KVM on Ubuntu 24.04, and have been very happy with it. I have been using it strictly for software testing, so I can't comment with respect to what it would be like to do daily work on it.

I did find that I couldn't import my existing VirtualBox image into KVM qcow2 format. Theoretically it should be possible, but in practice the converted image didn't run. I was able to import an image meant for VMware into KVM with no problems however.

QNX 8 goes freeware – for non-commercial use

thames

Registration required

Requiring registration is going to kill a lot of casual developer interest right there. Too many people are going to see the need to talk to QNX about a non-commercial license as just being a tool for generating sales leads for QNX salesmen.

If I could just download an image that I could run in a VM or on a Raspberry Pi I might port some of my open source software libraries to it. If I have to jump through hoops, then why bother?

I would think that getting more people to use QNX would be in the company's interest. In terms of generating revenue it's easy for them to find who the big customers are who are shipping QNX in commercial products. It's not like they need to worry about per-CPU server licensing customers in corporate server rooms.

A sit-down with Ubuntu founder Mark 'SABDFL' Shuttleworth

thames

I'm not a patent lawyer, but from a user perspective Gnome 2 was as far from Windows as any GUI that I've ever used, except perhaps Windows 3. I was originally using Mandrake (later Mandriva), and the default install included both KDE and Gnome (it was a big stack of CDs).

KDE was very Windows-like, and I had no difficulties adapting to it.

Gnome 2 on the other hand seemed to be from another planet and I would look at it, find the concepts alien, and then go back to the very Windows-like KDE.

However, Mandrake/Mandriva started going down the tubes (new management brought in by investors), and I had to find an alternative. After evaluating what was out there, I settled on Ubuntu. The default desktop for it was Gnome 2, so I decided to suck it up and get used to it, learning it bit by bit on Mandriva until the day I was ready to make the big jump and install Ubuntu instead.

Gnome 2 was alien, but it was actually pretty good, once I got used to it. I wasn't afraid to try something new, and I felt that Gnome 2 was actually easier to use than Windows in terms of getting things done.

However, Gnome 3 was a different story. Instead of just being an adaptation of Gnome to the new GTK version, they decided to exercise their creativity and strike out in a "bold new direction" in user interface concepts. This was the classic example of a group of programmers trying to be innovative UI designers without really understanding how the average person thought or interacted with a computer. The result was a design which required a lot of mouse movement from one side of the screen to the other to get anything done. Using early Gnome 3 for an extended period of time has been compared to sawing wood.

As for Unity, broad hints were dropped that the UI designer contracted by Canonical (whose report was published and free for anyone to use) had in fact been heavily inspired by the Apple Mac. This wasn't just a matter of the window control buttons being on the left, but also the function (although not the location) of the dock and other things. They didn't say it outright though, as that would invite lawsuits from Apple. I can't give you a source for this, but I did read about it in various articles. I wasn't familiar enough with the Apple Mac to make that connection myself.

When the Gnome 3 developers finally realized that they were on course to being a footnote in computer history and put their misguided "creativity" back in a box on the shelf, then it started becoming more like Unity.

When Gnome 3 progressed far enough, Ubuntu was eventually able to ship a version of it with extensions that made it very similar to Unity. I found the transition from Unity to Gnome 3 completely seamless as a result. I still put the window buttons on the left though, as that makes more sense from an ergonomic perspective (most of the other things the mouse needs to do are also on the left).

Two explanations have been given for the dock being on the left side of the screen in Unity (and later in Ubuntu flavoured Gnome). According to the pundits it was to avoid lawsuits from Apple. According to Canonical it was because modern monitors were much wider than they were tall, so the Unity UI design contractor said it made more sense to use horizontal screen space than vertical screen space. I suspect the latter is the correct reason. It does make more sense to have it on one side of modern wide monitors.

As for the Windows keyboard UI, that was copied from OS/2 when that was a joint project with IBM, and came from inside IBM as CUA (Common User Access Guidelines). This was intended to provide a common user experience across IBM's product line, from mainframes (I don't know how that was supposed to work), to minis, to Unix workstations, to PCs. Microsoft kept that when they split off and went in their own direction with Windows.

You can find IBM CUA if you google for it. It was considered to be a public spec which was free for anyone to use. It's no surprise that Linux distros implemented it instead of being different for the sake of being different.

thames

That's a very nice interview, but I think that Shuttleworth's sense of what was significant comes from his perspective on what he spent time on, rather than how it affected people outside of Canonical.

Unity on the other hand only came into existence because early Gnome 3 was absolutely dire and nearly everyone thought they were on a course to crash and burn. Not even the company behind it, Red Hat, were willing to ship it as the default desktop on their distro. It looked like Gnome was destined to die and the only realistic alternative was KDE.

As a company which had placed so many of their bets on having a usable Gnome desktop, Canonical was in a very difficult position and I suspect that Unity was born out of desperation, to have something actually usable for their flagship desktop.

So they hired an actual UI designer who came up with Unity. It was actually pretty nice. Then Gnome started copying Unity (something they will never admit to) and evolved into something actually usable. Once that happened, the reasons for Unity to exist went away and Ubuntu were able to ship a Gnome desktop that gave them all the important bits of Unity.

When Windows Server 2025 is delivered like it's 1999, nobody gets to party

thames

As I understand it, the reason why updates work so smoothly on Linux compared to Windows has a lot to do with file system semantics. Linux follows unix file system semantics, where old and new versions of the same file can co-exist side by side until the last user of that file releases its file handle, at which point the old version goes away. So you write the new files to disk and applications automatically use the new versions when they restart. Long running daemons can be sent a restart signal, but again, that's a long established practice.

Windows file systems don't do that, instead they have locks preventing a new version from being installed until the last user of the old version has released it. Thus you have the update, reboot, update some more, reboot again cycle.
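To illustrate the unix side of this, here's a minimal POSIX sketch (the file name is made up; a real package update would typically write the new version and rename() it over the old one, which has the same effect):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a file and keep a descriptor open on it. */
        int fd = open("demo.txt", O_CREAT | O_RDWR, 0644);
        (void)write(fd, "old version\n", 12);

        /* Remove the name. The directory entry is gone, and a new file
         * could now be created under the same name, but the old inode
         * lives on for as long as this descriptor stays open. */
        unlink("demo.txt");

        /* The old contents are still readable through the open handle. */
        char buf[32];
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("still readable after unlink: %s", buf);
        }

        close(fd);  /* only now is the old version's storage released */
        return 0;
    }

On Windows the unlink() call would typically fail with a sharing violation while the handle is open, which is exactly why updates there end up scheduling file replacements for the next reboot.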

For both operating systems, these assumptions are baked into applications, not just the OS itself. This means that changing Windows to work more like Linux (unix) would probably be very, very difficult and would likely break an unknown number of business applications in the process.

Why we're still waiting for Canonical's immutable Ubuntu Core Desktop

thames

As I understand it, Flatpak isn't really able to do a lot of things which Snap can. It was a very narrow solution to portable packaging of GUI apps rather than a general solution for software in general.

Snap evolved out of Ubuntu Phone, and was one of the bits of that project which were later repurposed for other things. Currently it's being used for embedded and cloud applications. What's different about Ubuntu Core as compared to at least some other immutable OS projects is that the actual OS part is broken up into several smaller snaps which can be exchanged in a mix and match fashion rather than being one big chunk.

As stated in the story, what they are trying to figure out now is how to split the desktop up into multiple chunks as well. From the sounds of it they want something that works for KDE and possibly others as well, and not just a Gnome only solution.

thames

Re: I’m not a fan of ‘snap’

I've had zero problems going from 22.04 to 24.04. As for memory usage, top is currently reporting 2131.8 MiB used, including several applications such as Firefox. If you have 128GB of RAM, then the amount being used by Ubuntu and Firefox is barely a rounding error for your PC.

As for Nautilus supposedly not reporting file sizes, it certainly does when I use it. Right click on a file, select properties, then hover the mouse over the file size and it reports the size down to the last byte.

As for Nautilus extensions to show file hashes, four different ones show up in the repos if you ask apt. Take your pick, although I have no intention of trying them.

And if the Foxit extension is "obscenely slow", then don't use it. The PDF reader built into Firefox works just fine, use that instead. It's fast, it scrolls smoothly, and it displays everything just fine.

We get the picture. You don't know much about Ubuntu, but decided to hack away at the innards to make it "better", and now it's f*cked. But it's definitely not your fault, no siree, it's the fault of Ubuntu because you don't make mistakes. I don't think that switching distros or even using a different OS altogether is going to solve that sort of problem.

You've got 128GB of RAM. Try installing KVM and then installing Linux in a VM and try out your hackery in there before experimenting on your actual PC.

The US government wants developers to stop using C and C++

thames

Re: Why?

The issue is that to convert a C program to a "proper" Rust program you have to approach a lot of things in a different way. Rust is not simply C with different key words.

If you want an analogy, then if you've done much Python programming you have probably seen Python programs that were either directly translated from C or Java, or written by people familiar with C or Java and are just writing their Python program the same way they write a C or Java program. This is a well known phenomenon in Python and is immediately recognizable by people with experience with the language. The end result is something very verbose and slow because the Python language is being used in ways it was never meant to be in order to make it work like a different language.

With C and Rust, there have been multiple machine translation projects, including AI based, but the results have been very disappointing. You simply end up with a C program that is written in Rust rather than a "proper" Rust program, and you probably haven't made much if any use of the memory safety features which you were converting to Rust for. So why bother?

The answer would seem to come down to either completely replacing all existing C and C++ programs with ones written in another language (like that's ever going to happen), or else addressing the problem in a different way, one that nobody has really come up with yet.

Canada closes TikTok's offices but leaves using the app a matter of 'personal choice'

thames

Re: ?

The story is not going to make any sense to you if you don't live in Canada and haven't been following the Canadian news. This story doesn't mention any of the background to this announcement but instead goes on about completely unrelated American issues, so there's no reason why you would understand it.

One of the top long running stories over the past couple of years has been about security problems originating in foreign countries. The Government of India have been running around assassinating Modi's political critics in Canada, Saudi Arabia and Iran have been doing their own dirty deeds, and Chinese people have been saying negative things about Canadian politicians on WeChat.

A long running review committee is just wrapping up their report on all this, and so the government have to be seen to "do something" in order to cut off the press reaction.

Now they've "done something" and so the whole thing can be put behind us and we can get on to the next issue. India will still murder Modi's political critics in Canada, Saudi Arabian security agents will continue to show up at the airport in Toronto with bone saws in their luggage and evasive stories about the reasons for their visit to Canada, and Chinese people will still call Canadian politicians insulting names on WeChat, but we are now safe from, well, something or other.

It's like the Huawei / Meng story from a few years ago. If you didn't follow the Canadian reporting on what actually happened you ended up reading the wildly distorted and inaccurate version circulating in the international press and so would have no idea what was going on or how it all started or ended.

Chinese attackers accessed Canadian government networks – for five years

thames

India

The Indian Intelligence assassination campaign in Canada was based out of their US diplomatic offices. They had intended to kill a number of other opponents of the Indian government in the US as well, but the "hit man" they contracted to do the job in the US turned out in reality to be a police agent, so the Americans got the whole set of communications between Indian intelligence and the "hit man" in the US. The hit man India hired in Canada (via their US operations) however turned out to be the genuine article and went through with the job.

The Americans shared their information with Canada after the fact, so there's really not any doubt about what went on. A US based Sikh organization were promoting an unofficial "independence referendum" amongst Sikh expats, and the Canadian Sikh who was murdered was their Canadian contact for this. The assassination campaign was intended to wipe out the US and Canadian people involved in this.

India's main issues with Canada are that there are a very large number of Sikhs in Canada (the largest Sikh population outside of India), and a few of them are involved in promoting the idea of an independent Sikh state carved out of Punjab. The other major issue was that the Indian farmer protests were very big in Canada among Indian immigrants (the usual immigrant politics) and apparently Modi felt they were making his government look bad abroad. Modi had demanded that Canada arrest and imprison the protesters, but Canada had refused, since protesting is legal in Canada.

Canadian security are busy following up on a number of other murders in which India are believed to be involved. Apparently India hires ethnically Indian crime gangs to do their dirty work so they can then claim the targets were simply the victims of ordinary crimes (extortion, murder, etc.). This is their standard line when confronted on this. However, the evidence the Americans have and have shared with Canada is pretty ironclad, on top of other Five Eyes sources. From the sounds of it, either the US or the UK have been heavily hacking Indian diplomatic communications and IT infrastructure and have a mass of intelligence. The hacking has not been all one way.

India has responded to the US on these issues with profuse apologies and great deference. Their response to Canada on the same issues has been one of belligerence and denial that they had anything to do with it. I suspect the difference in response in this case has to do with the relative size and power of the US versus Canada and Indian and US diplomatic strategy in Asia. American police authorities in New York are apparently filing criminal charges against one or more Indian officials, which may throw a wrench in the works of attempts to sweep this all under the carpet.

People who find all this surprising should realize that India are a rising power and future great power and will act in their own interests as far as they can get away with it. Being a democracy or a fellow Commonwealth member has no bearing on how they act in the world.

As for the Chinese threat, what actually has Canadian MPs concerned isn't traditional espionage, it's people saying nasty things about them on WeChat accounts during election campaigns. That's the thing they want stopped. MPs are much less exercised over any computer nonsense.

Huawei releases data detailing serverless secrets

thames

Re: A quite generous move...

Yes and no. Different clouds from different vendors are different enough that optimizations based on this data may not be directly applicable to a cloud from another vendor.

On the other hand, detailed data of this sort may attract the attention of independent academics who want to use it to do research on coming up with new optimization algorithms which Huawei may be persuaded to test for them.

If the above is the case, then other companies may benefit to some degree, but Huawei may benefit the most.

Ford CEO admits he drives a Chinese electric vehicle and doesn't want to give it up

thames

American auto companies had stopped making lower priced cars and focused on expensive ones for several reasons. One reason was that the chip shortage meant production was limited by parts availability, and so they focused on high end models with high profit margins. The other main reason was that very low interest rates, virtually zero, meant that finance costs were very low.

Both effects have gone away, and auto industry news is talking about the panic in Ford, GM, Stellantis (Chrysler), etc. that they don't have low cost models to sell.

Average selling prices of new and used cars are falling in the US. The analysis of the situation being talked about by auto industry analysts is that the people who can afford and want an expensive car already have one, and the ones who either can't afford an expensive one, or can afford it but aren't interested in spending their money on a car, are holding off buying, and when they do buy, they buy the cheapest one they can find. The result is that storage lots are full of expensive cars that can only be unloaded at deep discounts.

General opinion seems to be that the auto makers need to offer low cost cars if they want to stay in the mass market. That means smaller, simpler cars without a lot of the useless crap which drives up the price.

The Chinese are already selling cars like that, in both electric and non-electric models. The danger for the American makers is that Chinese companies will gobble up the global market outside of a protected American bastion, leaving the American industry like the British auto industry of the late 20th century.

The American companies know this, and they also know that the people who dismiss the Chinese auto makers are like the ones who did the same with respect to the Japanese in the 1970s. The head of Ford knows this. The people who don't know it are people who are living in the past, just like 50 years ago with the Japanese.

It's about time Intel, AMD dropped x86 games and turned to the real threat

thames

Re: x86 Train Wreck

The compiler won't in most cases use a lot of these optional features because there is often no way of expressing them in a generic portable high level language such as C. Your options are generally either assembly language or compiler extensions (which allow you to specify assembly language instructions in the middle of a C program).

In some cases the compiler will automatically use some features if present, but generally only for very simple cases. In many cases you need to actually use a different algorithm to be able to use the instructions effectively. That isn't something that compilers are good at figuring out for themselves. You could easily need to use one algorithm for floating point, another algorithm for unsigned integer, a third algorithm for signed integer, and a fourth algorithm for non-optimized generic C for fall-back mode. Been there, done that.

I have an open source C project where I support Linux, Windows, and BSD. Windows users get a pre-compiled binary without optimizations because many Windows users don't have a compiler installed and asking them to install one is asking a bit too much.

Linux and BSD users get a source distribution only because unlike Windows installing a compiler there is very easy. The Linux build script looks at the CPU flags to see if it has the optimized features. If so, it compiles for these (there are a bunch of if-defs in the code). If the flags are not present, it uses generic fall-back mode.
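As a sketch of that build-time dispatch pattern (the macro name, the choice of SSE4.2 popcnt, and the function are all invented for illustration, not lifted from my actual project): the build script compiles with, say, -msse4.2 -DUSE_SSE42 when the CPU flags allow it, and without either otherwise.

    #include <stdint.h>
    #include <stddef.h>

    #if defined(USE_SSE42)
    #include <nmmintrin.h>

    /* Optimized path: use the POPCNT instruction via an intrinsic.
     * This translation unit must be compiled with -msse4.2. */
    size_t count_bits(const uint32_t *data, size_t n)
    {
        size_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += (size_t)_mm_popcnt_u32(data[i]);
        return total;
    }

    #else

    /* Generic fall-back: lowest common denominator standard C,
     * works with any compiler on any CPU. */
    size_t count_bits(const uint32_t *data, size_t n)
    {
        size_t total = 0;
        for (size_t i = 0; i < n; i++) {
            uint32_t x = data[i];
            while (x) {
                x &= x - 1;   /* clear the lowest set bit */
                total++;
            }
        }
        return total;
    }

    #endif

Multiply this by each feature, each algorithm variant, and each platform, and the testing matrix grows accordingly.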

BSD users also get the generic fall back mode because while Clang LLVM is reasonably GCC compatible, one of its weak areas is poor support for compiler extensions. They try to make up for it with automatically using the instructions if present. This is good when it works, but usually it doesn't because as I said, you actually need to write a different algorithm in many cases, one that can't even be expressed in standard C (which is a lowest common denominator language, for the sake of portability).

Now take all of these different versions on different platforms with different compilers, and realize that each of these has to be not just written and maintained, but also tested. You need to have a very thorough automated testing system for each of these combinations.

ARM has been a different story. I have 32 and 64 bit versions, but as an application (as opposed to OS) writer, ARM has been a pleasure to work with compared to x86. OS initialization has been more difficult on ARM (due to lack of boot standards on different boards) but that isn't something that has affected me as an application level developer.

thames

x86 Train Wreck

This is something Intel and AMD should have started doing 10 years ago. The architecture is an absolute train wreck of optional features.

To give an example, each CPU has a list of feature flags that your application should supposedly look at to see if it has that feature before deciding whether or not it can use it. I just had a look at the CPU that I'm writing this post on, and there are 174 of them. 174.

I am trying to imagine a world in which any software developer is going to examine each PC their software runs on and decide which of 174 different alternate implementations to use. Realistically, you are going to look at most for one or two features that could make a big difference and ignore the rest, which may as well not be there.

You are also going to ignore new features until they have been around long enough that the hardware that supports them is the vast majority of the installed base, or at least the majority for whatever your market is.

If you are using say SIMD, that means targeting SSE4.2 and providing a non-SIMD fall-back. AVX was actually slower than SSE4.2 on at least some CPUs (good luck figuring out which ones), AVX512 is still on only a small share of hardware, and Intel have focused on slicing the x86 market into ever thinner segments of inconsistent feature sets in order to try to get the maximum revenue from each segment.
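For anyone wondering what "look for one or two features" means in practice, here's an illustrative sketch using the GCC/Clang x86 builtins (the worker functions are placeholders): check the flag once at startup, set a function pointer, and ignore the other 170-odd flags.

    #include <stdio.h>

    static void work_sse42(void)   { puts("SSE4.2 path"); }
    static void work_generic(void) { puts("generic fall-back path"); }

    /* Selected once at startup; every later call goes through this. */
    static void (*work)(void);

    int main(void)
    {
        __builtin_cpu_init();   /* GCC requires this before the check below */
        if (__builtin_cpu_supports("sse4.2"))
            work = work_sse42;
        else
            work = work_generic;

        work();
        return 0;
    }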

The idea that x86 offers great backwards compatibility doesn't stand up to scrutiny. For example, I have an old, functioning, PC that won't boot a modern mainstream 32 bit Linux distro (of the few 32 bit ones that remain) because it doesn't have the "popcnt" instruction. For those not familiar with it, popcnt counts the number of bits set in a word. Even software that is as widely used as the Linux kernel cannot be bothered to work around all the myriad different x86 feature sets; they just pick a few and go with it, and if there is hardware still out there that doesn't work with that, then too bad.

The question today is whether Intel and AMD have left things too late. Other architectures without all that baggage may be able to make better use of their available space to offer better performance, lower power consumption, lower price, or some combination of the three to make life difficult for Intel and AMD.

Tech giants set to pay through the nose for nuclear power that's still years away

thames

Re: Is it genuine?

The point of making the reactors smaller is to be able to assemble as much as possible in the factory rather than having really large bits that have to be assembled on site. If the Rolls Royce SMR is transportable in assembled form then the larger generating capacity may be an advantage in terms of cost per MW.

There are significant economies of scale when it comes to the civil works - roads, sewers, transformers, switch yards, transmission lines, cooling water inlet and outlet channels, etc., which make larger plants more economic. This is in addition to being able to make better use of manpower when a number of units are co-located. Scattering small plants all over is very sub-optimal.

The really small SMRs seem to be a lot less successful in terms of actually getting bought by customers than the larger ones.

thames

Re: Submarines and ships

Rolls Royce build reactors for UK (and soon Australian) submarines, and they are promoting an SMR design. They hope to sell this to UK buyers, and are currently on the government's short list of vendors.

thames

Re: Is it genuine?

BWXT's (formerly Babcock and Wilcox) Canadian subsidiary are working as a subcontractor to build components for Hitachi for the latter's SMR. Hitachi's first 300MW SMR is currently under construction near Toronto (next to the existing Darlington nuclear power plant) and is scheduled to start generating in 2029. An additional three 300 MW units are planned to go next to it, for a total of 1,200 MW.

Saskatchewan plan to build the same design in the mid-2030s, and there are plans for sales in Europe, including Sweden, Poland, and Estonia in the same time frame.

BWXT Canada seems to be a partner with Hitachi in all this. I don't know if the American branch of BWXT are involved in SMRs in any way.

As for Rolls Royce, they are currently pursuing sales for their 470 MW SMR, but government decision making in the UK is glacial, taking longer to decide on what SMR to favour than Canada is taking to think about the idea and then actually build one. Rolls Royce may therefore decide to give up on the market.

The UK plans to make a decision by the end of 2029 (the same year the Canadian plant starts up). They are currently looking at proposals from Hitachi (a clone of the Canadian plant), Holtec, NuScale Power, Rolls-Royce SMR and Westinghouse. Of those, the only one that actually has any units under construction is Hitachi (in Canada).

China’s infosec leads accuse Intel of NSA backdoor, cite chip security flaws

thames

Re: Regardless of the veracity of the allegations...

I think the focus is more on RISC-V and Loongson rather than x86 clones. The only reason to use x86 is either for Windows compatibility or to use commodity hardware designed to run it. Windows is just as problematic or even more so when it comes to US back doors, so plans are to phase it out as well (this is already well under way).

Current policy in China is to phase out both x86 and Windows in all government and critical infrastructure use and replace them with RISC-V or Loongson and Linux. This is for a combination of security and to simply get away from all the massive delays and pointless paperwork and expense associated with dealing with American companies and their US sanctions compliance bureaucracy. Private industry and the consumer market are expected to follow once the economies of scale that come from supplying the government market ramp up.

Europe (EU) and India are going the same way, although they aren't as far along. With both of these it's driven by a desire to get away from increasing US control and also to help develop their own technology industries. There has long been an entire industry in Europe based around making and selling high tech kit that is certified free of US content in order to avoid dealing with US bureaucracy on re-export of goods which would otherwise include US content as one or more of its components.

Google hopes to spark chain reaction with nuclear energy investment

thames

Re: Nuclear power for everyone?

I'm not sure what you mean by "Canada threatening to shut down their CANDU reactors". Existing reactors are undergoing a program of refurbishments, a few at a time. There are no plans to shut them down permanently though, indeed they are building more. They produce the majority of electric power in the province of Ontario (40% of the total population of Canada).

Currently cobalt 60 (half the world's supply) is made at the Pickering and Bruce B plants. There are plans to add production at Darlington to accommodate refurbishment programs at the other plants. They will also add production of molybdenum 99 while they are at it.

I think they just shove a dummy fuel bundle containing the source cobalt or molybdenum through one of the fuel channels and catch it when it comes out the other end (the reactors are refuelled continually by shoving fuel bundles through the reactor, they don't shut down for refuelling). I suspect the main thing they need to do is to add special handling systems for the irradiated medical isotopes when they come out.

Overall it's a nice little earner, and it's done as part of the normal operation of a generating plant instead of building a dedicated reactor to do this. The design of the reactor makes it fairly straightforward to do.

thames

Re: Nuclear power for everyone?

Take uranium reserve and resource estimates with a very large grain of salt. Currently uranium is so cheap and abundant that not a lot of exploration for it is taking place outside of existing producing areas. Very high grade deposits in Canada and Kazakhstan currently supply much of the market, so lower grade deposits elsewhere can't compete in the market.

Secondly, uranium is so cheap that it hasn't been economically worthwhile to use it efficiently. Instead it's mainly used in a once through fuel cycle and the used fuel then stored until such time as uranium prices rise enough to make recycling it profitable.

When used fuel comes out of a light water reactor, it still has most of its energy in place. Recycling (or reprocessing) involves separating out the plutonium and the uranium from the other accumulated elements, and remixing them into mixed oxide (MOX) fuel. Nearly all of the radioactive elements in spent fuel are either plutonium or left over U235, both of which can be incorporated into MOX fuel. The stuff that can't be reused is a collection of minor elements that would either absorb neutrons or otherwise be undesirable to have in fuel.

France currently use MOX fuel as 30 per cent of their fuel load in their light water reactors. It's a policy decision on their part to make themselves more energy self sufficient, as a once through fuel cycle would be cheaper.

Plutonium comes in two broad types, reactor grade and bomb grade. The difference is the mixture of isotopes. Just like uranium, plutonium has several isotopes. The isotope used for bombs is Pu 239. Weapons grade plutonium has to be specially made by cycling fuel through a reactor very quickly to minimize the build up of isotopes other than Pu 239 and is normally made in reactors which are specially designed for this. UK MAGNOX reactors were military reactors which produced electric power as a byproduct.

Reactor grade plutonium is what comes out of a normal power reactor. It is considered to be infeasible to separate out the Pu 239 from the other isotopes in spent civil fuel.

In a normal light water reactor about a third of the energy produced comes from plutonium which was created in the reaction and then "burned up" as part of the normal fuel cycle. In CANDU reactors (natural uranium reactors originating in Canada but also used around the world) two thirds of the energy comes from plutonium.

As for thorium, using it does not require exotic reactor designs. Existing CANDU designs can use thorium with minor modifications. India's existing CANDU derivatives currently use thorium as part of their normal fuel load, as they say its characteristics give certain technical operating advantages.

The reason that thorium is not currently used as the primary fuel in reactors is primarily that you need a reactor that makes efficient use of fuel (such as CANDU and derivatives), and because thorium is not a direct substitute for uranium. You need to use it as part of MOX fuel, with some plutonium (or medium enriched uranium) in the mixture to kick off the reaction, and then using uranium 233 transmuted from thorium to sustain it.

Thorium therefore faces the same problem that uranium based MOX fuel faces, which is that uranium is currently so cheap that there is no economic case for using thorium. If uranium prices rise enough then thorium might become competitive. Until then however, why bother?

The reason that molten salt reactors get talked about in conjunction with thorium is simply because there are certain designs which may have the potential to produce more U233 than they consume, resulting in them effectively being a sort of breeder reactor. Thorium cycles in CANDU reactors on the other hand require a continual input of small quantities of either plutonium or enriched uranium to start the reaction.

As for molten salt reactors in general, I'm not a fan. They sound unnecessarily complex and I'm not convinced they offer any real practical advantages.

China again claims Volt Typhoon cyber-attack crew was invented by the US to discredit it

thames

Denials by anyone are not really very plausible. We know with certainty that the US engage in cyber attacks on a massive scale (including against supposed "allies"), so it would be very surprising if the Chinese were not equally active. Even if "Volt Typhoon" is fiction, Chinese intelligence must be doing something, because that's what big powers do.

It's not just those two either, recent inquiries in Canada have found that countries like India and Saudi Arabia have very "robust" (extending all the way to assassination) operations in Western countries such as Canada, and may be a greater threat in that respect than China or Russia (or the US). The assassination of journalist Jamal Khashoggi in Istanbul was preceded by Saudi cyber attacks on his associates in Canada to gain knowledge of his plans and movements. Indian intelligence have been engaged in a campaign of political murder, coercion, and extortion in Canada, and cyber attacks originating in India are accompanying that.

For the vast majority of the world who don't live in one of these countries, you're going to get hacked by all sides. There are no "friends" when it comes to things like this. Your government aren't going to do anything either because they can't afford to alienate major powers or because those countries are simply going to ignore their complaints anyway.

The only realistic approach for you as a non-spy is to assume that you are going to be a target of everyone and to take security seriously. I won't bother detailing what needs to be done with respect to security; people know what needs to be done, they just stick their heads in the sand and find excuses not to do it.

Switching customers from Linux to BSD because boring is good

thames

Re: Seems to work for the basics

It might help to actually read a post before you hit the reply button. The standard shell for /bin/sh in Ubuntu and Debian is Dash - Debian Almquist Shell, which is a POSIX standard shell.

The standard interactive shell is Bash (/bin/bash). If your shell script just asks for /bin/sh you get Dash, not Bash. If you open a terminal to work interactively, or your script asks for /bin/bash, you get Bash. The majority of Linux distros are Debian derivatives, so most of them do the same. I can't speak for Red Hat or Suse derivatives on this.

However, if you google for answers to questions about shell scripting, nearly all the answers you get will assume you are using Bash, even if they do not explicitly say so. Therefore if you decide to use pretty much any BSD you need to know your shell scripting well enough to distinguish between standard shell syntax and Bash extensions.

The Bash extensions are very nice to have, but they are Bash specific and don't work in Dash, let alone any of the BSD shells. This is something that anyone trying out BSD needs to be aware of. Therefore don't copy-paste shell scripts from the Internet without having a good look over them for Bash specific extensions.
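
To illustrate with a trivial example of my own: this version uses Bash-only syntax, and Dash will stop with an error at the "[[":

    #!/bin/bash
    # [[ ]] and == glob matching are Bash extensions.
    if [[ "$1" == *.txt ]]; then
        echo "text file: $1"
    fi

The portable equivalent, which works in Dash, Bash, and the BSD shells, uses a POSIX case statement instead:

    #!/bin/sh
    # POSIX case does the same glob match without any extensions.
    case "$1" in
        *.txt) echo "text file: $1" ;;
    esac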

thames

Re: Seems to work for the basics

The installation stops at the point where it asks whether to install, upgrade, autoinstall, or shell. I have just now tried this with OpenBSD 7.6 (the install screen is still looking at me). It appears simply not to be accepting any keyboard input. This would seem to imply that I can't fiddle with the boot options from the console either.

I'm using Virtual Machine Manager 4.1.0 on Ubuntu 24.04, installing from the ISO (install76.iso amd64), and accepted the standard defaults.

I have just recently started using KVM. I have a new automated testing system and used the opportunity to switch from VirtualBox, having used the latter for many years (the testing scripts were designed to work with its peculiarities).

OpenBSD would not install in VirtualBox from version 7.3 onwards. I can't recall the details, but the note that I keep with the ISOs simply says that it would not install. However, version 7.2 would install, so I was able to simply install 7.2 and run sysupgrade from there without any problems. That option doesn't work with KVM, however.

If you have any suggestions I would be happy to hear them. I plan on working on this problem today, and if I have any success I will make another post on this thread.

thames

Seems to work for the basics

I have been using FreeBSD and OpenBSD as test targets along with all the major Linux distros and Windows for quite a few years now running in VMs on an automated testing system. At first I used Virtualbox, but have lately switched over to KVM. The test method involved starting the VM, loading the packages over SSH, compiling the source, running the unit tests and benchmarks, and then downloading the results. I have this all automated.

For the most part running FreeBSD was more or less like running Linux, with two exceptions. One was that the standard shell isn't Bash, and so doesn't have the Bash extensions. This meant that any shell scripts which needed to run on the target (managing installation, tests, and things like that) couldn't have any Bash extensions in them. If you google for answers about how to do any shell scripting, virtually all the answers you get assume that you are using Bash. This means that you need to know your shell scripting quite well in order to stick to the older (and less convenient) syntax supported in BSD. It's doable, but it's something that you need to take extra effort over.

The other major difference was the C compiler. Straightforward vanilla C does not expose many modern CPU features, which means that you either need to write those bits in assembly language, use compiler extensions, or not use those CPU features. For the most part LLVM supports the GCC extensions, but in some places that support is still lacking. I write fallback vanilla code anyway, so my code does compile with LLVM, but the program takes a significant performance hit in doing so. This will only affect some types of applications, so you need to benchmark this yourself to see if it affects you.
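
One way of handling this, sketched from memory rather than taken from any real build system (the file name and macro are made up for the example), is to probe the compiler from the build script and select the fallback automatically:

    # Try to compile a test program that uses a GCC-style builtin.
    # If the compile fails, build with the vanilla C fallback instead.
    cat > conftest.c <<'EOF'
    int main(void) { return (int)__builtin_popcountll(255ULL); }
    EOF
    if ${CC:-cc} -O2 -o conftest conftest.c 2>/dev/null; then
        CFLAGS="$CFLAGS -DHAVE_GCC_BUILTINS"
    fi
    rm -f conftest conftest.c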

Even when compiling without extensions in both cases, LLVM used to produce drastically slower code than GCC (and slower than Microsoft C). Over the years, however, LLVM performance has gradually improved to the point where it's now not far behind GCC, and the Microsoft compiler now produces the slowest code of the three on average. What I have learned though is that different compilers have different strengths and weaknesses, so you need to benchmark your own specific application rather than depending on generalizations (I'm mainly doing numeric computations).
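
The benchmarking itself doesn't have to be elaborate. Something along these lines is enough for a first impression (bench.c and the flags are placeholders for your own code and options):

    # Build the same benchmark with whichever compilers are installed
    # and compare the timings.
    for cc in gcc clang; do
        command -v "$cc" >/dev/null 2>&1 || continue
        "$cc" -O2 -o "bench_$cc" bench.c || continue
        printf '%s:\n' "$cc"
        time "./bench_$cc"
    done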

Using OpenBSD was a lot like using FreeBSD, but starting with version 7.3 it would no longer install in VirtualBox. Instead I had to install 7.2 and upgrade from there. Since switching to KVM I can't get it to install regardless of what version I try. Version 7.6 just came out, so I will be giving it a try, but I'm not holding out a lot of hope. Apparently this is a very well known problem, and there are various blog posts around the Internet offering various devious ways of hacking around it. If I get some time I will try some of them, but none look simple. If you plan on running BSD in a VM, you may need to try various hypervisors to find one that works for you.

The main reason that I use BSD is to be able to do testing on a different platform and compiler while still remaining similar enough to Linux to make the additional effort minimal. Testing in different environments (and with different compilers) helps turn up bugs that would otherwise perhaps not surface in a testing environment. When I first started adding different environments it turned up all sorts of bugs which I would not otherwise have found. This was a fairly common practice in the late 20th century - it's not that one compiler is "better" than another at finding bugs. It's just sufficient that they are different. If you are writing software for Linux and do any automated testing it's worth adding at least one of the BSDs to your list of testing VMs even if you don't foresee much of a market there in terms of number of users.

After 27 years, Tcl/Tk 9 finally arrives with 64-bit power and Zip file magic

thames

Re: Also in Python

Here's the overview of how Tkinter works, from the Python documentation:

https://docs.python.org/3.12/library/tkinter.html

When your Python application uses a class in Tkinter, e.g., to create a widget, the tkinter module first assembles a Tcl/Tk command string. It passes that Tcl command string to an internal _tkinter binary module, which then calls the Tcl interpreter to evaluate it. The Tcl interpreter will then call into the Tk and/or Ttk packages, which will in turn make calls to Xlib, Cocoa, or GDI.

It appears to me that there must be a Tcl interpreter embedded in the Tkinter package. You as a Python programmer don't see any Tcl, but it's still there underneath.
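
You can check this for yourself. tkinter.Tcl() creates a bare Tcl interpreter without the Tk windowing side, so the following one-liner should print the embedded Tcl version even on a machine with no display:

    # Ask the Tcl interpreter embedded in Python's tkinter for its version.
    python3 -c 'import tkinter; print(tkinter.Tcl().eval("info patchlevel"))'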

thames

Also in Python

The "standard" GUI for Python is Tcl/Tk. It's part of the standard Python library, although many Linux distros make it an optional package.

Here's a quote from the Python documentation about it.

Tk/Tcl has long been an integral part of Python. It provides a robust and platform independent windowing toolkit, that is available to Python programmers using the tkinter package, and its extension, the tkinter.tix and the tkinter.ttk modules.

The tkinter package is a thin object-oriented layer on top of Tcl/Tk. To use tkinter, you don’t need to write Tcl code, but you will need to consult the Tk documentation, and occasionally the Tcl documentation. tkinter is a set of wrappers that implement the Tk widgets as Python classes.

I've used it when I needed a cross-platform GUI. It worked just fine for the job in question.

While HashiCorp plays license roulette, Virter rolls out to rescue FOSS VM testing

thames

Re: VMM

You are correct that snapshot management is there in KVM VMM on the console window (the window that opens when you click the console button).

If you right-click on the snapshot number you can start or delete that snapshot. I had not noticed that I could bring up a context menu with a right-click; I had thought I could only see the snapshots listed.

However, I don't see a way to create a snapshot using VMM. If it's there, it's not obvious to me. If I can't create a snapshot using VMM, then I don't see the point of deleting them using VMM, at least not for my use cases. Creating and deleting snapshots via the command line is not difficult however.
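
For reference, these are the virsh commands I mean (the VM and snapshot names are placeholders; on the command line you can pick your own snapshot names):

    # Create a named snapshot of a VM, then list, revert, and delete.
    virsh snapshot-create-as testvm clean-install --description "fresh install"
    virsh snapshot-list testvm
    virsh snapshot-revert testvm clean-install
    virsh snapshot-delete testvm clean-install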

Overall I am still happier with KVM than I was with Virtual Box, and I find that VMM does pretty much everything that I want from a GUI tool.

I will probably write a bash script with a Zenity GUI to handle any routine VM chores involving snapshots (I need new snapshots after doing updates). I had been thinking of doing the same for VirtualBox, as even if the GUI can do it, it still needs multiple steps and it's possible to make mistakes in the process. Automating it via scripts is the way to avoid that.
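
A minimal sketch of the sort of script I have in mind, assuming virsh and Zenity are installed (the date-stamped naming scheme is just my own convention):

    #!/bin/sh
    # Pick a VM from the list of defined domains, then snapshot it
    # with a date-stamped name, e.g. after applying updates.
    vm=$(virsh list --all --name | zenity --list --column "VM" --title "Snapshot which VM?")
    [ -n "$vm" ] || exit 0
    virsh snapshot-create-as "$vm" "updates-$(date +%Y%m%d)" &&
        zenity --info --text "Snapshot created for $vm"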

thames

VMM

I used Virtual Box for years, but have recently switched to KVM. I used both for automated testing of software.

Both have GUIs for creating and managing the VMs, and both have command line tools which can do everything the GUI can plus a great deal more. I generally use the GUI to setup the VMs, do some initial testing of the VMs, and to debug my automated testing set up. I use the command line tools to script all of the actual software testing.

As to how they compare, I found both to be roughly equivalent for what I use them for, although KVM seems to be better maintained as a project than VB has been lately (things would stop working after an update and Oracle would take a long time to issue a fix).

I don't know the name of the VB GUI, but the one for KVM is called Virtual Machine Manager (VMM). I use only a fraction of the features in the VB GUI, but everything that I wanted from the GUI was there.

KVM VMM has fewer features, which is both a plus and a minus. On the plus side it means that I was able to figure it out fairly quickly, as the number of options is much more limited. On the minus side, VMM doesn't create or delete snapshots in the GUI itself; you have to do that from the command line. However, that has not turned out to be a big deal so far.

With VB you can give snapshots a meaningful name, while with KVM they're simply a random integer. While the meaningful names were nice to have in VB, so far the lack of them hasn't been a big deal with KVM.

The story mentions networking. My test scripts SSH into each VM as it is running to run the tests on the software under development. The default way that VB does networking (localhost plus port) results in some rather unusual address formats and a bit of going over the ssh man pages to figure out how to use it. It also means that the address format for VMs is different from the address format for actual hardware over the network.

With KVM I installed the optional "libnss" and found that addressing VMs is done by default in the normal way the same as for actual hardware.

It may be possible that VB can be made to work the same way as KVM in this respect, but you need to be a networking expert to stray from the defaults for either, as neither one documents this area very well. They just assume that you know lots about network stacks and can figure it out for yourself. When I started using VirtualBox, however, I was too busy to spend the time to deviate from the defaults, and once I got things working it was less painful to just leave it as-is than to change everything. With KVM I just installed libnss and everything worked the way it ought to.
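
To make the difference concrete, here is roughly what the two look like in practice. The port and VM name are examples, and if memory serves the Debian/Ubuntu package is actually called libnss-libvirt, which adds a module you enable on the hosts line in /etc/nsswitch.conf:

    # VirtualBox default NAT: forward a host port to the guest's SSH port
    # first, then connect to localhost on that port.
    VBoxManage modifyvm testvm --natpf1 "guestssh,tcp,,2222,,22"
    ssh -p 2222 user@127.0.0.1

    # KVM/libvirt with the NSS module enabled: guest names resolve
    # directly, the same as for real hardware on the network.
    ssh user@testvm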

I test using all the major Linux distros, BSD, and Windows, so I have plenty of experience installing them in VMs. I've looked at both Vagrant and Virter, and didn't really see any advantage to using them. Things may be different if I were responsible for setting up and managing VMs for large development teams for multiple projects. You really have to dig into how you would use them in your particular application to see if they really offer any advantage.

As for vmshed, so far as I can see it's meant for testing the VMs themselves, as opposed to just using the VMs to test software applications. If you are managing a project like Virter then you probably need vmshed. I don't think the way it works however is suitable for normal application testing. So far I haven't seen anything useful for running application unit tests on a combination of VMs and actual hardware that is suited to being hosted on a regular desktop for use by a single developer or small team (as opposed to cloud services). I have my own custom solution that works in my use case.

Torvalds weighs in on 'nasty' Rust vs C for Linux debate

thames

Re: My understanding...

This is pretty much my understanding of it as well. Rust is not simply C with some stuff added, so code has to be written to interface between the two. Every time some C code is changed, Rust code which uses it may need to change, even if the function call interface is the same. The question is who is made responsible for those changes.

There was another large C project which had some Rust dramatics a while ago, where a Rust enthusiast had promised that his Rust bit of the project wouldn't affect anyone else, and was allowed to introduce Rust on those terms. However, he found the maintenance of his Rust code a major burden and so tried to push that work off onto everyone else, under the guise of "just needing them to provide information", when in reality he wanted them to maintain the Rust code so he could go on writing new Rust code instead of spending all his time chasing a moving target. I think he wrote a small program to automatically re-write the Rust interfaces, but he wanted everyone else to be responsible for writing the necessary inputs to it, which meant they needed to do a lot of extra work when Rust had been introduced on the understanding that this wouldn't happen.

There was massive push-back from everyone else, and the Rust enthusiast ended up bailing out of the entire project in a huff while blaming everyone else. Unfortunately I can't recall the name of the project at this time, although I think it was covered in El Reg.

One of the major problems with trying to use Rust in a C program is apparently that while Rust does offer interface options for working with C, you lose most of the supposed advantages of Rust if you make use of them, including many of the memory safety features. And once you do that, why bother with Rust?

I suspect that what is really needed, rather than a completely new language, is for someone to create a new variant of C, one without a lot of the baggage associated with C++, which just focuses on the major security pain points in C as experienced in the Linux kernel. This would allow a gradual re-write of the kernel without the dislocation of moving to a completely new language.

After the BitKeeper fiasco Torvalds went away for a while and came back with Git. Perhaps he could do the same with C.

Desktop hypervisors are like buses: None for ages, then four at once

thames

Re: Hyper-V Type1

KVM can be type 1 or type 2, depending on how you use it. Type 1 is basically a hypervisor with no underlying OS, while type 2 runs as an application managed by an OS. Type 1 hypervisors may not need a separate OS to run them, but that's because they have all the features of an OS built in already. What you are using it for and how you are using it determines which type you need.

KVM is a Linux kernel module, so if used in a minimal installation it can be seen as type 1, but if you install it on your regular Debian (or whatever) desktop OS, then it could be seen as type 2.
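
You can see this on any Linux box with KVM installed; there's a core module plus a vendor-specific one (kvm_intel or kvm_amd, depending on the CPU):

    # List the loaded KVM kernel modules.
    lsmod | grep kvm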

In reality it's both and neither: a hypervisor built by people who didn't care what label the marketing people wanted to give it.

As mentioned in another post, I am in the midst of switching from using VirtualBox to using KVM as a desktop hypervisor, and have found the two to be more or less equivalent for my purposes (software testing). Because KVM is part of the Linux kernel (it's a kernel module), I expect that it will be much better maintained than VirtualBox has been under Oracle's care.

thames

Re: Hyper-V Type1

These are "desktop hypervisors", just as it says in the title. This fills a different use case than a server.

They're often used by, for example, software developers who want to test on different operating systems or OS versions without setting up separate hardware. You would fire up the VM, run your tests, and shut it down again. Most of the time you don't have a VM running at all.

Another use case is where you have one Windows program that you need to run occasionally on say Linux. In the old days you would dual-boot. These days you would run it in a VM, as and when you needed it.

In these use cases the minor efficiency differences between hypervisor types are irrelevant in practical terms. Furthermore, modern hypervisors provide special virtualisation drivers which simply pass through I/O requests instead of emulating hardware, so there's very little efficiency lost there.

thames

I'm in the middle of replacing a Virtual Box set up for software testing on a desktop with KVM, and so far everything has been going well. All of the functionality that I used in Virtual Box is present in KVM, and KVM has some features that I am digging into to see if they make life easier than was the case with Virtual Box.

I had been procrastinating on this project for a while, but VB had become too unreliable (e.g. it would stop working after an OS update and it would take a long time for Oracle to come out with a fix), so I finally found the motivation to do it. So far it has been going very smoothly and I'm kicking myself for not doing it sooner.

For software testing I have an automated system which starts the VM, uploads the software via SSH, compiles it, runs the tests and benchmarks, downloads the results, checks the results, and then goes on to the next OS target.

For this you need a command line interface to the VM manager so you can script the whole process. With VirtualBox this is "VBoxManage"; with KVM it's "virsh". Everything that I used in VBoxManage has an equivalent in virsh that works more or less the same way. So "VBoxManage startvm <vmname>" and "virsh start <vmname>" do the same thing, etc. Some of the commands may be a bit different (I can't recall exactly), but not to a degree which would make any real difference in usability.
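
From my notes, this is roughly how the common operations line up; the VM name is a placeholder and I may be off on minor details:

    # Start a VM without opening a console window.
    VBoxManage startvm testvm --type headless
    virsh start testvm

    # Ask the guest OS to shut down cleanly.
    VBoxManage controlvm testvm acpipowerbutton
    virsh shutdown testvm

    # Hard power-off.
    VBoxManage controlvm testvm poweroff
    virsh destroy testvm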

I don't have experience doing this with VMWare Workstation, so I'm a little surprised to read that they have apparently just introduced "vmcli". I was pretty sure they had something called vmrun which did the same thing, or was that something else, or did it just have limited functionality?

China claims Starlink signals can reveal stealth aircraft – and what that really means

thames

Re: Starlink? What about starlight?

This was an experiment to show that Starlink satellites in particular could potentially be used as sources of ambient radio signals for passive radar. If there are also other signal sources which could also be used, then that would have to be a different experiment.

thames

Re: I'm skeptical

Over-the-horizon systems like Jindalee are high frequency by OTHR standards, but they're still very low frequency by conventional radar standards. Since low frequency conventional radar can detect stealth aircraft just fine, it's theoretically possible that Jindalee can as well, although I haven't read anything which states that specifically.

Present day stealth doesn't work when the radar wavelength is a significant fraction of the size of the entire aircraft, as the whole aircraft becomes the reflector rather than individual parts of it. This is why the supposedly "obsolete" radar systems still operated by countries which couldn't afford new ones can detect stealth aircraft.

However, antennas get bigger as frequencies get lower, and you can't fit an antenna for such a radar into a missile to use it for conventional semi-active or active homing (where the missile uses the radar reflections directly to home in on the target). This is what is meant when people say they "can't be used for targeting".

However, military radars are not all used for targeting anyway. There are warning, search, and targeting radar systems, and they tend to operate on different frequencies and have different roles in the overall process.

The Jindalee system cannot be used for targeting against any aircraft, stealth or non-stealth, but that doesn't mean that it hasn't been something very useful for Australia to have in order to see whether something is coming their way from the northern direction.

Canada uses OTH radar as well, pointing off the east coast, and has been developing one which can work in Arctic conditions. Until now, interference from the Aurora Borealis has prevented OTH radar from working in the Arctic to detect Russian bombers coming over the north pole, so Canada operates a chain of conventional radar stations in the Arctic for that purpose (the North Warning System), and these are coming up as due for replacement. However, Arctic OTH is now apparently solvable with enough computer processing power, and so an Arctic OTH radar is close to deployment.

Passive radar based on ambient radio signals has faced a similar problem: it requires massive amounts of signal processing to make it work. This sort of signal processing is, however, now practical according to various reports. Lots of people are doing research into this area, and what these Chinese scientists have done is show that Starlink satellite signals may be viable as one such source of ambient radio.

thames

Re: Cell phone tower signals

This was a US B-2 bomber which was tracked by a British Rapier surface-to-air missile system as it flew over the Farnborough Air Show. They detected and tracked it using the infra-red sensors built into the launch system. Because the missile was command guided it wouldn't have needed a radar return signal to home in, had it been launched.

In the version of the story that I read, the Rapier vendor (BAe originally, now MBDA) recorded the tracking incident and was playing it on a continuous loop at their sales booth, until the US made a big fuss with the air show hosts, who then made them stop.

Again, what I heard was that they "cheated" a bit because they knew where to look in the first place and so could point the missile launcher (which had the sensor system built in) in the right direction to pick it up.

And this is why a radar system that can't be used to provide terminal homing signals is still useful. If you can tell that something is there and give a rough location, you can then start working on the problem of fine tuning the location using other sensors that have a much narrower field of view.

thames

Re: I'm skeptical

As mentioned by others, stealth mainly works by minimizing reflections back to the source, not by absorbing the signals. The angles are calculated to reflect the radar "pings" off in another direction.

Newer submarines such as the German 212 CD are starting to do the same thing for sonar. They have long used acoustic tiles to try to absorb signals, but this is much less effective than reflecting the sonar pings away in another direction (the new submarines of course will use both methods). These newer submarines use a diamond shaped outer hull instead of a cylindrical one conforming to the shape of the pressure hull.

The idea of using separate transmitters and receivers for radar in order to detect stealth planes is not new. This is called bistatic radar (an old idea which has become new again) and has been known about for years and has been tested with things like television and cell phone tower signals. They look for the "holes" in radio signals rather than looking for reflections.

What these Chinese scientists have done is shown a proof of concept that it can be done using Starlink signals. Other satellite constellations of course could be used as well.

I have to disagree with the author of the Reg story, however, with regard to whether this is useful. If you know that something is there and have a rough location and direction for a target, you can start to bring in assets such as fighters or drones to pinpoint the location and use other shorter-ranged detection means for terminal homing. The big problem has been knowing whether there is anything there to find, and this area of research (other people are working on the problem as well) tries to address that.

It's like the situation with over the horizon radar. It can't be used for targeting either, but that doesn't mean that it's not extremely useful. This is why many billions of dollars are still being spent on it today.

One other advantage of using satellite signals (or other bistatic radar systems using ambient signals) is that the receivers are much harder to knock out, because they don't transmit, they just receive. American air warfare doctrine, for example, places heavy emphasis on suppressing enemy air defences as the first step in any war, and a big part of this is finding and destroying all radar transmitters when they turn on. Using ambient radio transmissions for air warning throws a wrench into the works in this regard. This is the aspect of bistatic ambient radar that really has people interested.
