The UK government, with its reversals on climate policy and commitment to oil drilling and air pollution, usually seems to be pro-apocalypse. But lately, senior British politicians have been on a save-the-world tour. Prime Minister Rishi Sunak, his ministers, and diplomats have been briefing their international counterparts about the existential dangers of runaway artificial superintelligence, which, they warn, could engineer bioweapons, empower autocrats, undermine democracy, and threaten the financial system. "I do not believe we can hold back the tide," deputy prime minister Oliver Dowden told the United Nations in late September.
Dowden's doomerism is supposed to drum up support for the UK government's global summit on AI governance, scheduled for November 1 and 2. The event is being billed as the moment that the tide turns on the specter of killer AI, a chance to start building international consensus toward mitigating that risk. The summit is an important event for Sunak, who has trumpeted his desire to turn the UK into "not just the intellectual home, but the geographical home of global AI safety regulation," along with broader plans to create a "new Silicon Valley" and a "technology superpower." But just over a week before it begins, the summit looks set to be simultaneously doom-laden and underwhelming. Two sources with direct knowledge of the proposed content of discussions say that its flagship initiative will be a voluntary global register of large AI models: an essentially toothless initiative. Its ability to capture the full range of leading global AI projects would depend on the goodwill of large US and Chinese tech companies, which don't generally see eye to eye.
How is the rest of the summit shaping up? Sources close to negotiations say that the US government is annoyed that the UK has invited Chinese officials (and so are some members of the UK's ruling Conservative Party). The attendee list hasn't been released, but leading companies and investors in the UK's domestic AI sector are angry that they've not been invited, cutting them out of discussions about the future of their industry. And they and other AI experts say that the government's focus on the fringe concern of AI-driven cataclysm means the event will ignore the more immediate real-world risks of the technology, as well as all of its potential upsides.
"I don't know what the UK is bringing to the table in all this," says Keegan McBride, lecturer in AI, government, and policy at Oxford University's Internet Institute. "They're so narrow in their focus." He and others in the British AI scene argue it would be better for the government to instead look at how it can help British AI companies compete at a moment of rapid change and huge investment in AI.
The summit agenda says that it will cover two types of AI: that which has narrow but potentially dangerous capabilities, such as models that could be used to develop bioweapons, and "frontier AI," a somewhat nebulous concept that the UK is defining as huge, multipurpose artificial intelligence that matches or exceeds the power of large language models like the one behind OpenAI's ChatGPT. That filter automatically narrows the list of attendees. "Only a handful of companies are doing this," says McBride. "They're almost all American or Chinese, and the infrastructure that you need to train these sorts of models are basically all owned by American companies like Amazon or Google or Microsoft."
WIRED spoke to more than a dozen British AI experts and executives. None had been invited to the summit. The only representative of the UK's AI industry known to be attending is Google DeepMind, which was founded in London but acquired by the search giant in 2014. That's causing a lot of frustration.
"A lot of modern-day AI was developed in the UK," says Sachin Dev Duggal of Builder AI, an AI-powered app development startup based in London. "On one hand we'll say we're the AI center of the world, but on the other we're saying we don't want to trust our own CEOs and entrepreneurs or researchers to have a more prevalent voice. It doesn't make sense."
Much of the world's cloud computing and social media infrastructure is owned by US companies, which already puts UK companies, and British regulators, at a disadvantage, Duggal says. If industry-shaping deals get done without domestic businesses having any input, the next generation of tech could also end up being concentrated in the hands of a few huge US companies. "There's a group of us that are pretty concerned," he says.
Duggal's view was shared by others in the UK's AI industry, who complain that the obsession with frontier models misses "everything behind that frontier," as one executive at a unicorn AI startup, speaking anonymously because they still hope for a summit invite, says. That includes every startup, every academic team developing their own AI, and every application of the technology that's currently possible, the executive says. The frontier focus also excludes open source language models, the best of which are seen as only slightly behind the best available systems but can be downloaded and used, or misused, by anyone.
The UK government has promised to invest more than $1 billion in AI-related initiatives, including funding to develop the local semiconductor industry, a new supercomputer in Bristol to support AI research, and various task forces and promotion bodies. How much they'll help remains to be seen; critics point out that in global terms, it's not a great deal of money. Powering up both the chip and AI industries with a single billion dollars, while starting well behind the leaders in the US and Asia, will be challenging. And the funding is not necessarily flowing into British companies. In May, the CEO of Graphcore, a Bristol-based startup that makes specialist chips for AI, asked the government to earmark some of the funds for UK manufacturers. That didn't happen, and this month Graphcore warned it needed an injection of cash to stay in business.
"What's very weird is the government is saying that AI can do all this sort of stuff, it's so powerful it can literally end the world," Oxford's McBride says. "But you would expect them to also be sort of investigating how to harness its power. The rest of the world is going to be looking to America and to the United Kingdom to figure out how they can use this stuff. And at the moment, the UK doesn't really have much to show the rest of the world."
The UK's parliament hasn't begun debating any domestic AI regulation on the scale of the European Union's AI Act, although the government has released a white paper that recommends a less restrictive set of rules in order to promote growth in the industry. But it's a long way from being policy or law, and the EU has set the pace.
"It is pretty embarrassing that the UK is not regulating itself," says Mark Brakel, director of policy at the Future of Life Institute, a US think tank that focuses on existential risks. In the US, there are concrete proposals on regulation in the Senate. The EU's AI Act is close to becoming law. Brazil is developing its own regulations, as is China, Brakel says. "But we have nothing in the UK. If you're the hosts, I think it would make sense if you were able to put something on the table yourself."
Brakel, whose institute was behind a headline-grabbing open letter in March that called for a pause on AI developments, is very supportive of the idea of the summit. The institute, which is backed by leading figures in tech, including Skype cocreator Jaan Tallinn, has been very active in lobbying governments to take existential risks seriously. But even Brakel's hopes for the outcome of the UK event are quite limited. "This is, I think, AI risk 101," Brakel says. "I would be really happy if everyone leaving that summit is in agreement about what the most important risks are and what they need to focus on."
That may not be enough for Sunak, whose government has expended considerable political capital assembling the summit. US vice president Kamala Harris is set to attend. But the UK has also invited a Chinese delegation, which has reportedly angered US officials, who see Beijing as a strategic threat. Reports in the UK press suggest that the Chinese officials may now only be allowed to attend half of the summit. European officials will be attending, although France will host its own AI summit, organized by telecoms billionaire Xavier Niel, two weeks after the UK. On October 18, China's Cyberspace Administration announced its own global AI governance initiative.
The gatherings hosted by individual countries also have competition from international forums, including the UN and G7, which are looking into multilateral approaches for regulating AI. It's not clear how the UK's approach will differ, or whether any state-to-state agreement capable of meaningfully changing the course of AI development is possible at such an early stage.
"I completely agree with [Sunak's] strategy, which is to attempt international consensus. But my guess is international consensus will form only around the broadest of principles," says Jeremy Wright, a former UK digital minister for Sunak's Conservative Party. "Feasibly, if you're going to do anything, you probably have to do it nationally before you do it internationally."
Two sources with knowledge of discussions confirmed Politico's reporting from earlier this month that Sunak will pitch an AI Safety Institute to attendees. And, they said, the British government will propose a register of frontier models that would let governments see inside the black box of frontier AI and get ahead of any potential dangers. The initiative will involve asking model developers to provide early access to their models so they can be "red teamed" and their potential risks assessed.
Most of the big US companies have already signed up to an American government pledge on safety. It's not clear why they'd feel the need to sign up to a new one and commit to handing over valuable proprietary information to a UK body.
Critics of the UK's doom summit, including members of the ruling Conservative Party, fear it is doomed to, at best, mediocrity. The real reason, they say, that the summit has been rushed through is domestic politics. It's something that Sunak can show, or at least pretend, to be leading the world at a time he is trailing in polls and seen as almost certain to lose power in the next election. The evidence of that, several insiders point out, is the choice of venue: a 19th-century country mansion associated with a time when the UK truly was a top global power in computing.
Bletchley Park was where Britain's World War II cryptographers cracked the Nazis' Enigma code. The site is indelibly linked with one of the most significant figures in British computing, Alan Turing, which is, no doubt, why the government chose it. Practically, it makes less sense. Bletchley Park is 50 miles from London and "a pain in the arse to get to," according to one government adviser, speaking on condition of anonymity because they still occasionally work for the Department of Science and Technology. But that distance doesn't make it conveniently remote and secure either. During the war, the campus was situated away from prying eyes, but it is now on the outskirts of Milton Keynes, a small city built after the war that has long been a punchline in the UK, synonymous with concrete blandness and famed for its profusion of roundabouts.
It's a venue that, like the summit itself, suggests to some that symbolism triumphed over substance. One tech executive, speaking on condition of anonymity because he was still hoping to deal with the government, calls it "government by photo op." He's taking solace in the fact that Sunak's Conservative Party is likely to lose the next election, which has to be held before January 2025. "They'll be gone in 18 months," he says.