Debate over the existential risks of artificial intelligence marked a conference at Bletchley Park in the UK. Home to the British code-breakers who cracked Nazi ciphers during World War II, the site hosted critics and supporters of the new technology.
It may well have been at Bletchley Park that the famous British mathematician and cryptanalyst Alan Turing began to ponder the question of whether machines can think. And maybe it was here that he found answers that eventually led to the invention of the famed Turing machine, a precursor to the modern computer.
Turing would have fit right in at the summit, where some of the globe’s smartest tech brains gathered until Thursday to discuss the future of artificial intelligence (AI) — its risks and opportunities that are only just emerging in the wake of OpenAI’s launch of ChatGPT in November last year.
The summit near London was studded with high-profile politicians, academics and tech-company leaders like Tesla's Elon Musk, Meta Platforms' chief AI scientist Yann LeCun and DeepMind co-founder Demis Hassabis.
Declaration on ‘frontier AI models’
On Wednesday, the UK government unveiled its so-called Bletchley Declaration, a communique endorsed by 29 signatories, including China and the European Union. The declaration praises the technology's potential to enhance human wellbeing but also urges that its risks be mitigated.
AI is beginning to shape our daily lives like no other technology before. In the UK, for example, the government uses AI tools to decide on social-benefit claims. Teachers there will soon have their own AI-powered teaching assistant to help them plan lessons.
However, as the technology advances at lightning speed, a fierce debate is underway about how to deal with the dangers emerging from AI, such as fueling discrimination and misinformation, or even endangering democracy and human civilization.
In closed-door sessions at the summit, participants discussed whether to pause the development of next-generation "frontier" AI models and the "existential threat this technology may pose to democracy, human rights, civil rights, fairness, and equality," as the British government summed up the debates in a statement.
The UK Prime Minister's envoy, Matt Clifford, even found what he heard "beyond the apprehension of what I had imagined." He told reporters that the Bletchley Declaration was the first "statement of risk, and one of the first definitions of frontier AI."
Doomers vs tech believers, and how to regulate AI
In the run-up to the summit, there was disagreement among the participants over whether to prioritize immediate risks from AI or concerns that it could lead to the end of the world. Some experts had also been warning the summit could focus too much on self-regulation.
Canadian computer scientist Yoshua Bengio was among the experts giving evidence and advice to a broader audience at the summit. Widely regarded as one of the “godfathers” of AI technology, he told DW that the meeting marked a “historic moment,” because governments were starting to “take the risks of AI seriously.”
Ever since the launch of ChatGPT, Bengio has been warning about the dangers of AI, worrying that humans could lose control over the systems if they do not behave as instructed. "Maybe already the next generation [of tools] that is coming in 2024 could be very dangerous. Governments need to start preparing for this," he said.
Bengio is calling for stringent regulation, with the onus on tech firms to demonstrate that their models are safe. Only after they have proven that, he added, should they be allowed to build them.
Dame Wendy Hall, a professor of computer science and another UK government advisor on AI, also thinks the technology should not be left to the companies to regulate.
"That would be like asking the tobacco industry to stop smoking," she told the BBC in the run-up to the summit.
Germany's Minister for Digital Affairs and Transportation, Volker Wissing, is less convinced of the need for tough rules. He mainly sees opportunities where others see risks. "We want the best technology, not the best regulators," he told DW, adding that it was "important to preserve our competitiveness … we need trustworthy AI from Europe."
Wissing sees no need to create a special German AI regulator, as regulation "should be coordinated on a European and international level."
The United States and Britain, meanwhile, have announced they are in the process of creating national AI safety institutes. In an interview with the news agency Reuters, Tesla CEO Elon Musk said there was "concern that governments could jump the gun on rules before knowing what to do."
Musk described AI as “one of the biggest dangers to humanity,” and told Reuters somewhat philosophically that “here we are in human history with something that is far more intelligent than us.” He also said he wasn’t so sure “people can actually control such a thing … but merely guide it in a direction that is beneficial to humanity.”
The beginning of a process
At the end of the conference on Thursday, the attendees agreed that the summit could only be the beginning of a "long process," and that they would meet again in South Korea in about six months and in France in a year's time.
How humans and machines can live happily side by side is possibly the defining challenge of our time. Asked what keeps him up at night, Yoshua Bengio pointed to the fragility of democracy.
“If we are not able to preserve democracy, we will not be able to deal with all the other threats that may come in the future.”