Jack Silva | NurPhoto | Getty Images
LONDON – The United Kingdom says it wants to do “its own thing” when it comes to regulating artificial intelligence, signaling a potential divergence from the approaches taken by its main Western counterparts.
“It's really important that we in the UK do our own thing when it comes to regulation,” Feryal Clark, Britain's minister for artificial intelligence and digital government, told CNBC in an interview broadcast on Tuesday.
She added that the government already has a “good relationship” with AI companies such as OpenAI and Google DeepMind, which have voluntarily opened up their models to the government for safety testing purposes.
“It's really important that we ensure that safety is in place at the beginning when the models are being developed… That's why we will work with the industry on any safety measures that are put forward,” Clark added.
Her comments echoed Prime Minister Keir Starmer's comments on Monday that Britain has “the freedom now in terms of regulation to do it in the way we think is best for the UK” after Brexit.
“You have different models around the world, you have the EU approach and the US approach – but we have the ability to choose the model that we think is in our interests and we intend to do that,” Starmer said in response to a journalist's question after announcing a 50-point plan to make the UK a world leader in artificial intelligence.
Difference from the United States and the European Union
So far, Britain has refrained from introducing formal laws to regulate AI, instead relying on individual regulators to apply existing rules to companies developing and using the technology.
This differs from the European Union, which has introduced comprehensive pan-European legislation aimed at harmonizing technology rules across the bloc while taking a risk-based approach to regulation.
Meanwhile, the United States has no AI regulation at the federal level and has instead seen a patchwork of regulatory frameworks emerge at the state and local levels.
During Starmer's election campaign last year, Labour committed in its manifesto to introducing regulation focused on so-called “frontier” AI models – a reference to large language models such as OpenAI's GPT.
However, so far, the UK has yet to confirm details of the proposed AI safety legislation, instead saying it will consult industry before proposing formal rules.
“We will work with the sector to develop this and deliver it in line with what we said in our statement,” Clark told CNBC.
Chris Mooney, partner and head of commercial affairs at London-based law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to regulating AI even as the EU moves forward with its own AI law.
“While the UK government says it has taken a ‘pro-innovation’ approach to regulating AI, our experience working with clients is that they find the current situation uncertain, and therefore unsatisfactory,” Mooney told CNBC via email.
One area where the Starmer government has talked about reforming AI rules has been around copyright.
Late last year, the UK opened a consultation to review the country's copyright framework to assess potential exceptions to existing rules for AI developers who use the works of artists and media publishers to train their models.
Businesses left uncertain
Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that although the government's AI action plan “shows ambition,” going ahead without clear rules is “borderline reckless.”
“We have already missed important regulatory windows twice – first with cloud computing and then with social media,” Duggal said. “We cannot afford to make the same mistake with AI, where the risks are exponentially higher.”
He added: “UK data is our crown jewel; it should be leveraged to build sovereign AI capabilities and create British success stories, not just feed external algorithms that we cannot effectively regulate or control.”
Details of Labour's plans to legislate artificial intelligence were initially expected to appear in King Charles III's speech that opened the UK Parliament last year.
However, the government has only committed to putting in place “appropriate legislation” on the most powerful AI models.
“The UK government needs to provide clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding that he had learned from sources that a consultation on formal AI safety laws was “waiting to be released.”
“By issuing consultations and plans on a piecemeal basis, the UK has missed the opportunity to provide a comprehensive view on where its AI economy is headed,” he said, adding that withholding details of new AI safety laws would create uncertainty for investors.
However, some figures in the UK technology scene believe that a more flexible, adaptive approach to regulating AI may be the right one.
“From recent discussions with the government, it is clear that there are significant efforts underway on AI safeguards,” Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC.
He added that the UK is well placed to adopt a “third way” on AI safety and regulation – “sector-specific” regulations governing different industries such as financial services and healthcare.