It’s been a hot summer for global AI regulation. In a surprise move, California lawmakers passed a bill governing the largest frontier AI models. And, on August 1, after close to six years of deliberation and negotiation, the EU AI Act came into force.

The UK’s position on AI regulation, however, is less clear. Regulatory forbearance is one possible strategy for the UK. But the more big tech players and enterprises lean into global regulations, the less control and wiggle room the UK will have to regulate in its own way.

At first, the US was also moving cautiously on AI regulation. In October 2022 the White House published a Blueprint for an AI Bill of Rights, outlining sensible, voluntary principles for trustworthy AI. Biden went a step further in October 2023 by signing an executive order on trustworthy AI, still policy oriented but cleverly targeted at US agencies, which tend to listen far more closely to recommendations from their commander in chief than, say, the average business.

Given the US elections, perhaps it was right not to expect too much progress; Trump, for instance, promised to axe Biden’s executive order if elected. But then regulators stepped in on big tech’s doorstep in the state of California. What the state’s lawmakers propose goes further than Biden’s executive order: not policy and soft law, but hard regulation.

Its scope is measured and limited: frontier AI models that cost $100M or more to train (or exceed a large threshold of computing power) will be subject to additional testing, documentation, governance, audit and take-down requirements. This includes models from Silicon Valley players such as OpenAI and Google. If adopted, the California AI law will likely have an immediate effect on the UK market, as these models and services are used globally. Not that the UK had any say in what the rules are.

EU AI Act

The EU AI Act has a much broader scope. It covers general-purpose foundation models of a certain size, but also AI systems built with any kind of AI, from simple, traditional techniques to the latest and greatest technologies.

The act focuses on the purpose of the AI system and its associated risk. A small set of uses is prohibited outright. A larger range of uses is classified as high risk: allowed, but subject to additional requirements. Large general-purpose models are rated limited risk, and the vast majority of uses are classified as minimal or no risk, covered only by a voluntary code of conduct.

The impact for UK enterprises? If any of them want to do business with European consumers, they will have to comply. And if UK regulation is implemented at some point, expect banks, telcos, insurers and others to lobby to keep any differences with the EU AI Act to a minimum. The UK would be wise to keep a close eye on which EU AI policies work well and which do not as the act is rolled out, to inform its own regulatory efforts.

The UK needs a proactive approach

What has the UK done in the meantime on AI regulation? Bar an abstract statement in the King’s speech on “seeking to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”, and pulling £1.3 billion in AI and supercomputing investments, not much. But global regulatory pressure can’t be avoided this way.

At its core, AI ethics and regulation shouldn’t be a global rat race or competition. It is in everyone’s interest that good, trustworthy uses of AI are encouraged and bad uses of AI are prevented. And it is understandable that national preferences for opportunity versus risk and regulation also need to be taken into account.

A more proactive approach from UK lawmakers would be a positive step: putting a firmer stake in the ground that UK law is forthcoming, aligning it with US and EU law where desired, and taking a different approach where needed. That would also allow organisations to be better prepared for what’s coming their way in AI regulation, and not just from a European or US perspective.


