UK government responds to AI whitepaper consultation

The UK government has said it will consider creating “targeted binding requirements” for select companies developing highly capable artificial intelligence (AI) systems, as part of its long-awaited response to the AI whitepaper consultation.

The government also confirmed that it will invest more than £100m in measures to support its proposed regulatory framework for AI, including various AI safety-related projects and a series of new research hubs across the UK.  

Published in March 2023, the whitepaper outlined the government's "pro-innovation" proposals for regulating AI, which revolve around empowering existing regulators to create tailored, context-specific rules that suit the ways the technology is being used in the sectors they scrutinise.

It also outlined five principles that regulators must consider to facilitate “the safe and innovative use of AI” in their industries, and generally built on the approach set out by government in its September 2021 national AI strategy which sought to drive corporate adoption of the technology, boost skills and attract more international investment.

In response to the public consultation – which ran from 29 March to 21 June 2023 and received 406 submissions from a range of interested parties – the government generally reaffirmed its commitment to the whitepaper’s proposals, claiming this approach to regulation will ensure the UK remains more agile than “competitor nations” while also putting it on course to be a leader in safe, responsible AI innovation.

"The technology is rapidly developing, and the risks, and most appropriate mitigations, are still not fully understood," said the Department of Science, Innovation and Technology (DSIT) in a press release.

“The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective. Instead, the government’s context-based approach means existing regulators are empowered to address AI risks in a targeted way.”

Potential for binding requirements

As part of its response, the government outlined its “initial thinking” for binding requirements in the future, which it said “could be introduced for developers building the most advanced AI systems” to ensure they remain accountable.

“Clearly, if the exponential growth of AI capabilities continues, and if – as we think could be the case – voluntary measures are deemed incommensurate to the risk, countries will want some binding measures to keep the public safe,” said the formal consultation response, adding that “highly capable” general-purpose AI systems challenge the government’s context-based approach due to how such systems can cut across regulatory remits.

“While some regulators demonstrate advanced approaches to addressing AI within their remits, many of our current legal frameworks and regulator remits may not effectively mitigate the risks posed by highly capable general-purpose AI systems.”


It added that while existing rules and laws are frequently applied to the deployment or application level of AI, the organisations deploying or using these systems may not be well placed to identify, assess, or mitigate the risks they can present: “If this is the case, new responsibilities on the developers of highly capable general-purpose models may more effectively address risks.”

However, the government was also clear that it will not rush to legislate for binding measures, and that any future regulation would ultimately be targeted at the small number of developers of the most powerful general-purpose systems.

“The government would consider introducing binding measures if we determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way,” it said.

“As with any decision to legislate, the government would only consider introducing legislation if we were not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers.”

It also committed to conducting regular reviews of potential regulatory gaps on an ongoing basis: “We remain committed to the iterative approach set out in the whitepaper, anticipating that our framework will need to evolve as new risks or regulatory gaps emerge.”

A gap analysis conducted by the Ada Lovelace Institute in July 2023 found that, because "large swathes" of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.

This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.

According to digital secretary Michelle Donelan, the UK’s approach to AI regulation has already made the country a world leader in both AI safety and AI development.

“AI is moving fast, but we have shown that humans can move just as fast,” she said. “By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”

New funding

In terms of the new funding announced to realise the ambitions of its proposed approach, the government has committed nearly £90m toward launching nine new research hubs, which are designed to help harness the potential of the technology in key fields such as healthcare, chemistry, and mathematics.


A further £19m will be invested in 21 "responsible AI" projects to help accelerate their deployment, while £2m of Arts and Humanities Research Council (AHRC) funding will be given to projects looking to define responsible AI.

The government also committed £10m to preparing and upskilling UK regulators, which will help them develop “cutting-edge research and practical tools” to monitor and address the use of AI in the sectors they regulate.

“Many regulators have already taken action. For example, the Information Commissioner’s Office has updated guidance on how our strong data protection laws apply to AI systems that process personal data to include fairness and has continued to hold organisations to account, such as through the issuing of enforcement notices,” said DSIT.

“However, the UK government wants to build on this by further equipping them for the age of AI as use of the technology ramps up. The UK’s agile regulatory system will simultaneously allow regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK.”

DSIT added that, in a drive to boost transparency and provide confidence for both British businesses and citizens, key regulators such as Ofcom and the Competition and Markets Authority (CMA) have been asked to publish their respective approaches to managing the technology by 30 April 2024.

“It will see them set out AI-related risks in their areas, detail their current skillset and expertise to address them, and a plan for how they will regulate AI over the coming year,” it said.

The copyright issue

On 4 February 2024, a day before the whitepaper consultation response was published, the Financial Times reported that the UK government was temporarily shelving its long-awaited code of conduct on the use of copyrighted material in AI training models, due to disagreements between industry executives over what a voluntary code of practice should look like.

It reported that while AI companies want easy access to vast troves of content for their models, creative industry companies are concerned they will not be fairly compensated for the models' use of their copyrighted material.

In the consultation response, the government said: “It is now clear that the working group [of industry executives] will not be able to agree an effective voluntary code.” It added that ministers will now lead on further engagement with AI firms and rights holders.


“Our approach will need to be underpinned by trust and transparency between parties, with greater transparency from AI developers in relation to data inputs and the attribution of outputs having an important role to play,” it said.

“Our work will therefore also include exploring mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models.”

According to Greg Clark – chair of the House of Commons Science, Innovation and Technology Committee (SITC), which is conducting an ongoing inquiry into the UK’s governance proposals for AI – existing copyright laws in the UK may not be suitable for managing how copyrighted material is used in AI training models.

He said this is because there are “particular” challenges presented by AI that may require the existing powers to be updated, such as whether it’s possible to trace the use of copyrighted material in AI models or what degree of dilution from the original copyrighted material is acceptable.

"It's one thing if you take a piece of music or a piece of writing…and pass it off as your own or someone else's; the case law is well established," he said. "But there isn't much case law at the moment, as I understand it, against the use of music in a new composition that draws on hundreds of thousands of contributors. That is quite a new challenge."

In a report published on 2 February 2024, the committee urged the government not to "sit on its hands" while generative AI developers exploit the work of rightsholders, and rebuked tech firms for using data without permission or compensation.

Responding to a copyright lawsuit filed by music publishers, generative AI firm Anthropic claimed in January 2024 that the content ingested into its models falls under ‘fair use’, and that “today’s general-purpose AI tools simply could not exist” if AI companies had to pay copyright licences for the material.

It further claimed that the scale of the datasets required to train LLMs is simply too large for an effective licensing regime to operate: "One could not enter licensing transactions with enough rights owners to cover the billions of texts necessary to yield the trillions of tokens that general-purpose LLMs require for proper training. If licences were required to train LLMs on copyrighted content, today's general-purpose AI tools simply could not exist."
