Government AI taskforce appoints new advisory board members

The UK government is filling out the ranks of its artificial intelligence (AI) taskforce advisory board with figures from industry, academia and national security amid a rebrand.

First announced in April 2023 as the AI Foundation Model Taskforce, with £100m in government funding, the body was created to take forward cutting-edge AI safety research and advise government on the risks and opportunities associated with the technology.

Now known as the Frontier AI Taskforce, an initial progress report published on 7 September 2023 by the Department for Science, Innovation and Technology (DSIT) described the taskforce as a “startup inside government” and noted a core goal was to give public sector researchers the same resources to work on AI safety that they would find at companies like Anthropic, DeepMind or OpenAI.

“As AI systems become more capable, they may significantly augment risks. An AI system that advances towards human ability at writing software could increase cyber security threats. An AI system that becomes more capable at modelling biology could escalate biosecurity threats,” it said. “To manage this risk, technical evaluations are critical – and these need to be developed by a neutral third party, otherwise we risk AI companies marking their own homework.”

It added in a press release that the taskforce would also have a particular focus on assessing systems that pose significant risks to public safety and global security: “Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.”


The progress report also details seven new appointments to the taskforce’s advisory board, including Turing Award laureate Yoshua Bengio; co-founder of the Alignment Research Centre (ARC) Paul Christiano, who previously ran OpenAI’s language model alignment team; director of GCHQ, the UK’s signals intelligence agency, Anne Keast-Butler; and chair of the Academy of Medical Royal Colleges, Helen Stokes-Lampard.

Other appointments include Alex van Someren, the UK’s chief scientific adviser for national security, who was previously a venture capital investor and entrepreneur focused on deep-tech startups; Matt Collins, the UK’s deputy national security adviser for intelligence, defence and security, to which the government added “IYKYK”, meaning “if you know you know”; and Matt Clifford, who is prime minister Rishi Sunak’s joint representative for the upcoming AI Safety Summit.

It also announced that Oxford academic Yarin Gal will be the first taskforce research director, while Cambridge academic David Krueger will be working with the taskforce in a consultative role as it scopes its research programme in the run-up to November’s summit.

Ollie Ilott, who previously led Sunak’s domestic private office and the Cabinet Office’s Covid strategy team in the first year of the pandemic, has been brought in as director of the taskforce.

“Thanks to a huge push by the taskforce team, we now have a growing team of AI researchers with over 50 years of collective experience at the frontier of AI,” said the report. “If this is our metric for state capacity in frontier AI, we have managed to increase it by an order of magnitude in just 11 weeks. Our team now includes researchers with experience from DeepMind, Microsoft, Redwood Research, the Center for AI Safety and the Center for Human Compatible AI.”


Frontier AI Taskforce chair Ian Hogarth, an angel investor and tech entrepreneur who was appointed to the position in June, said: “I am pleased to confirm the first members of the taskforce’s external advisory board, bringing together experts from academia, industry and government with diverse expertise in AI research and national security.

“We’re working to ensure the safe and reliable development of foundation models, but our efforts will also strengthen our leading AI sector and demonstrate the huge benefits AI can bring to the whole country to deliver better outcomes for everyone across society.”

Technology secretary Michelle Donelan added that the appointments were a “huge vote of confidence in our status as a flagbearer for AI safety”.

The advisory board appointments come in the same week that the Trades Union Congress (TUC) launched its own AI taskforce, which is specifically focused on pushing for new laws to safeguard workers’ rights and ensure the technology has broad social benefits.

The taskforce – which the TUC said would corral specialists in law, technology, politics, HR and the voluntary sector – is due to publish an AI and Employment Bill early in 2024, and will lobby to have the bill incorporated into UK law.

In May 2023, backbench Labour MP Mick Whitley used a 10-minute rule motion to introduce a worker-focused AI bill to Parliament. The bill is built around three core principles: that everyone should be free from discrimination at work; that workers should have a say in decisions affecting them; and that people have a right to know how their workplace is using the data it collects about them.


Although 10-minute rule motions rarely become law, they are often used as a mechanism to generate debate on an issue and test opinion in Parliament. As Whitley’s bill received no objections, it has been listed for a second reading on 24 November 2023.
