Frontier AI Taskforce starts recruitment drive

The chair of the UK’s Frontier AI Taskforce, Ian Hogarth, has released the taskforce’s second progress report.

The report follows on from the first Frontier AI Taskforce progress report, published last week. It notes that the hardest challenge in building the taskforce has been persuading leading AI researchers to join the government.

“Compensation for machine learning researchers has ballooned in recent years. Beyond money, the prestige and learning opportunities from working at leading AI organisations are a huge draw for researchers,” the authors of the report noted.

“We can’t compete on compensation, but we can compete on mission. We are building the first team inside a G7 government that can evaluate the risks of frontier AI models. This is a crucial step towards meaningful accountability and governance of frontier AI companies, informed by the science and motivated by the public interest.”

The taskforce announced that OpenAI’s Jade Leung, who specialises in safety protocols and governance of frontier AI systems, has joined the team. Rumman Chowdhury from Humane Intelligence, who previously led the META (ML Ethics, Transparency, and Accountability) team at Twitter, is also now on board.

As part of the recruitment drive, the Frontier AI Taskforce has posted two new jobs on LinkedIn. The first is for a senior research engineer. “We look for candidates who care deeply about the societal impacts and long-term implications of their work and want to ensure a better future for the world,” the job post states.

According to the LinkedIn job, the taskforce is looking for someone with “excellent knowledge of training, fine-tuning, scaffolding, prompting, deploying, and/or evaluating current cutting-edge machine learning systems such as LLMs and Diffusion Models”.

The second vacancy is for a senior software engineer. Here, the successful candidate will need to have “substantial experience in building software systems to meet user requirements, managing increasing scale, and upholding privacy and security standards”, according to the post on LinkedIn. The vacancy suggests the role will involve building internal tools to improve the taskforce’s workflows where off-the-shelf software is unavailable or not good enough.

The taskforce announced two further partnerships, bringing the total to 11. Apollo has been brought on to help the taskforce better understand the risks associated with a potential loss of human control over AI systems. OpenMined, the global non-profit building open source AI governance infrastructure, is also partnering with the taskforce.

“We are working with OpenMined to develop and deploy technical infrastructure that will facilitate AI safety research across governments and AI research organisations,” the Frontier AI Taskforce said.

Recognising the need for compute infrastructure for AI safety research, such as large-scale interpretability experiments, the taskforce said it has supported DSIT (the Department for Science, Innovation and Technology) and the University of Bristol in launching major investments in compute.

The first component of the UK’s AI Research Resource, Isambard-AI, will be hosted in Bristol. Once built, the supercomputer is expected to be one of the most powerful in Europe and, according to the UK’s Frontier AI Taskforce, will vastly increase the country’s public-sector AI compute capacity.

“There is more to do, but these great strides fundamentally change the kind of projects researchers can take on inside the Taskforce,” the second progress report stated.

The AI Safety Summit is taking place on November 1 and 2 at Bletchley Park.
