Lords begin inquiry into large language models

The House of Lords Communications and Digital Committee has launched an inquiry into the risks and opportunities presented by large language models (LLMs), and how the UK government should respond to the technology’s proliferation.

LLMs are a type of machine learning model trained on massive amounts of data to generate mostly text-based outputs, and they underpin generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT; related generative models, such as Stability AI’s Stable Diffusion, apply similar techniques to image generation.

Officially launched with a call for written evidence at the start of July 2023, the inquiry is now in the process of holding oral evidence sessions with various expert witnesses, with a particular focus on how LLMs will develop over the next three years and how the government should approach the tech going forward.

“The latest large language models present enormous and unprecedented opportunities. Early indications suggest seismic and exciting changes are ahead,” said committee chair Baroness Stowell of Beeston.

“But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.

“This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally, we must not jump to conclusions amid the hype.”

First evidence session

During the first evidence session on 12 September, Ian Hogarth, an angel investor and tech entrepreneur who is now chair of the government’s Frontier AI Taskforce, noted that the ongoing development and proliferation of LLMs would largely be driven by access to resources, in terms of both finance and computing power.

“Compute is getting cheaper, so more people are able to build these models. If we use history as a guide, when OpenAI trained GPT-3, there was one model at the scale of GPT-3 in the world, and now there must be 100-plus. When OpenAI trained GPT-4, there was one GPT-4 scale model, and there will probably be 10 by the middle of next year,” he said.

“I believe that it may not continue quite as quickly, but that we will stay on an exponential where you see a 10 orders of magnitude increase in compute in a given period. The reason for that is that there is a huge amount of money to be made.

“These tools will be used for lots of commercial purposes, so the amount of money being invested in making these systems more powerful will increase, as it has already. A decade ago, $20m was invested in companies trying to build super-intelligent AI, and now it is $20bn. There is a race occurring between companies and countries to build these very powerful systems.”
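
Taken at face value, the jump from $20m to $20bn over roughly a decade is a thousand-fold increase, which implies investment roughly doubling each year. The short Python sketch below works through that arithmetic; it uses only the figures from Hogarth’s quote and assumes a 10-year window, so it is illustrative rather than a precise market estimate.

```python
# Illustrative arithmetic only: the $20m and $20bn figures come from the
# quote above; the 10-year window is an assumption for this sketch.
years = 10
start_investment = 20e6   # roughly $20m a decade ago
end_investment = 20e9     # roughly $20bn now

total_multiple = end_investment / start_investment   # 1,000x overall
annual_growth = total_multiple ** (1 / years) - 1    # implied compound annual rate

print(f"Total increase: {total_multiple:,.0f}x")      # 1,000x
print(f"Implied annual growth: {annual_growth:.1%}")  # about 99.5%, i.e. near-doubling each year
```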

Turning to his work at the Taskforce, Hogarth added that, in the 11 weeks since its launch, its focus has been on using its £100m budget to bring top-level technical talent into government, so that it can more easily reckon with the challenges presented by various forms of AI and compete with the expertise available in the private sector.

“We have 10 people so far with real frontier expertise, with PhD-level through to professor-level experience in the field,” he said. “These are some of the hardest to hire people in the world right now. It is a real challenge. You are offering to bring people into the public sector when they are being offered 10 times that amount to stay in the private sector.”

However, Neil Lawrence, a professor of machine learning at the University of Cambridge and former advisory board member at the government’s Centre for Data Ethics and Innovation, noted that the £100m earmarked for the Taskforce pales in comparison with other sources of government funding.

“Mr Hogarth is here representing a £100m investment over five years. The UK Research and Innovation [UKRI] budget is something on the order of £7bn a year,” he said, adding that very little attention has been given to previous public investments in AI, including the £30m given to the Trustworthy Autonomous Systems Hub.

“I am a little nervous about all this attention on quite a small investment. Let us be very frank: I appreciate that it is public money but, given the scale of investment we are talking about, and the scale of the challenge, which is to revolutionise the way we think about many of our institutions, this is not a lot of money.”

Commenting on developments in the US, Lawrence added that it is increasingly accepted there that the only way to deal with AI is to let big tech take the lead: “My concern is that, if large tech is in control, we effectively have autocracy by the back door. It feels like, even if that were true, if you want to maintain your democracy, you have to look for innovative solutions.”

Likening this to the production of written texts before the invention of the printing press, he further added that part of the problem with AI is that “computers are being controlled by the modern equivalent of scribes”.

“The software engineering profession exists in the modern equivalent of guilds and has an incredible amount of power over governments. A lot of the things we are looking at are about how to deal with those power asymmetries. During my time at the AI Council, when it existed, this was the type of question that we were concerned about.”

Trust and accountability

Lawrence and others also warned that LLMs have the potential to massively reduce trust and accountability if given too great a role in decision-making.

“I sit here before you wearing my dead brother’s jacket – he was a lawyer – because I want him with me today. A large language model can never say that to you. I sit here with a reputation. The evidence I give you is based on the work I have put in and who I am in society,” he said.

“We are getting very distracted by the technicalities of it, when, a lot of the time, we should be looking at how we ensure that these [models] are empowering people in their decision-making, not replacing people for consequential decision-making.”

The threat of “consequential decision-making” being effectively outsourced to LLMs was also a major concern in the written evidence of Dan McQuillan, a lecturer in creative and social computing. McQuillan previously spoke to Computer Weekly about the need for “prefigurative” social change to resist the imposition of AI on society by a relatively small group of people in government and the private sector.

“The greatest risk posed by large language models is seeing them as a way to solve underlying structural problems in the economy and in key functions of the state such as welfare, education and healthcare,” he wrote.

“The misrepresentation of these technologies means it’s tempting for businesses to believe they can recover short-term profitability by substituting workers with large language models, and for institutions to adopt them as a way to save public services from ongoing austerity and rising demand.

“There is little doubt that these efforts will fail. The open question is how much of our existing systems will have been displaced by large language models by the time this becomes clear, and what the longer-term consequences of that will be.”

He added that while there will undoubtedly be “interesting technical developments” in LLMs over the next three years, none of these developments will overcome the foundational problems that prevent them from being trustworthy, unbiased or truly productive.

“In large language models, the most intractable flaw is that their operations are optimised on plausibility not causality. In other words, they generate responses which are statistically similar to those in their training dataset, refined by a set of additional guidelines for believability and non-toxicity but with no mechanism for checking facticity, so we will never be able to fully believe them even when they ‘sound right’.”
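
McQuillan’s point about plausibility rather than causality can be illustrated with a toy sketch of how an autoregressive language model produces text: at each step it samples the next token from a probability distribution learned from its training data, and nothing in that loop checks whether the resulting claim is true. The Python below is a minimal, hypothetical illustration; the hard-coded context and probabilities stand in for a trained model’s next-token distribution and are not drawn from any real system.

```python
import random

# Toy stand-in for a trained model's next-token distribution: the context and
# probabilities below are made up for illustration. Candidates are weighted by
# how plausible the continuation is, not by whether it is factually correct.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.35, "Melbourne": 0.10},
}

def sample_next_token(context: str) -> str:
    """Sample the next token purely from the learned distribution.

    Nothing here checks facticity: a fluent but wrong continuation
    ("Sydney") can easily be chosen over the correct one ("Canberra").
    """
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The capital of Australia is"
    print(prompt, sample_next_token(prompt))
```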

The committee’s inquiry is the latest of many Parliamentary inquiries set up to investigate various aspects of AI. Others include an inquiry into AI governance launched in October 2022, an inquiry into autonomous weapons systems launched in January 2023, and another into generative AI launched in July 2023.

A Lords inquiry into the use of artificial intelligence and algorithmic technologies by UK police concluded in March 2022 that such technologies are being deployed by law enforcement bodies without a thorough examination of their efficacy or outcomes, and that those in charge of the deployments are essentially “making it up as they go along”.
