Risks of opening up AI

In July, Meta, the parent company of Facebook, announced Llama 2, which will be made available in the Microsoft Azure AI model catalogue as well as on AWS.

Meta describes Llama 2 as “the next generation” of its open source large language model. But while it is free for research and commercial use, Amanda Brock, CEO of OpenUK, said that the community licence for Llama 2 imposes an acceptable use policy.

Brock believes this is a significant clause in the licence, as it prevents completely unrestricted access to the algorithm. “The reason it’s not open source is that there’s a restriction on the number of users you can have. This restriction means it can’t be used by anyone for any purpose.”

She believes it is unlikely that artificial intelligence (AI) large language models will be completely open source. “Everybody who owns them knows that there’s going to be regulation. Opening up Llama 2 enables open innovation and community engagement, allowing people to start building things,” she added.
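For developers, “building things” on top of Llama 2 typically means downloading the released weights and running inference against them. The snippet below is a minimal sketch, not Meta’s or Microsoft’s reference code: it assumes the Llama 2 community licence has been accepted on Hugging Face (the meta-llama/Llama-2-7b-chat-hf checkpoint is gated behind that agreement) and that the transformers and accelerate Python libraries are installed on a machine with a suitable GPU.

    # Minimal sketch: text generation with Llama 2 via Hugging Face Transformers.
    # Assumes the Llama 2 community licence has been accepted on the Hub;
    # the model ID and sampling parameters are illustrative choices.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Summarise the key terms of the Llama 2 community licence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generate a short completion from the prompt.
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Even this simple usage remains bound by the acceptable use policy Brock highlights, which is where her questions about liability begin.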

According to Brock, the challenge facing regulation of open source AI models is determining who is liable for the algorithm, the data model and the outputs produced by that model. “Is it the productivity tool that effectively uses AI, or will liability sit with the person asking the question?”

Brock believes that among the areas policymakers and regulators need to consider is the difficulty in predicting how AI will be used. Drawing an analogy with the evolution of the web and the internet, Brock said: “The reality of AI is that it’s like the internet back in 2000, where people did not really understand what it would be used for. No one would have said social media would be one of the key uses.”


Looking at AI today, Brock said it is deployed as a productivity tool. “The use of AI productivity tools is what we should be focusing on and how it will be used, particularly when we come to regulation,” she said.

Brock said that legal liability is driving the conversation around AI regulation. But there is a difference between the algorithm or data model and the output produced by that data model. “I think we’re going to see a split into these two things, in terms of the products we consume, particularly as businesses,” she added.

The UK government’s approach to AI regulation is positioned as “pro-innovation”, but it offers little guidance on the question of liability. The European Union’s AI Act bans the use of artificial intelligence to profile people based on race or gender and prohibits the use of biometric identification in public spaces. The EU is also considering an AI liability directive, which aims to simplify the process by which an injured party can bring civil liability claims against AI developers and the organisations utilising AI in their products and services.

A blog from law firm Debevoise & Plimpton notes that the proposed AI Liability Directive would change the legal landscape for companies developing and implementing AI in EU member states by significantly lowering evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims.

According to a briefing on the EU proposal published in May this year by law firm Womble Bond Dickinson, the starting point for a judgment is an assumption that the action or output of the AI was caused by the AI developer or user against whom the claim is filed. While this presumption will be rebuttable, the briefing notes that organisations will need to document how they are using AI technologies, including the steps they have taken to protect individuals from harm.


Microsoft and Meta say their open approach will provide increased access to foundational AI technologies, benefiting businesses globally. Several high-profile tech executives, along with academic researchers, have put their names to Meta’s so-called “open innovation approach to AI”. These include Nikesh Arora, CEO of Palo Alto Networks; Ian Buck, vice-president of hyperscale and HPC at Nvidia; Lan Guan, global data and AI lead at Accenture; Brad McCredie, corporate vice-president of datacenter GPU and accelerated processing at AMD; and Bill Higgins, director of watsonx platform engineering at IBM.

A group of organisations including GitHub, Hugging Face, EleutherAI and Creative Commons is calling on the EU to consider the implications of open and open source AI under the EU AI Act.

In a document outlining their concerns, the group wrote: “Decades of open source experience should inform the AI Act as should these parallel legislative files. However, it is worth noting that definitions of open source AI are not yet fixed and will have to grapple with the complex interactions between the different components of an AI system.

“As AI model development has moved from expensive training from scratch to further training of open pre-trained models, the openness of the code, documentation, model and meaningful transparency about the training data empower anyone to use, study, modify and distribute the AI system as they would open source software.”

Brock is a supporter of open data access, and sees open AI as a logical extension of it. “The democratisation of AI and opening innovation around it to enable collaboration across our open communities is an essential step in the future of this most impactful of technologies,” she said.


But while the EU’s proposals are an indicator of the direction lawmakers are looking to take on AI liability, an open approach to AI has the potential to develop into a tangled web of legal obligations and liability. For anyone looking to build AI-powered software, there are numerous pitfalls that may expose them to legal liability, such as weaknesses in the implementation of the AI inference engine or the learning algorithm, while flaws in the data models or the data sources may themselves introduce errors or embed biases.

While an open approach to AI is extremely promising, until EU and UK laws on AI liability are clear, the legal risks associated with it may deter some from making the most of the opportunity AI offers.
