The implications of biased AI models for the financial services industry

The meteoric rise in the use of Artificial Intelligence (AI) across all sectors is impossible to ignore. Whatever your feelings about AI, these models are here to stay and, used in the right way, they create opportunities for businesses to improve efficiency and reduce costs while streamlining (and hopefully improving) the customer experience.

However, the use of AI in financial services brings with it additional challenges that require careful consideration. One of those challenges is the risk that the AI models used embed unfair bias, resulting in discrimination against certain population groups. This is bad for customers and exposes businesses using such models to legal risk.

How can AI models be biased?

Understanding the stages of development of an AI model (or, more specifically, of the algorithm that drives it) helps to identify where bias can arise. At a high level, the algorithm is a set of instructions that allows the computer to create an output from the data it has access to.

Bias can be introduced in several ways. The algorithm itself may create bias through the way it is implemented, for example by giving certain data more importance and thereby skewing results. Bias can also be introduced by the humans who choose which datasets are included in or excluded from the AI model. The greatest opportunity for bias to emerge, however, is in the content of the datasets themselves.
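To make the first of these concrete, here is a minimal sketch of how an implementation choice alone can skew results. It is not drawn from any real system; the features, weights and data are hypothetical. It shows a toy insurance risk score in which a hand-picked weight on postcode, a common proxy for ethnicity and income, outweighs an applicant's actual claims history:

```python
# A toy "risk score" in which hand-picked weights, not the data itself,
# drive the outcome. All names, features and weights are hypothetical.

applicants = [
    {"name": "A", "claims_history": 0, "postcode_risk": 0.9},
    {"name": "B", "claims_history": 2, "postcode_risk": 0.1},
]

# Because postcode_risk is weighted so heavily, applicant A (with no claims
# at all) is scored as riskier than applicant B, who has two prior claims.
WEIGHTS = {"claims_history": 0.2, "postcode_risk": 0.8}

def risk_score(applicant: dict) -> float:
    """Weighted sum of features; the chosen weights create the skew."""
    return sum(WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS)

for a in applicants:
    print(a["name"], round(risk_score(a), 2))  # A -> 0.72, B -> 0.48
```

In practice the skew is rarely this explicit, but the principle is the same: choices about what the model weighs most heavily can bias outputs before any question of data quality arises.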

AI models are only as good as the data they are trained on. Datasets can be obtained in several ways. A business may use its own historic data to create a dataset, for instance all historic insurance claims information, to improve risk assessments and premium estimates. Alternatively, datasets may be purchased from third parties who have collated data by interrogating and scraping the internet or other sources. In both scenarios there is a risk that the data is biased, as it will often reflect society's inequalities.


For instance, the data might give preference to white men over women or minority ethnic groups. If a model is trained on historical data that is biased against minority ethnic groups, it could lead to discriminatory pricing. A well-documented example of this bias outside the insurance sector was Amazon's recruitment AI model, which was found to discriminate against women because it was trained on historic Amazon employment information.

How can we stop bias in AI models?

There are several ways businesses can reduce the risk of bias creeping into AI models. None are foolproof, but putting safeguards in place holistically can reduce the risks.

Datasets need to be transparent so that they can be analysed for bias. For example, it must be clear how and when the data was sourced and labelled. Critically, it is important to consider the make-up of the individuals who form part of any dataset. Such information enables a determination of how appropriate the dataset is for the purpose of the AI model and whether there might be any discriminatory skew.
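By way of illustration, the sketch below shows the kind of simple make-up audit that can be run on a dataset before it is used: comparing group representation against a reference population and looking for a first, crude signal of skew in outcomes. The column names, toy data and population benchmark are all assumptions, not figures from any real dataset:

```python
# Illustrative audit of a dataset's make-up. The column names, the toy data
# and the population benchmark below are assumptions for this sketch.
import pandas as pd

# Toy stand-in for an insurer's historic claims dataset.
df = pd.DataFrame({
    "ethnic_group":   ["A", "A", "A", "A", "A", "A", "A", "B", "B", "C"],
    "premium_quoted": [300, 320, 310, 295, 305, 315, 300, 420, 410, 430],
})

# Compare each group's share of the dataset with a (hypothetical)
# population benchmark to spot under- or over-representation.
dataset_share = df["ethnic_group"].value_counts(normalize=True)
population_share = pd.Series({"A": 0.60, "B": 0.25, "C": 0.15})

audit = pd.DataFrame({"dataset": dataset_share, "population": population_share})
audit["difference"] = audit["dataset"] - audit["population"]
print(audit)

# Average quoted premium per group: a first, crude signal of outcome skew.
print(df.groupby("ethnic_group")["premium_quoted"].mean())
```

A gap between the dataset and the benchmark, or a marked difference in average outcomes between groups, is not proof of unlawful bias, but it is exactly the kind of signal that should prompt further investigation before the dataset is used to train a model.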

This is not always a straightforward exercise, as it will depend on the size of the dataset and whether it is home-grown or purchased from a third party. In the latter case, it will be important to understand the terms and conditions of use of any purchased dataset, to clarify where risk and liability lie in the event that the dataset gives rise to legal issues in the future.


It is also important to continuously monitor and assess the AI model. Humans have a definite place in the development of AI models, and businesses should stress-test their models to ensure their goals are being delivered without bias. This means having an internal audit team that understands the AI model, the datasets used and the ongoing data that will be added to the model in real time, so that the outputs remain fair and non-discriminatory.

There must also be escalation policies in place within businesses so that any required changes to AI models can be made effectively and quickly. This is particularly important for the inevitable interactions with regulators, who are increasingly keen to ensure that AI models are transparent and working for the benefit of customers.
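As a sketch of what such ongoing monitoring might look like in practice, the following compares approval rates across groups (a basic demographic-parity check) and flags gaps above a set tolerance for escalation. The group labels, decision log and tolerance threshold are hypothetical:

```python
# Sketch of an ongoing fairness check on model outputs: approval rates are
# compared across groups and large gaps are flagged for escalation. The
# groups, decisions and tolerance threshold are hypothetical assumptions.
from collections import defaultdict

# Hypothetical log of recent model decisions: (group, approved?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

TOLERANCE = 0.2  # assumed maximum acceptable gap in approval rates

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
if gap > TOLERANCE:
    print(f"Escalate: approval-rate gap {gap:.2f} exceeds tolerance {TOLERANCE}")
```

Run continuously against live outputs, a check of this kind gives the internal audit team a concrete trigger for the escalation policies described above, rather than leaving fairness to periodic ad hoc review.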

 Risks of biased AI models in the insurance sector

First, there is the potential damage to individuals or businesses that have entered into insurance contracts as a result of discriminatory practices in AI models. In the UK, this might lead individuals to seek redress via the Financial Ombudsman Service or through the courts.

We are already seeing a proliferation of copyright cases brought by individuals against AI giants such as Stability AI and OpenAI. In the US, these are class action lawsuits, and if an AI model in the insurance sector were to discriminate against a large swathe of individuals, similar lawsuits might follow.

The other risk to consider is the growing risk of regulatory breach.


As AI develops, so too does the regulatory landscape. There has been much interest in the EU's AI Act, which is likely to be finalised shortly (although it will not be legally in force for at least two years). Its goal is neatly summarised on the European Parliament's website: "AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes".

The UK is also looking at regulating AI outside of the EU's AI Act. It is putting forward a pro-AI framework, but non-discrimination remains a central plank of the 2023 White Paper. This is demonstrated by its five principles to guide and inform the responsible development and use of AI in all sectors of the economy: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

Add into the mix the GDPR, in relation to personal information used in AI models, as well as cybersecurity rules, and non-compliance in AI models could be very costly indeed from a regulatory perspective.

The rise of AI models in the insurance sector is set to continue, but AI models that produce biased outputs pose a genuine risk from both a legal and a reputational perspective. Policies and procedures should be put in place to ensure AI models are fit for purpose, thereby mitigating the risk of costly legal challenges as much as possible.

 

Jamie Rowlands is a partner at Haseltine Lake Kempner. 
