AI in financial services: considerations for effective risk management

Braithwate is excited to announce its strategic collaboration with Holistic AI to help financial services firms address challenges posed by the use of artificial intelligence (AI).

In this blog, we reflect on the impact of AI solutions in the industry and what this means for traditional risk management.

The finance industry is built on data. Unsurprisingly, it is one of the biggest adopters of AI technologies, leading to better forecasting, pricing models, and more targeted products. AI also offers the potential to improve efficiency by automating risk and compliance operations such as Anti-Money Laundering (AML), fraud, and market abuse monitoring.

Such opportunities come with challenges. AI is an innovative technology with evolving use cases, but also novel risks, including:

  • Bias: discrimination against certain customer segments based on protected characteristics such as ethnicity, broader personal information, or postcode. This can stem from poor parameterisation of the AI model or from insufficient or unrepresentative training data, or, in some cases, arise because the AI amplifies real patterns in the data: for example, incomes are lower and crime rates higher in certain postcodes, so insurance premiums are marked up on the basis of the “empirical” data. (A minimal check for this kind of disparity is sketched after this list.)

  • Explainability: the inability to interpret and meaningfully explain how and why critical decisions in the customer lifecycle are made (e.g., why a mortgage application is rejected).

  • Privacy: customer data is not processed in compliance with GDPR principles (e.g., transparency or data minimisation). A lack of internal controls can also result in data breaches.

  • Robustness: an AI system may underperform or fail in unexpected circumstances or when under attack, leading to customer harm (e.g., inability to detect fraudulent transactions).
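
To make the bias risk concrete, here is a minimal sketch of a disparity check on lending decisions. It is illustrative only: the file name decisions.csv and the columns approved and postcode_area are hypothetical, and a large gap is a prompt for investigation rather than proof of discrimination.

    import pandas as pd

    # Hypothetical dataset: one row per credit decision, with a binary
    # 'approved' outcome and a 'postcode_area' proxy attribute.
    df = pd.read_csv("decisions.csv")

    # Approval rate per postcode area.
    rates = df.groupby("postcode_area")["approved"].mean()

    # Demographic parity difference: the gap between the best- and
    # worst-treated groups. A large gap warrants human investigation.
    parity_gap = rates.max() - rates.min()

    print(rates.sort_values())
    print(f"Demographic parity difference: {parity_gap:.3f}")

The same pattern extends to other fairness metrics, such as comparing false positive rates across groups for a fraud model.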

To find out more about these AI risks, please read this article.

More fundamentally, AI impacts the speed and scale at which existing risks can harm consumers, firms, and markets. Firms should adapt their risk management approach accordingly.

AI risk should not be treated in isolation. Whilst there should be a dedicated focus on the new risk dimensions brought by AI, this should work in synergy with sound risk management principles that long predate it. AI risk management solutions will be ineffective unless integrated into existing governance and accountability frameworks.

HOW DOES AI IMPACT FIRMS’ LIABILITY AND REPUTATIONAL RISKS WITH SPILL-OVER EFFECTS ON CONSUMERS AND THE MARKETS?

If an investment manager provides poor financial advice, whether through lack of skill or lack of access to information, this will usually affect only a limited number of clients. However, the ease of scaling and automation means these bounds are much wider for a model, which can affect thousands of customers over a short period.

In the worst case, herd behaviour might cause the underlying value of certain assets to rapidly inflate or deflate with no relation to economic fundamentals, as we saw during the GameStop “meme stock” trading saga in early 2021.

When it comes to AI, scale has a quality all of its own.

Likewise, an overzealous surveillance officer flagging suspicious transaction activity might cause a handful of customer accounts to be frozen or closed erroneously. Without a human in the loop, a faulty or biased AI-based fraud detection model could negatively affect a much larger number of customers.
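
To illustrate the human-in-the-loop safeguard, here is a minimal sketch of a routing gate that never freezes an account automatically: above a review threshold, the case goes to a human analyst. The threshold value and type names are assumptions for the example, not a recommended calibration.

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        ALLOW = "allow"
        HUMAN_REVIEW = "human_review"

    @dataclass
    class Transaction:
        tx_id: str
        fraud_score: float  # model output in [0, 1]

    # Hypothetical threshold; in practice it would be calibrated and
    # governed under the firm's model risk framework.
    REVIEW_THRESHOLD = 0.8

    def route(tx: Transaction) -> Action:
        """High-risk transactions are escalated to a human, never auto-frozen."""
        if tx.fraud_score >= REVIEW_THRESHOLD:
            return Action.HUMAN_REVIEW
        return Action.ALLOW

    print(route(Transaction(tx_id="t-001", fraud_score=0.93)))  # Action.HUMAN_REVIEW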

Faulty AI models that provide inappropriate financial advice or deny payments and services to legitimate customers expose firms to regulatory sanctions. This can in turn lead to a loss of credibility amongst target customers, which may ultimately affect a firm’s financial soundness.

In addition to the speed and scale at which AI models are adopted across customer-facing and back-office functions, AI deployment adds a further layer of complexity to operational risk scenarios. Because AI models may be used in interconnected ways, are often updated frequently, and are deployed across large legacy systems, risk event identification and root cause analysis become more complex and lengthier processes.

Malfunctioning and/or unfair AI models can harm customers, firms, and the integrity of the wider market

AI RISK MANAGEMENT SHOULD BE CONSIDERED HOLISTICALLY

Just as a company’s data is not solely the responsibility of the Data Protection Officer (DPO), AI is not just a technical problem falling exclusively under CTO/CISO oversight.

Firms in the financial services industry are used to the three lines of defence (3LoD) model. The 1st line is represented by the business areas, the 2nd line by independent risk and compliance functions, and the 3rd line by Internal Audit.

When it comes to AI risk and the allocation of responsibilities across the 3LoD, we recommend firms consider:

  • Increasing AI and data science skills across the 1st and 2nd lines. A lack of technical expertise will impede the 1st line’s live monitoring and the 2nd line’s ability to quality-assure and challenge.

  • Ongoing training and collaboration between the 1st and 2nd lines with respect to the models adopted by the firm and developments in industry standards.

  • Depending on the firm’s scale, a clear mapping of the lines of accountability across functional senior managers and Board level (e.g., a centralised model with a single Chief AI Officer vs a decentralised model where AI responsibility is allocated across the CTO, CDO, and Head of Risk/Compliance).

In the context of the UK regulatory regime, the Senior Managers and Certification Regime (SM&CR) provides a useful concept of “reasonable steps” that Senior Managers should take in order to discharge their risk management responsibilities. For AI risk management, whether via a centralised or a decentralised responsibility model, reasonable steps should include:

  • AI governance policy

  • AI ethics framework

  • AI risk management training

  • AI control framework

  • Clear allocation of responsibility and oversight for AI
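
To make the last two steps concrete, here is a minimal sketch of an entry in a firm-wide AI inventory that records ownership across the lines of defence. Every field name, role, and value is a hypothetical example, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in a firm-wide AI inventory (all fields illustrative)."""
        name: str
        business_use: str          # e.g. "card fraud detection"
        first_line_owner: str      # accountable business owner (1st line)
        second_line_reviewer: str  # risk/compliance challenger (2nd line)
        senior_manager: str        # SM&CR responsibility holder
        risk_tier: str             # e.g. "high" for customer-facing decisions
        controls: list[str] = field(default_factory=list)

    register = [
        AISystemRecord(
            name="fraud-model-v3",
            business_use="card fraud detection",
            first_line_owner="Head of Payments",
            second_line_reviewer="Model Risk",
            senior_manager="Chief Data Officer",
            risk_tier="high",
            controls=["human review of account freezes", "quarterly bias audit"],
        ),
    ]

A register like this makes the allocation of responsibility auditable: for any model in production, the 2nd line and Internal Audit can see who owns it, who challenges it, and which controls apply.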

At Braithwate and Holistic AI, we combine deep expertise in financial services risk management with AI risk management and auditing. Together we can help your firm:

  • Assess the impact of AI risk on your business operations

  • Design an inventory of AI-powered applications (internal and outsourced)

  • Assess the adequacy of your current governance framework for AI

  • Review the effectiveness of your existing risk and control framework to address novel AI risks

  • Design and implement new or enhanced policies and procedures to manage AI risk

  • Provide AI ethics and risk management training

Please reach out to Anna (anna.nicolis@braithwate.com) or Adriano (adriano.koshiyama@holisticai.com) if you want to chat about any of the above.

James Nicholls

Managing Director at Braithwate - specialist advisors in financial services. We help our clients develop effective strategies, launch new business models, manage risk, comply with regulatory requirements and execute transformational change initiatives. Our expert consultants - based in New York, London and San Francisco - serve both the traditional financial services sector (banks, broker-dealers, insurance companies) and FinTech and RegTech firms.

https://www.braithwate.com