Singapore’s Generative AI Model Governance Framework: impacts on the Financial Services Industry

Model AI Governance Framework for Generative AI  

Singapore’s Model AI Governance Framework for Generative AI (the “Framework”) stems from a collaborative initiative between the Infocomm Media Development Authority (“IMDA”) and the AI Verify Foundation (“AIVF”). Comments, particularly from the international community, are welcome until 15 March 2024. The final version is expected in mid-2024.

Initial Framework

Singapore launched its first Model AI Governance Framework, covering traditional AI, in 2019 and revised it in 2020. Updating the framework again is necessary amid emerging concerns and threat scenarios surrounding the use and development of generative AI. The main aim of the new Framework is to promote public understanding and trust in these technologies, enabling end-users to access generative AI confidently and safely.

Traditional AI models aid decision-making by making predictions or recommendations based on pattern recognition. Specific tasks are solved with predefined rules (e.g. differentiating between images of horses and donkeys).

In the financial services industry, such models are used for predictive analytics (e.g. analysing credit histories to predict loan defaults, or using market data to recommend investment strategies).

Generative AI models focus on pattern creation and are able to produce new and original content (such as literature, audio, chat responses and designs) through learned patterns (e.g. creating new images of horses).

In the financial services industry, such models can support diverse applications (e.g. from delivering personalised financial planning to powering chatbots and virtual assistants that handle customer enquiries).

Although generative AI has the potential to transform all areas of our lives, its adoption introduces a series of new risks, including plagiarism and misinformation.

The new Framework addresses some of the common risks specified in the existing Model Governance Framework, and also identifies additional ones such as mistakes and “hallucinations”, privacy and confidentiality, and copyright infringement.

Consultation process

Ensuring trusted AI ecosystems is vital as Singapore continues to advance its digital economy. The consultation process of the Framework adopts a novel approach by soliciting comments from the international community.

The potential misuse of AI to cause harm (e.g. scams, cyberattack and misinformation) is a global problem, and achieving global consensus on policy approaches has been identified as a challenge. The Framework therefore seeks to foster greater collaboration by sharing ideas and practical pathways with the ultimate aim of providing a common baseline for understanding among different jurisdictions.

IMDA also acknowledges that some ideas set out in the Framework are not unique. With that in mind, the Framework is promoted as a space to “work closely with a coalition of like-minded jurisdictions, industry partners and researchers towards a common global platform and better governance frameworks for generative AI.”

Risk mitigation v market innovation

Although there has been pressure to regulate AI, Singapore currently does not intend to implement AI regulation. Its preferred approach is to first develop technical tools, standards and technology to support regulatory implementation.

The new Framework is therefore seen as a balance between risk mitigation and market innovation.

Guidelines based AI governance in context

Globally, two key regulatory approaches to AI governance are emerging: some jurisdictions (e.g. the EU, Canada and China) mandate strict standards, while others favour a more flexible, guideline-based strategy (e.g. Singapore, the UK and Japan)[1].

Singapore's Framework, which is sector-agnostic and guideline-based, exemplifies this second approach. It aims to harmonise the management of Gen-AI risks whilst maintaining an environment that fosters technological innovation.

Summary of the proposed Model AI Governance Framework for Generative AI

How will Singapore’s Model AI Governance framework impact the FS industry?

The Framework's principles largely reflect existing norms in the financial services (“FS”) sector. Accountability, data protection, and trusted risk management practices such as incident reporting, testing, and security are not new; they are embedded in the sector's regulatory fabric and operational risk management practices. Yet, these principles gain fresh relevance in the context of AI and Gen-AI in particular.

AI influences the scale and speed at which existing operational risks may materialise. For example, an AI-driven trading platform might execute thousands of transactions in seconds based on a flawed algorithm, potentially leading to substantial financial losses before a human operator can intervene. Similarly, a robo-adviser could, due to a programming error, give suboptimal portfolio allocations to thousands of customers simultaneously, affecting their investments adversely and eroding trust in automated financial advice systems.

The complexity of these use cases escalates when AI models are sourced from third parties. Where does accountability lie in such scenarios, and how should the responsibility for implementing controls be apportioned? Is there a case for shared accountability between technology companies providing the AI model and the financial services firms that use them?

All firms, from emerging fintechs to established banks, will need to address such issues before they integrate AI more deeply into their strategic operations.

Interplay between existing regulatory outcomes, industry standards and new AI guidance

AI-related risk considerations impact the outcomes linked to existing financial services regulatory frameworks. For instance, regarding the principle of accountability, the industry benefits from frameworks such as the UK’s Senior Managers and Certification Regime and Singapore’s Guidelines on Individual Accountability and Conduct[2][3].

The respective regimes require Senior Managers at financial institutions to take “reasonable steps” to manage the most significant risks associated with their respective business areas, and hold them personally accountable for those risks.

However, the practical application of such principles is not straightforward. It is already an accepted principle that responsibility for company data does not rest solely with a CISO or CTO. Similarly, accountability and oversight for AI should not sit with a single Senior Manager.

For firms navigating the challenges of AI governance, it is beneficial to consider a collaborative approach that draws on diverse perspectives from across the organisation. Prescribing a one-size-fits-all, top-down solution is not desirable. The preferred approach is to bring departments together by allocating shared risk management responsibilities.

Fostering a deeper understanding of AI throughout the company is also desirable. As teams become more knowledgeable about AI's implications for business processes, they can contribute more effectively to the company’s risk management processes and adapt to the evolving regulatory landscape.

Such growth in AI awareness is less about strict compliance and more about nurturing a risk-aware culture that can integrate AI in ways that support the firm's broader objectives, whilst mitigating negative consequences.

Conducting a thorough assessment is also vital. By assessing how principles from frameworks like Singapore’s Gen-AI governance could affect their operations, firms can evaluate the adequacy of their existing governance structures vis-à-vis the deployment of AI models. Such assessments will help address novel AI risks such as explainability and data bias. Understanding how AI impacts traditional operational risks, including data security and technical robustness, is a further benefit.

AI oversight: Key questions

Below we set out a list of recommended initial questions that Board members and teams more broadly should consider to evaluate their firm’s readiness to address increased operational and regulatory risk related to AI.

●  Due Diligence: how well do our current due diligence processes evaluate the risks and ethical considerations associated with AI, and are these processes in alignment with our strategic objectives and risk appetite?

●  Regulatory Compliance: are we prepared to meet the regulatory expectations that govern the use of AI in our sector, and do we have the capability to adapt to regulatory changes in a timely manner?

●  Risk Management: does our risk management infrastructure adequately identify, assess, and mitigate the potential risks introduced by AI, including operational, reputational, and cybersecurity risks?

●  Accountability: have we established clear governance structures that delineate accountability for AI-related decisions, be it in-house or outsourced models?

●  Transparency: how are we engaging with stakeholders, including customers, regulators, and partners, to ensure transparency and accountability in our use of AI, and do our communication strategies effectively address their concerns and expectations?

How can we help?

We strive to combine our leading solutions and robust partner ecosystems to help business leaders to navigate the increasing complexities of AI governance.

Contact us to find out how we can help you.

About the Authors

Claire Wilson is a Partner at HM, based in Singapore. She provides support to innovative technology firms and FinTechs on governance, compliance and regulatory strategy. Contact Claire at claire.wilson@hmstrategy.com

Anna Nicolis is a Director at Braithwate, based in London, UK, specialising in risk management and regulatory strategy. Contact Anna at anna.nicolis@braithwate.com

The authors wish to thank Michelle Goh and Aletta Rizni for their contributions to this article.

Sources

[1] https://iapp.org/media/pdf/resource_center/global_ai_legislation_tracker.pdf

[2] https://www.fca.org.uk/firms/senior-managers-certification-regime

[3] https://www.mas.gov.sg/regulation/guidelines/guidelines-on-individual-accountability-and-conduct

