UK Government White Paper on AI: “A pro-innovation approach to AI regulation”

On 29 March 2023, the UK Government released a White Paper entitled "A pro-innovation approach to AI regulation", which contrasts sharply with the EU AI Act currently being discussed in the European Parliament. While the EU plans to enforce horizontal, cross-sector legislation on AI tools based on their risk classification, the UK Government intends to take a more "flexible" approach. The UK has established five principles for managing AI risks (see the Principle-based framework section below) but will not enforce these principles through legislation. Instead, existing UK regulators will apply the principles on a non-statutory basis within their remits, drawing on their domain-specific expertise.

This new UK approach is an example of outcome-based regulation: it grants sectoral regulators the freedom to interpret and apply the principles without overarching prescriptive requirements, such as the demanding conformity assessment requirements for high-risk AI applications set out in the EU AI Act. The UK Government acknowledges that AI is a general-purpose technology and that its risks must be assessed in proportion to the context in which it is used. For example, a large language model powering an AI chatbot that handles customer service requests for a clothing retailer poses different risks from an interactive AI chatbot used for medical diagnosis.

The principles outline the fundamental aspects of responsible AI design, development and use, and will assist businesses in making informed decisions. Regulators will take the lead in implementing the framework by providing guidance on best practices for adhering to these principles. Regulators will be expected to apply the principles proportionately to address AI risks within their jurisdictions, in accordance with existing laws and regulations. In this way, the principles will supplement existing regulation, promote clarity and reduce friction for businesses operating across regulatory domains.

This approach utilises the domain-specific expertise of regulators to customise the implementation of the principles to the specific context in which AI is used. During the initial implementation period, the Government will collaborate with regulators to identify any obstacles to the proportionate application of the principles and to evaluate whether the non-statutory framework is achieving the desired outcomes.

The Government has opened a public consultation on the proposals set out in the White Paper and its AI regulation impact assessment. The consultation closes on 21 June 2023.

1) AI risks

The White Paper acknowledges the potential risks associated with AI and its ability to amplify discrimination, threaten privacy, and harm our fundamental liberties.

  • Risks to human rights:

    • Generative AI technology can produce deepfake pornographic content, which could harm the reputation, dignity and relationships of the person depicted.

  • Risks to safety:

    • An AI assistant that uses LLM technology might recommend a dangerous activity without taking into account the context of the website where the activity was described, potentially causing physical harm to the user.

  • Risks to fairness:

    • When an AI tool that assesses the creditworthiness of loan applicants is trained on incomplete or biased data, its lending decisions may be influenced by applicants' race or gender, leading to unfair treatment.

  • Risks to privacy and agency:

    • Smart home devices that collect and store data, including conversations, can build a detailed profile of an individual's personal life, posing a risk to their privacy and agency. If the data is accessed by multiple parties, the risks are further compounded.

  • Risks to societal wellbeing:

    • AI-generated disinformation can undermine people's trust in democratic institutions and processes and limit access to reliable information, posing a risk to societal wellbeing.



2) Principle-based framework

The White Paper outlines the following five principles, which UK regulators are to take into account in order to manage AI risks within their supervisory remits.

  • Safety, security and robustness

    • AI systems should be reliable, secure and safe throughout their entire lifecycle, and the risks associated with their use should be identified, evaluated and controlled continuously.

    • Regulators may need to enforce certain measures on the entities they regulate to ensure AI systems function correctly and are technically secure and reliable throughout their lifecycle.

  • Appropriate transparency and explainability

    • AI systems should be appropriately transparent and explainable. Transparency means providing relevant parties with appropriate information about the AI system, including when, how and for what purposes it is being used. Explainability means that relevant parties can access, interpret and understand the decision-making processes of the system. The degree of transparency and explainability required should be proportionate to the risks posed by the AI system.

    • Regulators may need to encourage and support relevant actors throughout the AI lifecycle to implement appropriate transparency measures, such as product labelling, to ensure parties directly affected by the use of the AI system can enforce their rights.

  • Fairness

    • AI systems should not violate the legal rights of individuals or organisations, discriminate unfairly against individuals or lead to unjust market outcomes. Actors involved in the AI lifecycle should determine standards of fairness appropriate to the system's specific purpose, its outcomes and the applicable laws.

    • Regulators may need to create and publish guidelines and examples of fairness that are applicable to AI systems within their regulatory jurisdiction and develop instructions that take into account pertinent laws, regulations, technical standards and assurance techniques.

  • Accountability and governance

    • Effective governance measures must be implemented to oversee the supply and use of AI systems, with unambiguous accountability established throughout the AI lifecycle.

    • Regulators will be expected to explore strategies to guarantee that clear standards for regulatory compliance and best practices are placed on relevant actors in the AI supply chain. Additionally, they may need to foster the implementation of governance processes that ensure these standards are consistently met.

  • Contestability and redress

    • Where appropriate, users, impacted third parties and actors in the AI lifecycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.

    • Regulators will be expected to clarify existing routes to contestability and redress, and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate.

In implementing the new framework, the Government expects that regulators will:
  • Evaluate the principles and apply them to AI use cases that fall within their jurisdiction.

  • Provide guidance on how the principles interact with existing laws and regulations to help businesses comply with them. This guidance should also include examples of what compliance looks like.

  • Collaborate with other regulators to produce clear and consistent guidance for businesses operating in multiple regulatory areas, and issue joint guidance where necessary.

  • Monitor and evaluate their own implementation of the framework and the effectiveness of regulating AI within their jurisdiction.

3) Central monitoring functions

The Government plans to establish central support functions to oversee the implementation of its principle-based approach. These functions will have the following responsibilities:

  • Evaluating and monitoring the effectiveness of the regulatory framework and the implementation of the principles, while ensuring that innovation is supported. This will allow the Government to respond to changes in the capabilities of AI and the state of the art by adapting the framework as necessary.

  • Assessing and monitoring the risks posed by AI across the economy.

  • Conducting horizon scanning and gap analysis by working with industry to identify emerging trends in AI technology and to inform a coordinated response.

  • Supporting testbed and sandbox initiatives to help AI innovators bring new technologies to market.

  • Providing education and awareness to businesses and citizens to help them understand the ongoing development of the framework and their role in it.

  • Promoting interoperability with international regulatory frameworks.

What’s next

Over the next 12 months, we expect UK regulators to publish practical guidance, tools and resources, such as risk assessment templates, setting out how they plan to implement the five-principles framework in their respective sectors. In addition, legislative proposals may be introduced for higher-risk areas with systemic implications for consumers and society as a whole, following an initial assessment of how the principle- and outcome-based approach to managing AI risks performs.

Once the EU AI Act takes effect across Europe, UK businesses offering services to EU residents will need to comply with its strict regulatory standards. It therefore remains to be seen to what extent the UK's principle-based approach will become subject to a degree of standardisation of rules through the "Brussels effect".
