CLS Blue Sky Blog

Skadden Discusses the UK ICO Strategy on AI Governance

Rather than specifically regulating artificial intelligence (AI), the UK government has opted to rely on the existing web of laws and regulations applying to technology across a spectrum of sectors in its jurisdiction. But with this pro-innovation, principles-based approach come questions that remain unanswered. Many relate to the application of UK data privacy laws and how AI technology compliance can truly be achieved.

On 30 April 2024, the UK’s Information Commissioner’s Office (ICO) published Regulating AI: the ICO’s Strategic Approach[1] (the Strategy), in which it details how the ICO is driving forward the principles in the 2023 AI regulation white paper and the government’s 2024 guidance on implementing those principles.

It is clear that the ICO wants to remain a pragmatic regulator, acknowledging that “data protection law is risk-based” and that risk should be “mitigated” but not “necessarily completely removed.” But whether that approach causes more uncertainty than comfort for industry participants remains an open question.

The Strategy

The Strategy’s most important takeaways include:

The ICO Is Potentially a De Facto AI Regulator

Unlike the European Union, the UK has not deemed it necessary to appoint an independent regulator to oversee all AI. However, in the Strategy, the ICO notes that many of the principles identified in the AI regulation white paper align with established data protection principles, e.g., the transparency, fairness and accountability principles.

Data protection regulation also applies at all points of technology development, from privacy by design through to deployment and use. Therefore, the ICO is well placed to leverage the privacy principles to oversee unique challenges posed by AI technologies at every stage of the AI life cycle, and thereby potentially be a de facto AI regulator.

The General Approach Remains Pragmatic and Risk-Focused

In the Strategy, the ICO recognises the huge potential benefits of AI but notes the associated inherent risks, many of which derive from how the data — specifically, personal data — is used in the development and deployment of AI systems.

That being said, the ICO makes clear in the Strategy that it is not seeking the complete elimination of risk in order for AI to comply with UK data protection laws. It states, for example, that “[d]ata protection law is risk-based” and “[w]e require risks to be mitigated and managed … but not necessarily completely removed.” To assist businesses in navigating potential risks and identifying mitigants, the ICO references its AI and Data Protection Risk Toolkit.

Many privacy leaders have been grappling with questions around topics such as transparency, purpose limitation and grounds to process in relation to the use of personal data in the development and use of AI. The ICO does not give any specific answers to these questions. It references current and future consultations, including a generative AI consultation series, but the tone of the Strategy suggests that new AI-specific privacy rules are not on the horizon.

The ICO states that “where organisations identify high risk to the rights and freedoms of individuals that … cannot [be] mitigate[d] sufficiently they are required to consult the ICO.” However, “high risk” to “rights and freedoms” seems to be a high bar. The Strategy suggests that the ICO will continue to be a pragmatic, risk-focused regulator, and that if the usual Data Protection Impact Assessments are conducted and appropriate safeguards put in place, businesses will be free to experiment with and develop AI.

Enforcement Action Will Be Taken, Where Necessary

Despite the above, the ICO has made it clear that it will act to enforce data protection laws, and it has made particular reference to recent fines and an enforcement notice. The enforcement examples it has cited relate to the use of facial recognition technology[2] and the protection of data relating to children — two particularly sensitive areas in data privacy.

The ICO’s actions make clear that while a pro-innovation, pragmatic approach is supported, certain principles will not be ignored.

AI Is a Key Focus Area for the ICO

AI is one of the ICO’s key focus areas for 2024-25, alongside children’s privacy, ad-tech and online tracking. The Strategy highlights some of the initiatives that the ICO has been undertaking to ensure responsible AI adoption and data protection compliance in the UK, and to enhance its understanding of the technology.

Businesses can expect more consultations, consensual audits[3] and invitations to sandbox projects over the coming months and years. Businesses engaging in higher-risk forms of AI, such as applications involving biometric data, are also more likely to hear from the ICO, potentially with an increasing number of requests for information as the ICO learns how the technology works.


Partnerships and Open Dialogue With Other Regulators Are Essential

In the Strategy, the ICO has emphasised that active engagement and collaboration among regulators are crucial to addressing AI-related challenges holistically. For example, the ICO’s work as part of the Digital Regulation Cooperation Forum (DRCF) brings together the ICO, the Competition and Markets Authority (CMA), Ofcom (the UK’s communications regulator) and the Financial Conduct Authority (FCA).

This forum seeks to establish a unified approach to digital regulation, fostering coherence in oversight and enforcement. Within the DRCF, AI holds a prominent position on the agenda, and collaborative efforts have resulted in the publication of joint perspectives on AI benefits and harms, discussions on emergent technologies like generative AI, and research into the third-party auditing landscape.

The establishment of the “AI and Digital Hub” further enhances this cooperation, providing a platform for innovators to navigate regulatory complexities across multiple sectors. The ICO also spearheads the Regulators and AI Working Group, a platform fostering dialogue among various regulators and public authorities.

The collaborative approach adopted in the UK is notable within the European regulatory landscape. Unlike many jurisdictions where regulators operate independently, the UK stands out for fostering partnerships among various regulatory bodies. This cross-sectoral approach to AI allows for the sharing of expertise and responsibilities across different bodies, facilitating a more comprehensive and coordinated oversight of AI technologies.

Next Steps

The UK government will now review the Strategy along with strategies provided by other regulators[4] to determine whether stand-alone or amended AI legislation is required in the UK. Given the responses of regulators to date (see, in particular, our 2 May 2024 client alert “UK Regulators Publish Approaches to AI Regulation in Financial Services”), it seems unlikely the UK government will implement far-reaching AI regulation anytime soon, with perhaps the exception of some specific high-risk and potentially harmful use cases.

Therefore, as the legal framework, guidance and resourcing on AI in the UK develop, it is likely that the ICO will play a central role in setting out key principles, monitoring compliance and taking enforcement action as the de facto lead regulator for AI.

ENDNOTES

[1] The paper was in response to a 15 February 2024 request by the Secretary of State for Science, Innovation and Technology.

[2] See ICO news releases on actions announced on 23 February 2024 and 23 May 2022.

[3] In addition to audits specific to the UK’s General Data Protection Regulation (GDPR), the ICO is also conducting audits related to other legislation, such as the Privacy and Electronic Communications Regulations (PECR), the Network and Information Systems Regulations 2018 (NIS) and areas where information rights overlap with legislation such as the Digital Economy Act (DEA).

[4] See strategies by the Prudential Regulation Authority/Bank of England, the FCA, Ofcom and the CMA.

This post comes to us from Skadden, Arps, Slate, Meagher & Flom LLP. It is based on the firm’s memorandum, “The UK ICO Publishes Its Strategy on AI Governance,” dated May 13, 2024, and available here. Ken D. Kumayama, Alistair Ho, and Imad Mohammed Nazar contributed to the memorandum. 
