The UK government has now published its much-anticipated White Paper detailing its new approach to regulating Artificial Intelligence (AI) in the UK, the aim of which is ‘to guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology’.
Interestingly, the government has neither proposed introducing a new regulator nor decided to introduce any associated legislation. Instead, the White Paper sets out the following five principles that will empower existing regulators and should be considered alongside existing laws to 'best facilitate the safe and innovative use of AI':
- Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
- Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI
- Fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
- Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
The approach is clearly pro-innovation, but some commentators have criticised the government’s ‘light-touch’ approach when compared with the measures being taken by the EU. For example, the European Commission’s proposal for an Artificial Intelligence Act (the world’s first AI regulation) takes a risk-based approach that includes up-front prohibitions on certain uses of AI: a small number of use cases are considered to pose too great a risk to people’s safety or EU citizens’ fundamental rights and are banned outright, high-risk systems are subject to strict obligations, and self-regulation is proposed for low-risk uses. Examples of prohibited practices include government ‘social scoring’ and real-time biometric identification systems in public spaces, whereas low-risk uses include spam filters and AI-enabled video games. The regulation is expected to come into force this year, alongside the EU’s proposed liability regime governing claims for damage caused by AI systems.
Such a contrasting approach has led many to question whether the UK’s lack of detailed, unified regulation could cause issues. Indeed, how might people’s personal data be protected at an international level if different laws and rules are being introduced in the EU, as well as in China and the US? Some argue, however, that the UK’s trail-blazing approach has its benefits: it leaves greater scope for flexibility and avoids over-legislating. It does, though, put the UK at odds with what other countries are doing in the AI sphere.
In any event, the government has launched a consultation seeking views on the proposals set out in the White Paper, which is open until 21 June 2023.
Fieldfisher recognises the huge importance of this rapidly growing area and will continue to monitor developments and keep clients updated. In the meantime, please do get in touch if you have any questions about AI and its impact on IP; we will also be publishing a blog on this shortly. Please also see our previous blog on the UKIPO's response to the last consultation on IP and AI and how the UK's copyright and patent system should deal with the emerging technology - UK policy change for AI and IP - and a more recent update on the latest proposals for the text and data mining exception - Proposed new copyright law exception on text and data mining scrapped.