
AI systems now regulated


The AI Act, the European Union's legislation on artificial intelligence, has reached political agreement after 38 hours of discussions. This caps four years of work, four years of trying to strike a balance between controlling the risks associated with AI technologies and fostering innovation.

Right up to the end of the discussions, opponents voiced their fear of regulating in Europe what would not be regulated, or would be less regulated, in the rest of the world, thereby stifling innovation and/or creating a distortion of competition unfavourable to Europe.


In the field of healthcare, AI systems constitute high-risk AI - on top of their qualification as medical devices - and will therefore be subject to specific requirements concerning the quality of the datasets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity.


1. Documentation and data governance


In concrete terms, providers will have to keep records and make available technical documentation containing the information needed to assess the AI system's compliance with the relevant requirements. In particular, this information will have to cover the general characteristics, capabilities and limitations of the system; the algorithms, data and the training, testing and validation processes used; and the risk management system put in place. The technical documentation must be kept up to date.


Regarding access to data and documentation, market surveillance authorities will need to have full access to the training, validation and test datasets used by the provider, including through application programming interfaces (APIs) or other appropriate technical means and tools to grant remote access. Where necessary to assess the compliance of the high-risk AI system, and upon reasoned request, market surveillance authorities will even be granted access to the source code of the AI system.


2. Transparency and human oversight


For high-risk AI systems, providers will be required to be transparent towards users, so that users can interpret the results produced by the system and use them appropriately. This justifies the requirement that high-risk AI systems be accompanied by relevant documentation and instructions for use, including concise and clear information on, in particular, potential risks, the degree of accuracy and the criteria used to measure accuracy.


This information is essential to enable users to exercise human oversight.


These various principles confirm, at European level, the French provisions resulting from the Bioethics Act of 2 August 2021.


3. Technical robustness and cybersecurity


High-risk AI systems will have to produce results of consistent quality throughout their lifecycle and ensure an appropriate level of accuracy, robustness and cybersecurity in line with the state of the art. 


Technical robustness is a key requirement for high-risk AI systems, which will need to be resilient against risks associated with system limitations (e.g. errors, faults, inconsistencies, unexpected situations) as well as malicious actions that could compromise the safety of the AI system and lead to harmful or undesirable behaviour.


Similarly, cybersecurity will play a crucial role in ensuring that AI systems are resilient against attempts by malicious third parties exploiting system vulnerabilities to hijack their use, behaviour or performance, or to compromise their safety properties. Cyber-attacks against AI systems can leverage AI-specific assets, such as training datasets (data poisoning) or trained models, or exploit vulnerabilities in the AI system's digital assets or in the underlying technology infrastructure.


4. Timetable and penalties


We can be confident that the requirements already applicable to medical devices, together with those defined in France for the security of digital health services, will give AI health providers a head start on compliance, which must now be achieved within 24 months for high-risk AI systems. Infringements may be subject to administrative fines of up to EUR 30,000,000 or up to 6% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher.

 

Article also published on DSIH.
