Artificial Intelligence in healthcare, qualification of devices and obligations of players | Fieldfisher




Artificial Intelligence (AI) has been used in the healthcare sector for many years, particularly in radiology, but its use cases and applications have multiplied exponentially in recent months, driven in particular by generative AI. The ability of AI systems to process, analyse and interpret huge amounts of data is crucial in this sector, which generates ever more complex data.

It is in this context that, on March 13, 2024, the European Parliament adopted the Artificial Intelligence Regulation (AI Act) establishing a uniform legal framework for the development, marketing, and use of AI systems in line with the values of the European Union.

Ambitions and scope of the AI Act

The text aims to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights against the harmful effects of AI systems in the Union. It requires all stakeholders along the AI value chain to remain in control of the AI system, whether during development, use, or the interpretation of results, while guaranteeing transparency about the capabilities and limitations of the AI system.

The broad definition of AI systems and the graduation of applicable rules and requirements, depending on the nature and level of risk of the systems, ensure both legal certainty and the flexibility needed to accommodate technological developments. The Act covers all relevant operators as well as the potential use of AI systems and/or their results on EU territory or with effects in the EU, thus ensuring international convergence.

The AI Act will come into force 20 days after its publication, and will then apply progressively, depending on the level of risk: after 6 months for the provisions relating to prohibited practices, after 12 months for those concerning general-purpose AI models, and after 36 months for the requirements applicable to high-risk systems embedded in products covered by existing EU harmonisation legislation, such as medical devices.
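The staggered schedule above can be made concrete with a small sketch. The entry-into-force date used below is factual (the AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024); the helper function and variable names are illustrative only.

```python
from datetime import date

def months_after(start: date, months: int) -> date:
    """Naive month arithmetic; sufficient when the start day exists in the target month."""
    years, month_index = divmod(start.month - 1 + months, 12)
    return start.replace(year=start.year + years, month=month_index + 1)

# The AI Act entered into force on 1 August 2024, 20 days after OJ publication.
ENTRY_INTO_FORCE = date(2024, 8, 1)

milestones = {
    "prohibited practices": months_after(ENTRY_INTO_FORCE, 6),          # 2025-02-01
    "general-purpose AI models": months_after(ENTRY_INTO_FORCE, 12),    # 2025-08-01
    "high-risk systems in regulated products": months_after(ENTRY_INTO_FORCE, 36),  # 2027-08-01
}
```

For a healthcare manufacturer whose AI system is a class IIa (or higher) medical device, the 36-month milestone is the one that matters.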

Qualification of healthcare AI systems

High-risk AI systems can only be marketed, commissioned, or used if they meet specific requirements.

AI systems intended to participate in the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or mitigation of a disease constitute high-risk AI.

Indeed, AI systems intended for use as medical devices of class IIa or higher constitute high-risk AI.

Similarly, high-risk AI systems include those intended to evaluate emergency calls made by natural persons, or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, as well as emergency healthcare patient triage systems. Excepted are systems that do not materially influence the outcome of decision-making, namely systems designed: (i) to perform a narrow procedural task, such as transforming unstructured data into structured data, categorising incoming documents, or detecting duplicates among a large number of applications; (ii) to improve the result of a previously completed human activity, the AI system adding only an additional layer on top of that activity (for example, an AI system designed to improve the way a document is written, or to give it an academic style); (iii) to detect decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing, absent proper human review, the previously completed human assessment; or (iv) to perform a preparatory task to an assessment (the risk is reduced, as the system only prepares a subsequent human assessment and is not intended to replace or influence it without proper human review).
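The exception test above is essentially a derogation check: an emergency triage or dispatch system remains high-risk unless at least one of the four conditions applies. A minimal sketch of that logic, assuming hypothetical field names that summarise conditions (i)-(iv) (they do not appear in the Regulation itself):

```python
from dataclasses import dataclass

@dataclass
class TriageSystemProfile:
    """Illustrative flags for the four derogation conditions (names are our own)."""
    narrow_procedural_task: bool = False            # (i) e.g. structuring data, deduplication
    improves_completed_human_activity: bool = False  # (ii) e.g. polishing an already-drafted document
    detects_patterns_for_human_review: bool = False  # (iii) flags deviations without replacing the human assessment
    preparatory_task_only: bool = False             # (iv) only prepares a subsequent human assessment

def materially_influences_decisions(profile: TriageSystemProfile) -> bool:
    """True if no derogation applies, i.e. the system stays high-risk.
    The provider must still document the assessment before placing the
    system on the market (see the traceability obligation below)."""
    derogations = (
        profile.narrow_procedural_task,
        profile.improves_completed_human_activity,
        profile.detects_patterns_for_human_review,
        profile.preparatory_task_only,
    )
    return not any(derogations)
```

For example, a deduplication tool used on incoming emergency-call records (`narrow_procedural_task=True`) would fall outside the high-risk category, while a system with no derogation flagged would remain high-risk.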

In order to ensure traceability and transparency, a provider who considers an AI system not to be high-risk, on the basis of these conditions, is required to document the assessment before the system is placed on the market or put into service, and to provide this documentation to the national competent authorities upon request.

The Commission will develop guidelines listing practical examples of high-risk and non-high-risk use cases for AI systems.

High-risk AI and requirements for providers and deployers

High-risk AI systems are subject to compliance with specific requirements, particularly in terms of risk management, quality management, traceability, conformity assessment, quality and suitability of the datasets used, technical documentation and record-keeping, transparency and provision of information to deployers, human control, cooperation with competent authorities, as well as robustness, accuracy, and cybersecurity. These requirements are necessary to effectively mitigate the risks to health, safety, and fundamental rights.

While providers are primarily concerned by these requirements, so are all stakeholders involved in the AI system value chain, including the “deployer”, defined as a natural or legal person, public authority, agency or other body using an AI system under its authority (except where the system is used in the course of a personal, non-professional activity).

Any deployer of a high-risk AI system, whether public or private, providing healthcare - a healthcare institution - must perform an impact assessment relating to the use of the system before putting it into service, in order to (i) identify the specific risks to people's rights and (ii) determine the measures to be taken should these risks materialize. The impact assessment must identify the relevant processes of the deployer in which the high-risk AI system will be used in line with its intended purpose, indicate the period of time within, and the frequency with which, the high-risk AI system is intended to be used, and describe the categories of natural persons and groups likely to be affected by its use in the specific context.

In addition, deployers must take appropriate technical and organizational measures to ensure that systems are used in accordance with the instructions for use, to ensure that users are able to exercise human control, to ensure that input data is relevant and representative in view of the system's intended purpose, to monitor its operation, to inform the provider of any risks, and to ensure that automatically generated logs are kept.
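The required contents of such an impact assessment lend themselves to a simple checklist. The sketch below is a hypothetical data structure for a deployer's internal compliance tooling; the field names summarise the elements listed above and are not taken from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Illustrative checklist of the elements a deployer's impact assessment must cover."""
    processes_using_system: list[str]     # deployer processes in which the system is used
    period_and_frequency_of_use: str      # intended period and frequency of use
    affected_person_categories: list[str]  # natural persons and groups likely to be affected
    specific_risks_identified: list[str]  # specific risks to the rights of those persons
    mitigation_measures: list[str]        # measures to be taken should the risks materialize

    def is_complete(self) -> bool:
        """A crude completeness check: every element must be filled in."""
        return all([
            self.processes_using_system,
            self.period_and_frequency_of_use,
            self.affected_person_categories,
            self.specific_risks_identified,
            self.mitigation_measures,
        ])
```

A hospital deploying an AI triage system would, for instance, list "emergency department patient triage" as a process, "patients presenting at the emergency department" as an affected category, and "mis-prioritisation of a critical patient" among the identified risks.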

Finally, the AI Act establishes human control, to be implemented by the user throughout the period of use of a high-risk AI system, thus ensuring end-to-end control of the AI system.

These are all guarantees of human-centric technology, for the benefit of the healthcare system, users and, above all, patients.

Article also published on SIH Solutions.