Pathways to AI Validation


Compliance Approaches for Using AI in the Regulated Pharmaceutical Environment

The use of artificial intelligence (AI) is playing an increasingly important role in the pharmaceutical industry: since 2015, investment in AI has increased 27-fold. This development affects not only drug research but also the production process. For example, AI systems are already being used to monitor production parameters or to take over one of the two checks required by the four-eyes principle in optical quality control.

However, this potential is accompanied by significant uncertainty in regulated environments, particularly with regard to the validation of computer-based systems. Traditional validation methods, such as the V-model, are no longer suitable for AI systems. Clear regulatory guidelines are either lacking or still in development. Both authorities and companies therefore face the challenge of finding new approaches for the reliable use of AI in regulated environments.

 

GxP Relevance Determines Regulatory Requirements

The applicable regulatory requirements depend on where and how AI solutions are used. The area of application of the planned AI solution (its intended use) should therefore be defined first by describing the specific challenges or problems that are to be solved through the use of ML/AI.

From a regulatory perspective, two points are crucial here: first, whether the process supported by the ML/AI system directly or indirectly influences the quality, safety or efficacy of medicinal products or medical devices and is therefore to be classified as GxP-critical; second, which control instances monitor the ML/AI system and which control instances replace or supplement it. ML/AI systems can automate or improve certain human monitoring functions, but they should not be seen as a substitute for human responsibility.
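
To make this classification tangible, the two questions can be captured in a simple decision aid. The following Python sketch is purely illustrative: the field names and the returned categories are assumptions and do not replace an assessment according to the applicable guidelines and the company's own quality management procedures.

from dataclasses import dataclass

@dataclass
class MlAiUseCase:
    # Illustrative description of a planned ML/AI application (intended use)
    name: str
    affects_product_quality: bool   # direct or indirect impact on quality, safety or efficacy
    human_control_instance: bool    # results are reviewed or approved by a human before they take effect

def assess_validation_obligation(use_case: MlAiUseCase) -> str:
    # Rough, illustrative classification; the real assessment must follow the
    # applicable guidelines and the company's own QM procedures
    if not use_case.affects_product_quality:
        return "not GxP-critical: no validation obligation, standard IT governance applies"
    if use_case.human_control_instance:
        return "GxP-critical, but secured by a human control instance"
    return "GxP-critical and autonomous: validation of the ML/AI system required"

# Example: AI-supported inspection whose rejects are confirmed by an operator
print(assess_validation_obligation(
    MlAiUseCase("visual inspection support",
                affects_product_quality=True,
                human_control_instance=True)))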

Depending on this classification, different regulatory requirements apply, for example with regard to the validation obligation. The following basic types can be distinguished (see Fig. 1):

 

Fig. 1: Scheme for assessing the validation obligation of AI/ML applications in the pharmaceutical environment

Securing ML/AI Systems in Regulated Environments

Today, the GxP environment contains almost exclusively systems that are secured by additional human control instances or downstream approval processes and are therefore not subject to validation. Conversely, due to a lack of guidelines, there is currently no standardized procedure for implementing AI solutions that require validation in GxP-relevant processes. In both cases, however, an approach that takes the following steps into account is recommended:

 

1. Defining Acceptance Criteria

Acceptance criteria form the basis for the validation process by providing objective and measurable criteria that can be used to assess whether the system works according to the specific requirements of the planned application and always delivers the same result under the same conditions. For AI systems that are intended to deliver non-deterministic results, the acceptance criterion cannot be fixed; it should be measurable yet dynamic (e.g. "for visual inspections, the AI must deliver a better result (false rejection/acceptance rate) than the classic system or the human"). In the context of AI validation, statistical considerations of "effectiveness and side effects" are increasingly taking the place of binary statements. This could lead to synergies between traditional drug research and AI validation.
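
As an illustration of such a dynamic acceptance criterion, the following sketch compares the error rates of an AI-based visual inspection with those of a baseline (the classic system or the human inspector). The metric names, threshold logic and example figures are assumptions for demonstration purposes only.

from dataclasses import dataclass

@dataclass
class InspectionMetrics:
    # False rejection rate (good units rejected) and false acceptance rate
    # (defective units accepted), e.g. measured on a labelled reference set
    false_rejection_rate: float
    false_acceptance_rate: float

def meets_acceptance_criterion(ai: InspectionMetrics, baseline: InspectionMetrics) -> bool:
    # Dynamic criterion: the AI must perform at least as well as the baseline
    # (classic system or human) on both error rates
    return (ai.false_rejection_rate <= baseline.false_rejection_rate
            and ai.false_acceptance_rate <= baseline.false_acceptance_rate)

# Purely illustrative figures
human = InspectionMetrics(false_rejection_rate=0.050, false_acceptance_rate=0.010)
ai = InspectionMetrics(false_rejection_rate=0.030, false_acceptance_rate=0.008)
print(meets_acceptance_criterion(ai, human))  # True: criterion fulfilled on this reference set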

2. Developing ML/AI SOPs

The creation of Standard Operating Procedures (SOPs) is essential for the safe operation of ML/AI systems. They provide binding descriptions of the necessary processes and define responsibilities and procedures, including the review of results and their documentation. The specific requirements and characteristics of the respective AI system must be taken into account and correspondingly adapted SOPs developed. The following definitions should be included (a possible machine-readable summary is sketched after the list):

  • Responsibilities for implementation and operation
    The SOPs should clearly define which persons or teams are responsible for the implementation and operation of the ML/AI systems. This includes the assignment of tasks and responsibilities to ensure that all aspects of the ML/AI systems are properly addressed.
  • Type and frequency of monitoring
    To ensure that ML/AI systems function properly and deliver the desired results, the type and frequency of monitoring must be defined. This may include regular review of algorithms, performance metrics and data integrity.
  • Risk classification and mitigation
    In addition, the SOPs should also include a risk classification of the ML/AI systems and define appropriate risk mitigation measures. This could include a classification of risks into different categories, such as risks to patient safety or data integrity. The basis for this is a detailed risk analysis (see next point).
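
A minimal sketch of how these SOP definitions (responsibilities, monitoring, risk classification) could be recorded in machine-readable form is shown below. All field names, intervals and example values are assumptions and would have to be derived from the company's own procedures.

from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"   # e.g. potential impact on patient safety or data integrity

@dataclass
class MonitoringPlan:
    # Illustrative, machine-readable summary of the SOP definitions for one ML/AI system
    system_name: str
    responsible_role: str        # who implements and operates the system
    review_interval_days: int    # frequency of monitoring / performance review
    monitored_metrics: list      # e.g. error rates, data-integrity checks
    risk_class: RiskClass
    mitigation_measures: list

plan = MonitoringPlan(
    system_name="AI-supported visual inspection",
    responsible_role="Process owner, supported by the data science team",
    review_interval_days=30,
    monitored_metrics=["false_rejection_rate", "false_acceptance_rate", "input data completeness"],
    risk_class=RiskClass.HIGH,
    mitigation_measures=["human review of rejected units", "periodic revalidation of the model"],
)
print(plan.system_name, plan.risk_class.value, sep=": ")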

 

3. Conducting a Detailed Risk Analysis

Every ML/AI implementation harbors potential risks that must be carefully assessed.

The following questions are at the forefront:

  • What level of risk is acceptable?
    As a first step, companies should define their risk tolerance by determining what level of risk is acceptable in relation to the ML/AI systems in question and what risks need to be avoided or reduced. Transparent communication about this risk tolerance between those responsible for the AI system, those responsible for quality, and management is crucial to clarify expectations and ensure appropriate risk management.
  • What AI autonomy does the process allow?
    The autonomy of the ML/AI system, i.e. the degree of self-control and decision-making, must be clearly defined. Depending on the area of application and risk tolerance, ML/AI systems can be more or less autonomous. It is important that the underlying algorithms and decision-making processes are transparent and understood by the stakeholders.
  • What potential sources of error influence the safety of the ML/AI system?
    A Failure Mode and Effects Analysis (FMEA) can be helpful in assessing the risks associated with the maturity of the ML/AI system, its autonomy and data quality. By identifying potential failure modes, their effects and causes, appropriate risk mitigation measures can be developed. These include implementing quality controls, regularly reviewing and updating algorithms and ensuring continuous improvement of data quality (see next point).

A thorough risk analysis that considers these aspects enables organizations to understand the potential risks of ML/AI implementations and take appropriate mitigation measures. Although the risks are essentially the same as for conventional systems, their technical causes are often difficult or impossible to assess; this must be taken into account in the assessment.
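
As an illustration of the FMEA approach mentioned above, the following sketch ranks hypothetical failure modes of an ML/AI system by a risk priority number (severity × occurrence × detection). The failure modes and ratings are invented examples, not an actual assessment.

from dataclasses import dataclass

@dataclass
class FailureMode:
    # One row of a simplified FMEA; severity, occurrence and detection are
    # rated on a 1-10 scale (illustrative convention)
    description: str
    severity: int     # impact, e.g. on patient safety or data integrity
    occurrence: int   # likelihood of the cause occurring
    detection: int    # 10 = practically undetectable before the effect materialises

    @property
    def rpn(self) -> int:
        # Risk priority number used to rank mitigation measures
        return self.severity * self.occurrence * self.detection

failure_modes = [
    FailureMode("Training data not representative of production variability", 8, 4, 6),
    FailureMode("Silent performance degradation after a cloud platform update", 7, 3, 7),
    FailureMode("Mislabelled reference samples in the validation set", 6, 3, 5),
]

for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.description}")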



4. Ensuring Data Quality

The quality and representativeness of the training data are crucial for the performance and robustness of the ML/AI model.

To ensure a realistic representation of the application domain, measures should therefore be defined to ensure that the training data is sufficiently large, diverse and free from bias. This requires a thorough analysis and cleansing of the data as well as the selection of suitable metrics to assess data quality.

The validation of the training data should be carried out in a structured, documented and comprehensible process. This includes describing the data collection methods, checking the data for errors or inaccuracies, documenting the data processing steps and analyzing the data distributions and statistics. In addition, validation tests should be performed to evaluate the performance of the ML/AI model on the training data and ensure that it delivers the expected results.
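
The following sketch shows a few basic, documentable checks on a labelled training set (sample size, class balance, missing values, class coverage). The thresholds are placeholders; in practice they must be justified for the specific application and recorded in the validation documentation.

import numpy as np

def check_training_data(features, labels,
                        min_samples=10_000,
                        max_class_share=0.9,
                        max_missing_fraction=0.01):
    # Basic, illustrative data-quality checks; thresholds are placeholders
    n = len(labels)
    class_counts = np.bincount(labels)
    return {
        "sufficient sample size": n >= min_samples,
        "no dominating class": class_counts.max() / n <= max_class_share,
        "missing values within limit": np.isnan(features).mean() <= max_missing_fraction,
        "all expected classes present": bool((class_counts > 0).all()),
    }

# Small synthetic data set for demonstration only
rng = np.random.default_rng(0)
X = rng.normal(size=(12_000, 16))
y = rng.integers(0, 2, size=12_000)
for name, passed in check_training_data(X, y).items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")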

 

Further Challenges in the Context of Uncontrolled Systems

In addition to the general steps described above for ensuring the secure and compliant operation of ML/AI systems, there are particular challenges when dealing with uncontrolled systems (see Fig. 1). On the one hand, this concerns the cloud systems used, which can be affected by changes made by the hyperscalers and thus suffer impaired performance, security or functionality of the AI solution. Companies must therefore continuously monitor their systems in order to identify and control the undesirable effects of such changes.

On the other hand, self-learning systems are diametrically opposed to the requirement that computer-aided systems in the GxP environment produce transparent, traceable and reproducible results. This is because the continuous evolution of the model can cause calculations with the same input data to turn out differently over time. This non-deterministic behavior makes it difficult to apply a classic validation approach based on static acceptance criteria. Against this backdrop, companies need to find new mechanisms to detect and correct undesirable behavior as soon as it occurs. Currently available AI/ML solutions circumvent this problem by using static systems in which the learning process is completed before productive use.
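
For such static ("locked") systems, one simple technical safeguard is to verify before and during productive use that the deployed model artifact is exactly the one that was validated. The following sketch is a hypothetical example; the file path and the reference checksum are placeholders that would come from the validation records.

import hashlib
from pathlib import Path

def file_sha256(path):
    # Checksum of the frozen model artifact
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_static_model(model_path, approved_sha256):
    # Periodic check that the productive model is still exactly the artifact
    # that was validated, i.e. no silent retraining or update has occurred
    return file_sha256(model_path) == approved_sha256

# Hypothetical example values; in practice both come from the validation records
MODEL_PATH = "models/visual_inspection_v1.onnx"
APPROVED_SHA256 = "<checksum recorded during validation>"

if Path(MODEL_PATH).exists():
    print("model unchanged:", verify_static_model(MODEL_PATH, APPROVED_SHA256))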

 

Conclusion

The use of AI in the regulated environment of the pharmaceutical industry offers great potential for innovation and increased efficiency, but at the same time places high demands on the safe and compliant operation of the systems. The implementation of autonomous systems in particular is still causing companies major difficulties, as there are as yet no guidelines for the validation of such solutions. Nevertheless, companies should already be looking at the requirements and possibilities for validating AI systems: the established regulations are in a state of flux, and authorities such as the FDA are calling on the industry to work closely together. Those who take action early can secure competitive advantages.

To achieve this, companies must first understand which regulatory requirements affect them and then take appropriate measures to ensure the safety of AI systems. A systematic approach based on the principles of computer system validation forms the ideal basis for this.

Authors


Sabine Komus

Senior Manager Governance, Risk & Compliance


Dr. Hans Klöcker

Manager Governance, Risk & Compliance

Contact

msg industry advisors ag
Robert-​Bürkle-Straße 1
85737 Ismaning
Germany

+49 89 96 10 11 300
+49 89 96 10 11 040

info@msg-​advisors.com

About msg group

msg industry advisors are part of msg, an independent, internationally active group of autonomous companies with more than 10,000 employees.

 
