Compliance Approaches for Using AI in the Regulated Pharmaceutical Environment
Artificial intelligence (AI) is playing an increasingly important role in the pharmaceutical industry; since 2015, investment in AI has increased 27-fold. This development affects not only drug research but also the production process. For example, AI systems are already being used to monitor production parameters or to replace one set of eyes in the four-eyes principle for optical quality control.
However, this potential is offset by significant uncertainty in regulated environments, particularly around the validation of computerized systems. Traditional validation methods, such as the V-model, are no longer suitable for AI systems, and clear regulatory guidelines are either lacking or still under development. Both authorities and companies therefore face the challenge of finding new approaches for the reliable use of AI in regulated environments.
GxP Relevance Determines Regulatory Requirements
Which regulatory requirements apply, and how stringent they are, depends on where and how an AI solution is used. The intended use of the planned AI solution must therefore be defined first, by describing the specific challenges or problems that ML/AI is intended to solve.
From a regulatory perspective, two points are decisive:
- Whether the ML/AI-supported process directly or indirectly affects the quality, safety, or efficacy of drugs or medical devices, making it GxP-critical.
- Which control mechanisms oversee, supplement, or replace the ML/AI system. AI systems can automate or enhance human oversight, but they should not replace human responsibility.
Depending on this classification, different regulatory requirements may arise, such as the need for validation. Broadly, two categories can be distinguished: AI systems whose output is fully covered by human control mechanisms or downstream approval processes, and GxP-critical AI systems whose output is relied on directly and which therefore require validation.
Securing ML/AI Systems in Regulated Environments
In today's GxP environment, AI systems are predominantly safeguarded by additional human control mechanisms or downstream approval processes, so that the AI component itself does not require validation. Conversely, there is as yet no standardized implementation method for AI systems that do require validation in GxP-critical processes. In both cases, a structured approach should include the following steps:
1. Defining Acceptance Criteria
Acceptance criteria provide the foundation for the validation process by setting objective and measurable benchmarks to evaluate whether the system meets its intended use. For AI systems producing non-deterministic results, acceptance criteria cannot be fixed but must remain measurable and dynamic. For example: "In optical inspections, AI must achieve better results (lower false rejection/acceptance rates) than traditional systems or human operators." Increasingly, statistical analyses of “effectiveness and side effects” replace binary conclusions, creating synergies between classical drug research and AI validation.
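To make such a criterion concrete, the following minimal Python sketch shows one way a dynamic acceptance criterion of this kind might be evaluated; the class, function name, and figures are illustrative assumptions, not part of any regulatory template.

```python
# Hypothetical sketch: checking a dynamic acceptance criterion for an
# optical-inspection AI against a baseline (human operators or legacy system).
from dataclasses import dataclass

@dataclass
class InspectionStats:
    false_rejects: int   # good units wrongly rejected
    false_accepts: int   # defective units wrongly accepted
    total_units: int

    @property
    def false_reject_rate(self) -> float:
        return self.false_rejects / self.total_units

    @property
    def false_accept_rate(self) -> float:
        return self.false_accepts / self.total_units

def meets_acceptance_criterion(ai: InspectionStats, baseline: InspectionStats) -> bool:
    """Criterion: the AI must achieve lower false-rejection AND lower
    false-acceptance rates than the baseline on the same evaluation set."""
    return (ai.false_reject_rate < baseline.false_reject_rate
            and ai.false_accept_rate < baseline.false_accept_rate)

# Example evaluation on a (hypothetical) qualification batch
baseline = InspectionStats(false_rejects=42, false_accepts=5, total_units=10_000)
ai_model = InspectionStats(false_rejects=18, false_accepts=3, total_units=10_000)
print(meets_acceptance_criterion(ai_model, baseline))  # True
```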
2. Developing ML/AI SOPs
Standard Operating Procedures (SOPs) are essential for the safe operation of ML/AI systems. SOPs must include:
- Responsibilities for implementation and operation, assigning clear roles and tasks.
- Monitoring frequency and methods, including regular checks of algorithms, performance metrics, and data integrity (a minimal monitoring sketch follows this list).
- Risk classification and mitigation through detailed risk analysis and measures to manage risks affecting patient safety or data integrity.
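As a rough illustration of the monitoring point above, this hypothetical Python sketch runs a recurring performance check against an alert limit; the limit, metric, and logging approach are assumptions and would in practice be defined in the SOP and recorded in an audit-trailed system.

```python
# Hypothetical recurring performance check as an ML/AI SOP might mandate it:
# compare the model's current false-reject rate on a labelled monitoring
# sample against a documented alert limit and log the result.
import datetime
import json

ALERT_LIMIT_FALSE_REJECT_RATE = 0.005  # assumed limit from the SOP / risk analysis

def run_periodic_check(predictions: list[int], labels: list[int]) -> dict:
    """predictions/labels: 1 = reject, 0 = accept for each unit in the sample."""
    false_rejects = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    rate = false_rejects / len(labels)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sample_size": len(labels),
        "false_reject_rate": rate,
        "within_limit": rate <= ALERT_LIMIT_FALSE_REJECT_RATE,
    }
    # In a GxP setting this record would go to an audit-trailed system;
    # a plain print stands in for that here.
    print(json.dumps(record))
    return record
```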
3. Conducting a Detailed Risk Analysis
Each ML/AI implementation carries potential risks that must be assessed systematically. Key questions include:
- What risks are acceptable? Define the risk tolerance level and ensure transparency among stakeholders.
- What level of AI autonomy is permissible? Determine the degree of self-governance AI systems may have, ensuring transparency in decision-making processes.
- What potential errors could impact system safety? Use Failure Mode and Effects Analysis (FMEA) to identify failure modes and implement risk mitigation strategies.
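A common way to structure such an FMEA is to score each failure mode for severity, occurrence, and detectability and multiply the scores into a risk priority number (RPN). The short Python sketch below illustrates this prioritization; the failure modes and scores are invented for the example.

```python
# Illustrative FMEA-style risk ranking: severity (S), occurrence (O), and
# detectability (D) are each scored 1-10; RPN = S * O * D.
failure_modes = [
    {"mode": "Model drift after camera replacement", "S": 8, "O": 4, "D": 6},
    {"mode": "Training data not representative of new product variant", "S": 9, "O": 3, "D": 7},
    {"mode": "Wrong model version deployed", "S": 10, "O": 2, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Rank failure modes so mitigation effort goes to the highest RPN first.
for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["RPN"]:>4}  {fm["mode"]}')
```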
4. Ensuring Data Quality
The performance and robustness of AI models depend on the quality and representativeness of training data. Measures to ensure data quality include:
- Thorough data analysis and cleansing to remove errors and biases.
- Structured validation processes to document data collection, processing steps, and quality metrics.
- Validation tests to evaluate model performance and ensure expected outcomes are achieved.
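As an illustration of automated data-quality checks, the following Python sketch (using pandas) screens a training set for duplicates, missing values, and class imbalance; the column name and thresholds are assumptions for the example, not prescribed values.

```python
# Hypothetical data-quality screening of a training set with pandas.
import pandas as pd

def check_training_data(df: pd.DataFrame, label_col: str = "defect") -> dict:
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        # Class balance matters for representativeness of defect vs. good units.
        "class_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }
    report["passed"] = (
        report["duplicate_rows"] == 0
        and report["missing_values"] == 0
        and min(report["class_distribution"].values()) >= 0.05  # assumed minimum share
    )
    return report
```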
Challenges with Uncontrolled Systems
Additional challenges arise with uncontrolled systems, such as:
- Cloud systems that can be affected by changes from hyperscalers, impacting performance, security, or functionality.
- Self-learning systems that continually evolve, conflicting with GxP requirements for transparency, traceability, and reproducibility.
Currently available AI solutions address this by using static systems, in which the learning process is completed before deployment.
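One practical consequence of working with static models is that the released artifact can be fingerprinted and re-verified, supporting traceability and reproducibility. The following minimal sketch assumes a hypothetical model file and a hash value recorded at release.

```python
# Minimal sketch: because the deployed model is static (learning completed
# before release), its artifact can be fingerprinted and re-verified at
# runtime to show that the validated state has not changed.
import hashlib
from pathlib import Path

VALIDATED_SHA256 = "<hash recorded during validation / release>"  # placeholder

def verify_model_artifact(path: str = "model_v1.onnx") -> bool:  # assumed file name
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != VALIDATED_SHA256:
        raise RuntimeError("Model artifact differs from the validated version")
    return True
```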
Conclusion
The use of AI in the regulated pharmaceutical environment offers immense potential for innovation and efficiency gains, but it also places high demands on the safe and compliant operation of systems. The lack of guidelines for validating autonomous systems remains a major challenge, yet companies should address these requirements proactively now. Regulatory frameworks are evolving, and authorities such as the FDA are encouraging industry collaboration; early action can therefore provide a competitive advantage.
To navigate this transition, companies must first understand which regulatory requirements apply to them and adopt measures to ensure AI system safety. A systematic approach, grounded in principles of computer system validation, provides the ideal foundation for AI compliance and operational success.