FDA and EMA Provide Guiding Principles for AI in Drug Development

January 27, 2026

On Jan. 14, 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) jointly released the “Guiding Principles of Good AI Practice in Drug Development,” a set of 10 high-level principles intended to steer the safe and responsible use of AI across the product lifecycle. While not formal industry guidance, the document provides important insights into FDA and EMA thinking on the deployment of AI during drug and biologic product development and signals the direction of future guidance from both agencies.

The principles apply to AI systems used to generate or analyze evidence in the nonclinical, clinical, post-marketing and manufacturing phases for drugs and biologics. The agencies frame the principles as a foundation for future guidance, standards and harmonized regulatory expectations from international regulators, standards organizations and other collaborative bodies.

The regulators emphasize that AI can accelerate innovation, reduce time to market, strengthen pharmacovigilance and decrease reliance on animal testing while maintaining existing standards for quality, safety and efficacy. To realize these benefits, however, the use of AI during drug and biologic product development should follow the 10 principles. Key themes include (1) human-centric ethical design; (2) risk-based development, deployment and performance assessments; (3) data governance, document management and cybersecurity; and (4) data quality and life cycle management.

Key Takeaways for Regulated Industry

The principles effectively outline a governance checklist that regulators expect developers to follow, but they stop short of providing concrete, actionable instructions for demonstrating adherence. This leaves developers to interpret how to apply these broad concepts in practice while awaiting more granular recommendations from the agencies.

In this context, companies should develop and reassess their AI governance frameworks with a focus on tangible steps, including: (1) establishing a formal, cross-functional governance body; (2) implementing a risk-based approach to categorize AI tools and determine appropriate levels of validation; (3) ensuring robust documentation across the AI lifecycle (e.g., data provenance, model selection and validation reports); and (4) engaging regulators early through pre-submission meetings to align on expectations for novel AI systems. Sponsors that take these steps proactively, and that reassess AI systems already in use against the principles, will be better positioned in regulatory interactions.

McGuireWoods’ Life Sciences Industry Team continues to monitor developments related to FDA regulatory compliance as well as the role of AI in drug, biologic and medical device development. For questions about related topics, contact the authors.
