The artificial intelligence conversation has reached a fever pitch across industries, and life science manufacturing is no exception. For GMP-regulated manufacturers operating under FDA, EMA, and MHRA oversight, however, the path to AI adoption looks fundamentally different than what you’re reading about in mainstream tech headlines.
The work ahead is clear: helping our customers leverage this new capability in ways that improve product quality, enhance customer experience, and drive operational efficiency across their organizations.
The Reality Check: Not All AI Models Are Created Equal
The ability to leverage AI will likely prove more impactful than even most people realize. The capability to transform digital work processes using AI as part of the digital infrastructure is genuinely powerful, and we will see it deployed across industries.
However, there is a critical caveat for life science manufacturers: GMP product companies cannot use LLM or other neural network-based predictive models to perform GMP-critical processes.
This distinction matters tremendously. While consumer-facing industries can experiment freely with large language models like ChatGPT, regulated manufacturers face a different reality. Any process that could impact patient safety (batch release decisions, formula modifications, quality control procedures) requires mathematical precision, not statistical prediction.
The solution lies in deterministic or machine learning models that use algorithms to calculate exact outcomes rather than predict probable ones. We cannot predict the right change to a formula with an active ingredient that could harm humans. We cannot leave open the possibility that the model predicts incorrectly.
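The distinction can be illustrated with a minimal sketch: a GMP-critical calculation such as scaling a validated master formula is exact arithmetic that always yields the same answer for the same inputs, never a probabilistic guess. The ingredient names and quantities below are hypothetical, chosen only to show the deterministic pattern.

```python
# Hypothetical sketch of a deterministic GMP-critical calculation.
# Scaling a validated master formula is exact arithmetic: identical inputs
# always produce identical, auditable outputs. Names and values are illustrative.
from decimal import Decimal

# Master formula: ingredient -> quantity (g) per 1 kg of finished product
MASTER_FORMULA = {
    "active_ingredient": Decimal("5.0"),
    "excipient_a": Decimal("800.0"),
    "excipient_b": Decimal("195.0"),
}

def scale_formula(batch_size_kg: Decimal) -> dict[str, Decimal]:
    """Return exact ingredient quantities for the requested batch size."""
    return {name: qty * batch_size_kg for name, qty in MASTER_FORMULA.items()}

# A 250 kg batch: no model, no prediction, no possibility of a "probable" answer
quantities = scale_formula(Decimal("250"))
```

Using `Decimal` rather than floating point keeps the arithmetic exact and reproducible, which is the property regulators expect from a GMP-critical calculation.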
EU GMP Annex 22: Clarity at Last
For manufacturers wondering how to navigate AI adoption while maintaining compliance, regulatory clarity is emerging. The EU’s Annex 22 guidelines provide the first comprehensive regulatory framework for AI use in GMP environments, representing a watershed moment for the industry.
The encouraging development is that these guidelines are being published through collaboration among all harmonizing agencies, including the FDA, MHRA, and other regulatory bodies. While Annex 22 will formally roll out in the EU in 2026, we can reasonably predict that these guidelines will be consistently adopted by harmonizing regulatory agencies globally.
The framework establishes clear boundaries: deterministic models for GMP-critical processes, LLM models for supporting business functions. This clarity provides our customers with a real path forward and represents a genuinely positive development for software developers building solutions in this space.
Beyond Quality: Understanding the Full Scope
When speaking with fellow CEOs about Annex 22, I emphasize that AI compliance extends far beyond the quality department. Annex 22 addresses all business processes or workflows that will be enabled using AI across the organization.
The regulation establishes three critical requirements that impact operations company-wide:
- Model Selection by Process Type
Organizations must evaluate every workflow and determine which AI model type is appropriate. Financial processes, customer-facing applications, and administrative tasks can leverage powerful LLM models. Manufacturing execution, batch release, and quality-critical processes require deterministic models. This distinction makes sense at a fundamental level: we cannot use predictive models for processes that must have precise answers to protect patient safety.
- Ongoing Performance Testing
Initial validation is insufficient. Organizations must implement plans for ongoing testing as they use the software. The regulating bodies require assurance that AI performance does not deviate over time. This means building continuous testing into the digital infrastructure operating plan. At Merit, we provide that capability within our software layer.
- Segregation of Duties
Perhaps most significantly, the regulation requires that personnel performing AI testing cannot be the same individuals who created the test data sets. The regulating bodies are addressing concerns about human bias in performance testing. This segregation ensures test accuracy and maintains control integrity.
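The ongoing-testing requirement can be sketched as a scheduled re-run of a locked challenge set against the production model, flagging any deviation from the accuracy established during initial validation. The function and data below are a hypothetical illustration, not an API from any specific product; in line with the segregation-of-duties requirement, the challenge set would be prepared by personnel independent of those running the test.

```python
# Hypothetical sketch of ongoing AI performance testing: re-run a locked
# challenge set with known expected outcomes and flag drift below a threshold.
# Function names and data are illustrative only.
def check_model_drift(model, challenge_set, min_accuracy=0.99):
    """Return (accuracy, passed) for a model against a fixed challenge set.

    The challenge set should be created by personnel independent of the
    testers, per the segregation-of-duties requirement.
    """
    correct = sum(
        1 for inputs, expected in challenge_set if model(inputs) == expected
    )
    accuracy = correct / len(challenge_set)
    return accuracy, accuracy >= min_accuracy

# Trivial stand-in "model" and challenge set; the last case deliberately fails
model = lambda x: x * 2
challenge_set = [(1, 2), (2, 4), (3, 6), (4, 9)]
accuracy, passed = check_model_drift(model, challenge_set)
```

In practice such a check would run on a schedule inside the digital infrastructure, with failures triggering a quality event rather than a silent log entry.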
The Microsoft Advantage: It’s All About the Data
When comparing ERP platforms for AI readiness, Microsoft’s integrated approach is uniquely positioned for life science manufacturers. The key differentiator is the Azure Data Lake.
Microsoft has a distinct advantage in that they apply AI across all aspects of the technical stack: the data layer, the application layer, and the security layer. A particularly important component for AI is the data lake. In Microsoft’s case, the Azure Data Lake is where all organizational data is consolidated for AI consumption and use.
While competitors require manufacturers to build hundreds of data pipeline integrations to connect disparate systems, Microsoft’s ERP automatically populates the Azure Data Lake. We recently spoke with one company that calculated they would need to build 200 data pipelines from just their ERP to Azure. That represents a dedicated team just to maintain those connections. With Microsoft, this is unnecessary. All data resides natively in the Azure Data Lake.
This architectural advantage becomes even more critical as AI evolves. If we accept that LLMs will continue to advance rapidly, then what differentiates one company from another is their proprietary data and their private foundation model. When AI can do “everything,” what makes an organization unique is its ability to leverage proprietary knowledge and data in a model that delivers customer experiences and advantages that competitors cannot replicate.
Microsoft’s strategy offers the integrated technical stack (the “chassis”) while allowing customers to plug in the most effective available AI engines, whether Claude, OpenAI GPT-5, or future alternatives. Organizations can use those engines in a security-protected way, integrated with their private foundation model, and when superior engines emerge they can adopt them seamlessly. This protects competitive advantage.
What matters most is that Microsoft helps protect both security and data privacy, and helps organizations protect what is unique and proprietary to them. This is Satya Nadella’s vision, and it represents the right approach for successful future business models.
Real-World Impact: A Case Study
A compelling example from our customer base illustrates this point. A high-growth biologics manufacturer became a Merit customer in 2024, then was acquired by a major global biopharma company in 2025. The acquired company needed to justify their Microsoft-based infrastructure decision to a parent company with established legacy ERP systems.
They successfully made the case. The argument centered on the integrated tech stack’s ability to protect their data, secure their AI models, and leverage their proprietary knowledge without building extensive integrations. The result validated their approach: not only did they maintain their Microsoft infrastructure, but the parent company decided to roll out the same stack to four additional portfolio companies.
We are observing that increased awareness and education are driving this shift, and platform architecture is becoming a top decision criterion. Companies making infrastructure decisions today understand that their future differentiation will come from their ability to embed proprietary knowledge and customer experiences not just in their data, but in their actual operational processes.
Looking Ahead: Data as Competitive Advantage
As the life science industry navigates the AI revolution, manufacturers are shifting their thinking about digital infrastructure. They understand that future differentiation will come from their ability to integrate proprietary knowledge and customer experiences into not just their data repositories, but their actual operational workflows.
This represents a fundamental evolution from viewing data integrity as a compliance burden to recognizing it as a core requirement of doing business in the life science industry. The tech infrastructure that allows you to leverage your data estate while maintaining privacy protections may allow you to create operational efficiencies or processes that give you competitive advantage.
For manufacturers evaluating their AI strategy, the message is clear: the technical decisions made today about ERP platforms, data architecture, and system integration can provide unique opportunities for competitive positioning tomorrow. Organizations should choose platforms that protect proprietary data while positioning them to leverage AI advances as they emerge, all within a compliance framework that is finally taking shape.
The AI transformation is real, as is the path to responsible implementation in regulated manufacturing. With Annex 22 providing regulatory clarity and integrated platforms offering secure infrastructure, 2026 marks the year when life science manufacturers can move confidently from AI speculation to AI execution.
At Merit, we made a strategic decision years ago to build exclusively within the Microsoft ecosystem, integrating deeply into Microsoft Dynamics 365 and the Azure tech stack rather than pursuing a multi-platform approach. We recognized that native integration would deliver operational advantages that surface-level connections could never match. That decision preceded the AI revolution, yet it has positioned our customers at the forefront of what is now the industry’s most critical technological shift. As AI reshapes manufacturing operations, our commitment to the Microsoft platform has proven not just advantageous, but essential. Our customers now leverage a unified infrastructure where their proprietary data, compliance requirements, and competitive innovations are protected and amplified through the most advanced AI capabilities available. We chose our partner wisely, and that choice continues to define what is possible for life science manufacturers navigating the future of regulated production.
AI and GMP Compliance: Frequently Asked Questions
Can GMP-regulated manufacturers use AI?
Yes, but only when the AI model type is appropriate for the process. GMP-critical processes require deterministic or validated machine-learning models, not probabilistic LLMs.
Can ChatGPT or other LLMs be used in GMP environments?
LLMs can support non-GMP-critical activities such as documentation drafts or analytics, but they cannot be used for processes that impact product quality or patient safety.
What does EU GMP Annex 22 say about AI?
Annex 22 provides guidance on using AI in GMP-regulated environments, including model selection, validation, ongoing performance monitoring, and segregation of duties.
What AI models are acceptable for GMP-critical processes?
Deterministic models or tightly validated machine-learning algorithms that produce consistent, explainable outcomes are required for GMP-critical workflows.
Does AI need ongoing validation under GMP?
Yes. Annex 22 and harmonizing regulatory guidance require continuous performance testing to ensure AI behavior does not drift over time.
