Nov 23
Blog

AI in cell and gene therapies: How to go from a reactive to predictive GMP production

As in many other industries, artificial intelligence, or AI, will have a major effect on the way we produce cell and gene therapies. Just think about manufacturing operations, quality assurance, supply chains, regulatory activities, and more. It will allow us to evolve from seeing issues in hindsight, in other words being reactive, to being predictive and preventing issues before they occur.

In R&D environments, for example, AI has already proven to be a game changer in terms of development speed and resource efficiency. 

Why, then, is our industry so conservative about adopting AI guidance for GMP-regulated production processes? Perhaps the answer lies in not knowing enough about AI to feel comfortable deciding how to incorporate it successfully into our processes.

In this blog, we would like to suggest a way forward, starting with where we are today, what the future can hold, how to overcome the barriers to get there, and what we can do now to prepare for an AI future. 

Where we are now and the path towards digital enlightenment

Soon we will trust AI to drive our cars autonomously. But for the production of an autologous cancer therapy, where there are no backups or safety nets like seatbelts or airbags, we will want to keep a human in the loop who can take the wheel and overrule the algorithm if needed.

We think AI will revolutionize GMP-critical production processes. But just like with cars, it will come in phases.

Currently, we are well into the first phase: “park assist”-like solutions that guide operators through GMP-critical production processes are becoming a reality – look no further than MyCellHub to see how operators can already benefit from digitalization.

We foresee major adoption in the near future, as companies that don’t move into this first phase will fall behind in both product quality and production agility. However, if we want to be at the forefront of technology, we should already start planning for the third phase.

So what could be the barriers between now and full, phase 3 AI adoption? And how can we navigate around these potential blockers?

Hurdles and way forward

Safety and accountability

Is the judgement of a human operator really safer than the judgement of a computer algorithm?

There are roughly 15,000 process actions per batch in an autologous cell manufacturing process. Many steps still require expert insight from a human operator to make a decision. Even the most experienced operator is bound to make a mistake once in a while. Computers, in contrast, never get tired: algorithms perform consistently day after day, independent of the manufacturing site.

So, while the accuracy of the algorithms might have to be evaluated carefully, safety on its own cannot be the main hurdle for the adoption of AI.

Validating these algorithms is, however, a challenge. Some of the most powerful data-analytic strategies, such as neural networks and deep learning, behave like “black boxes”, making it virtually impossible to trace how they reach their conclusions.

In the highly regulated pharmaceutical industry, where traceability of actions and decisions is key, these black box algorithms might lead to thorny accountability issues. What do you tell a patient if a black-box algorithm determines that their batch of autologous cell product cannot be released? Whom do you believe when the algorithm and an expert operator disagree? How do you manage change control for self-learning algorithms?

Understanding the mechanism of action (MOA)

Essentially, what these algorithms lack is an understandable mechanism of action. One strategy for business-critical decisions (or, in this case, GMP-critical decisions) is to use “white box” algorithms: algorithms whose internal workings are easy to inspect and understand. For GMP-critical process decisions, this means operators would be able to see why a given decision makes sense.
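To make the idea concrete, here is a minimal sketch of what a white-box decision could look like in code. Every rule and threshold is explicit and auditable; the specification limits and parameter names are invented for illustration only, not real release criteria.

```python
# A hypothetical "white box" release check for an autologous cell product.
# Every rule is explicit, so an operator or auditor can trace exactly why
# a batch passed or failed. All limits below are illustrative only.

def release_decision(viability_pct, cell_count_m, endotoxin_eu_ml):
    """Return (release_ok, reasons) with a human-readable audit trail."""
    reasons = []
    if viability_pct < 80.0:
        reasons.append(f"viability {viability_pct}% below 80% limit")
    if cell_count_m < 100.0:
        reasons.append(f"cell count {cell_count_m}M below 100M limit")
    if endotoxin_eu_ml > 0.5:
        reasons.append(f"endotoxin {endotoxin_eu_ml} EU/mL above 0.5 limit")
    return len(reasons) == 0, reasons

ok, why = release_decision(viability_pct=72.0, cell_count_m=150.0,
                           endotoxin_eu_ml=0.1)
print(ok, why)  # False, with the failed viability rule named as the reason
```

The output is not just a verdict but the list of rules that fired, which is exactly the traceability a GMP review expects.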

Unfortunately, since white box models crystallize current process knowledge, our present lack of process understanding may make a purely white box algorithm underperform against higher-dimensional black box models, which can make abstractions and find correlations that the human mind cannot even imagine.

In this situation, a promising strategy to facilitate the validation of a black box algorithm is to interpret a lumped-parameter approximation of the highly complex model, effectively turning it into a grey box model. These interpretable sub-systems can then be tested against data with a known link between input and output.
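One simple version of this grey-boxing idea is a local surrogate: probe the opaque model around the current operating point and fit a transparent approximation whose parameters can be inspected and tested. In the sketch below, the "black box" is a stand-in function (in practice it would be a trained model), and the probing range and set point are assumptions for the example.

```python
# A minimal grey-box sketch: probe an opaque model around the operating
# point and fit a transparent local surrogate, score ≈ a + b·T.
# The "black box" below stands in for a trained neural network.

def black_box_score(temp_c):
    # Hidden internal logic we pretend not to know.
    return 0.9 - 0.05 * abs(temp_c - 37.0)

# Probe just above a hypothetical 37 °C set point.
xs = [37.0 + 0.1 * i for i in range(1, 11)]
ys = [black_box_score(x) for x in xs]

# Ordinary least-squares fit of the local slope and intercept.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# The slope is now auditable: score drops ~0.05 per °C above set point.
print(round(b, 4))
```

The fitted slope is an interpretable, testable statement about the model's behaviour in that regime, which is something a validation protocol can actually check against known input–output data.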

What we can do today to prepare for an AI future

Data quality and control of data life cycle

As the popular phrase “garbage in, garbage out” suggests, the data used to train an algorithm must be of sufficiently high quality.

Do you allow your self-learning algorithm to include training data from non-GMP systems, i.e. data that is not subject to 21 CFR Part 11 and therefore may not guarantee full data integrity? For an AI solution in a regulated industry, you need to control the whole data lifecycle: from deciding which data to collect, through the actual data acquisition, to the analytics and reporting.
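In practice, that control could start with an explicit gate on the training pool: only records from qualified systems with an intact audit trail are admitted. The system names and record fields below are illustrative assumptions, not a real data model.

```python
# A hypothetical gate on the training pool: only records that come from a
# GMP-qualified system and carry an intact audit trail are admitted.
# System names and fields are invented for the example.

GMP_QUALIFIED = {"MES", "LIMS"}

records = [
    {"batch": "B001", "source": "MES",         "audit_trail": True},
    {"batch": "B002", "source": "spreadsheet", "audit_trail": False},
    {"batch": "B003", "source": "LIMS",        "audit_trail": True},
]

def gmp_training_pool(records):
    return [r for r in records
            if r["source"] in GMP_QUALIFIED and r["audit_trail"]]

print([r["batch"] for r in gmp_training_pool(records)])  # ['B001', 'B003']
```

The point is not the three-line filter itself but that the admission criteria are written down and enforced, so you can always answer what data a given model version was trained on.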

Data integration

An additional key challenge is that your data is probably scattered across multiple systems: ERP, MES, LIMS, batch records, spreadsheets, and quality systems. To use your data efficiently, you’ll need to centralize it first.
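Centralization ultimately means joining the views of the same batch from each system on a shared key rather than copy-pasting them together. The toy sketch below assumes a common batch ID across systems; the system names and fields are invented for illustration.

```python
# A toy illustration of key-based centralization: views of the same batch
# from different systems are merged on the batch ID. System names and
# fields are invented for the example.

erp  = {"B001": {"donor_site": "Site A"}}
mes  = {"B001": {"culture_days": 9}}
lims = {"B001": {"viability_pct": 93.5}}

def centralize(batch_id, *systems):
    merged = {"batch": batch_id}
    for system in systems:
        merged.update(system.get(batch_id, {}))
    return merged

record = centralize("B001", erp, mes, lims)
print(record)
```

A real integration would of course go through validated interfaces rather than Python dictionaries, but the design principle is the same: one authoritative, keyed record per batch instead of ad hoc copies.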

If data integration is an afterthought and your data strategy relies on copy-pasting a dataset together, you won't get much further than an interesting proof of concept that will be difficult to scale in production. 

Fully automated processes would obviously help with this data-integration challenge. However, cell and gene therapies will still require semi-manual workflows for at least the next five years, and a lot of work remains before interoperability between all suppliers and manufacturers is a reality (see the SiLA initiative on this).

Data sharing dilemma

An unfortunate aspect of our highly regulated industry is that late-stage manufacturers sit on the biggest pile of high-quality data, but are discouraged from implementing significant AI-driven process optimization because they can no longer change their manufacturing processes in late-stage clinical trials. Early-stage companies, on the other hand, could build in AI technologies from the start, but lack access to large amounts of high-quality data.

AI is our friend, not foe

With a step-by-step approach to incorporating AI into therapeutic production, our industry can start preparing today for predictive systems, rather than relying on the reactive systems we have now. We already have safe and reliable ways to bring data together and create the high-quality input these systems will need. The great news is that the technology is ready for the first phase, and we can pave the way to full AI integration in the future.

The question now is: Are you in?

