Two new FDA discussion papers aim to foster discussion with interested parties in the medical products development community on using AI/ML in the development of drugs and biologics, and in the development of medical devices intended for use with these treatments.
Artificial intelligence (AI) and machine learning (ML) are no longer futuristic concepts; they are now part of how we live and work. The U.S. Food and Drug Administration uses the term AI to describe a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions. ML is a subset of AI that uses data and algorithms to imitate the way humans learn, without being explicitly programmed.
The growth of AI/ML, fueled by increasing data volume and complexity and combined with cutting-edge computing power and methodological advancements, has the potential to transform how stakeholders develop, manufacture, use, and evaluate therapies. Ultimately, AI/ML can help bring safe, effective, and high-quality treatments to patients faster.
For example, AI/ML could be used to scan the medical literature for relevant findings and to predict which individuals may respond better to treatments and which are more at risk of side effects. Conversational agents or chatbots, which are based on “generative” AI, have the potential to answer people’s questions about participating in clinical trials or reporting adverse events. Digital or computerized “twins” of patients can be used to model a medical intervention and provide feedback on its likely effects before patients receive the intervention.
The regulatory uses are real: In 2021, more than 100 drug and biologic applications submitted to the FDA included AI/ML components. These submissions spanned a range of therapeutic areas, and sponsors incorporated the technologies in different developmental stages.
As with other evolving fields of science and technology, AI/ML in drug development poses challenges, including ethical and security considerations such as improper data sharing and cybersecurity risks. There are also concerns about using algorithms with a degree of opacity, whose internal operations may not be visible to users or other interested parties; such opacity can amplify errors or preexisting biases in the data. The FDA aims to prevent and remedy discrimination, including algorithmic discrimination, which occurs when automated systems favor one category of people over others, in order to advance equity when using AI/ML techniques.
To address these and other related concerns, the FDA has released two discussion papers:
Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products
Artificial Intelligence in Drug Manufacturing