Large language models such as ChatGPT and its imitators have hogged the spotlight this year, prompting fresh debates over their ethics and safety. But as the medtech industry watches from the wings, hearing that artificial intelligence has finally, finally arrived and will now begin to reshape our lives, for good or ill, many developers in healthcare are wondering where people have been for the past decade.
In October, the FDA put out an updated list of AI and machine learning-enabled medical products that have undergone review and obtained regulatory green lights—featuring nearly 700 entries, with some dating as far back as the mid-’90s. This includes programs that have demonstrated value by automatically cleaning up brain scans, identifying dangerous acute conditions or quickly catching the signs of diseases ranging from cancer to epilepsy.
And the agency takes pains to point out that none of them (so far) have been built on generative AI or the large language models powering today’s impressive chatbots and image-making meme machines.
“What a lot of folks don't realize is that AI is already playing a significant role in medical imaging,” said Peter Shen, North American head of digital health for Siemens Healthineers, explaining that it’s used in “everything from the simple tasks during a patient’s radiology exam—such as their positioning, to make sure that we get the most precise anatomical images—to utilizing AI to enhance those images and make sure the clinician is able to read them.”
Radiology currently holds the crown in the FDA’s AI catalog, with the steadiest increases of any specialty. More than 80% of the nearly 250 devices authorized between 2022 and mid-2023 belong to the field, as picking out the complex patterns in imaging data lends itself well to a machine learning approach.
Some generative AI programs, meanwhile, show promise in processing multiple kinds of data at once and outputting the results in different ways. But, as many developers learned while building fit-for-purpose machine learning programs over the past several years, any promise to change how healthcare operates is a big promise.
“Foundation models and LLMs add a new dimension to AI algorithm development,” Shen said in an interview with Fierce Medtech, describing how Siemens Healthineers is already exploring their use in generating diagnostic reports that incorporate additional types of medical data, such as lab results and pathology findings, to provide a more complete picture of the patient.
“But our belief is that we’ll apply the same standards and accountability that we've established for the AI and machine learning algorithms we've already developed in medical imaging,” he said.
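As a rough sketch of the multimodal-report idea Shen describes, the core move is flattening several data types into one context a language model can draft from. (Everything below, from the record structure to the `draft_report` stub, is invented for illustration and says nothing about Siemens Healthineers’ actual system.)

```python
from dataclasses import dataclass

@dataclass
class PatientContext:
    """Hypothetical container for the data types Shen mentions."""
    imaging_findings: str
    lab_results: dict[str, float]
    pathology_notes: str

def build_prompt(ctx: PatientContext) -> str:
    """Flatten the multimodal context into one prompt for a report-drafting LLM."""
    labs = "; ".join(f"{name}: {value}" for name, value in ctx.lab_results.items())
    return (
        "Draft a diagnostic report from the following sources.\n"
        f"Imaging: {ctx.imaging_findings}\n"
        f"Labs: {labs}\n"
        f"Pathology: {ctx.pathology_notes}\n"
    )

def draft_report(prompt: str) -> str:
    """Stand-in for whatever model endpoint a real system would call."""
    return "[LLM-generated report would appear here]\n" + prompt

ctx = PatientContext(
    imaging_findings="3 mm nodule, right upper lobe, stable vs. prior exam.",
    lab_results={"WBC (10^9/L)": 7.2, "CRP (mg/L)": 4.1},
    pathology_notes="Biopsy: no malignant cells identified.",
)
print(draft_report(build_prompt(ctx)))
```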
And as the generative AI boom sends shockwaves through healthcare, it is creating opportunities for investment. The European life sciences venture capital firm Sofinnova Partners kicked off a $200 million fund this year aimed at digital medicine, which it defines as the new intersection of biology, data and computation.
“Foundation models could very well be the conduit to true precision medicine,” said Sofinnova partner Edward Kliphuis. “Not only can they benefit from the knowledge of all, they can also address the variability of everyone.”
“If you think about what we've done so far, we've applied engineering approaches to biology, and that simply doesn't work,” he said of today’s one-size-fits-all treatments. “You can build 10 of the exact same BMWs, but you cannot create 10 of the same human beings.”
Additionally, with their ability to digest seemingly endless masses of data, large language models could offer physicians a chance to keep up with the latest scientific advances.
“The implications for generative AI across the healthcare continuum are significant not only because it has the ability to impact everything on the research and development side, but also over on the clinical side,” said Evan Melrose, M.D., CEO of Elevage Medical Technologies, the medtech backer launched this year out of Patient Square Capital.
“As a doc, I still see patients… and being able to access, at your fingertips, not just the current medical libraries—all the publications, books, textbooks, etc.—but all the current research, and having that be available to the doc and even to the patient is incredibly enabling,” Melrose said.
“Historically in medicine, by the time a textbook makes it through the editorial staff and publication, the data in there is already two years old,” he added. “I think that, over time, ways will be developed that will make that system much more efficient.”
So, what does generative AI need to succeed?
“What's most important is helping users understand why the algorithm has made the clinical decision that it's made—really, what’s the medicine behind that,” said Shen, who has called for a new type of Hippocratic oath among healthcare AI developers.
“Holding ourselves accountable to that is an important aspect that will help with adoption and will help remove this connotation that it’s a black box,” he said. “We need to be able to say to the clinician: This is how this AI algorithm works, this is the intended use, this is the patient population that it's been trained on, and this is the patient population it should be applied towards.”
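Shen’s checklist maps neatly onto the “model card” documentation pattern that has emerged in machine learning. Here is a minimal sketch, with every field name and value invented for illustration rather than drawn from any real product:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative transparency record covering the disclosures Shen lists:
    how the algorithm works, its intended use, and the patient populations
    it was trained on and should be applied to."""
    name: str
    how_it_works: str      # plain-language summary of the method
    intended_use: str      # the clinical task it is cleared for
    trained_on: str        # patient population represented in the training data
    apply_to: str          # patient population it should be applied towards
    limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="ExampleChestXrayTriage",  # hypothetical product
    how_it_works="Convolutional network flags suspected pneumothorax on chest X-rays.",
    intended_use="Worklist prioritization; not a standalone diagnosis.",
    trained_on="Adults 18+, portable and fixed chest X-rays, three hospital systems.",
    apply_to="Adult chest X-rays acquired under comparable imaging protocols.",
    limitations=["Not validated for pediatric patients."],
)
print(card.intended_use)
```

Surfacing a record like that alongside every output is one concrete way to answer the black-box objection.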
The FDA, for its part, has announced plans to enlist a panel of outside experts to help it get deeper into AI’s weeds. The new advisory committee, set to meet for the first time next year, will also focus on digital health, cybersecurity, therapeutic apps, wearable devices, remote patient monitoring programs and virtual reality tools.
The committee won’t field product-specific questions, as the FDA’s separate advisory panels for new cancer drugs and medical devices do; instead, it will provide counsel on cross-cutting technical and scientific issues as the agency develops new policies.
Generative AI, with its many potential uses, may further blur the lines of how to classify and clear a machine learning-powered medical product. Concerns about the technology’s accuracy and validity may also bring unique risks: some systems have invented answers with no basis in reality, a failure mode dubbed hallucination.
“What I think we’ll see is that a lot of these technologies, where we have a single algorithm that performs a single task, will become redundant,” Kliphuis said. “The virtues of democratizing machine learning may also mean the end of some of these single-algorithm companies.”
“But the other thing is, of course, the downside,” he added. “If hallucinations are a problem, maybe not getting into the generative side of AI is actually more beneficial. … There are all kinds of questions around hallucination that need to be addressed. But within these two extremes? It’s fascinating.”
But what insights could a foundation model trained on the inner workings of cells deliver, compared with one trained on digitized literature and downloaded internet forums?
“If you extrapolate that into biology—because the base unit of our written language is the 26 letters in our alphabet, plus some spaces, exclamation points, etc., but in biology it is As, Cs, Ts and Gs—that makes amino acids and then proteins,” Kliphuis said.
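As a toy illustration of that analogy (the abridged codon table and `translate` helper below are purely illustrative, not anything Sofinnova has built), biology’s four-letter alphabet composes into three-letter codons, which map to amino acids, which chain into proteins, much as characters compose into tokens and sentences:

```python
# Standard genetic code, abridged to a handful of codons for brevity.
CODON_TABLE = {
    "ATG": "M",                          # methionine, the usual start codon
    "TTT": "F", "TTC": "F",              # phenylalanine
    "GGA": "G", "GGC": "G",              # glycine
    "TGG": "W",                          # tryptophan
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna: str) -> str:
    """Read a DNA string three letters at a time and emit amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" marks codons outside the abridged table
        if aa == "*":  # a stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGATGGTAA"))  # -> MFGW
```

A sequence model trained on those strings faces the same kind of compositional structure a language model faces in text.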
“We've seen some of these academic models actually follow the same trend lines that the large language models trained on natural language have followed,” he said. “So, my prediction is that there's going to be an enormous amount of money thrown at this because the biggest challenges are gathering compute and data.”
That may mean that, to start, only the players with the deepest pockets will be able to participate in generative AI development.
“What we're all here for is to help improve the level of care that patients receive,” said Melrose. “The tools that will be used to get there aren't even out yet. We're in the first inning of a very long game.”