As artificial intelligence proliferates across the medical device sector, the industry is seeing a shift. By the end of 2024, the Food and Drug Administration had authorized more than 1,000 AI-enabled devices, most designed to detect or triage specific health conditions. Now, medtech companies are talking about broader AI tools that can analyze images, text and other types of data across multiple contexts.
At the Radiological Society of North America’s conference last year, more speakers focused on foundation models, a term for models pre-trained on massive datasets that can be adapted to a variety of tasks. And at the beginning of the year, AI experts at medtech and radiology firms interviewed by MedTech Dive said their focus has shifted to foundation models.
Rowland Illing, Amazon Web Services’ global chief medical officer, discussed the trend and how AWS is partnering with companies around AI, including Illumina, Johnson & Johnson MedTech, Medtronic and Abbott.
This interview has been edited for length and clarity.
MEDTECH DIVE: Tell me about your background and how you got started at AWS.
ROWLAND ILLING: I'm an academic interventional radiologist by background. I trained in surgery originally, then did research into image-guided cancer therapy using medical devices. I took a medical device end-to-end through the regulatory process, and then retrained as a radiologist.
I was part of the largest imaging intervention network in Europe. We had over 300 medical centers across 16 countries. The only way you can do that kind of scale play is using cloud. And so that's where I first got to know cloud and how to implement AI on top.
I was working as chief medical officer for the Affidea Group, and realized that trying to work with 300 different medical centers, all with different IT platforms, all doing things slightly differently, without being able to integrate that data, was really difficult. The best way to deploy AI really is at cloud level, because implementing AI on a center-by-center basis is hard: you have to deploy it locally, manage it locally, and then fix it locally if it goes wrong.
That's really where I first got to know AWS, because all of the AI that we were adopting across all the countries was built on AWS.
What types of AI are you working with right now? Is most of it generative AI?
We're seeing a massive explosion of generative AI use cases. It doesn't stop all of the other AI that's been happening for ages [from] going on. There are over 1,000 applications now that have FDA approval that contain AI. Most of that is narrow AI, and it's quite well established. A company like Icometrix, doing brain imaging at scale and looking at scarring for multiple sclerosis, can do a really good job of brain imaging and segmentation. That's just good old machine learning.
So a whole bunch of use cases are still there, but I think we’re seeing an absolute explosion of generative AI cases, especially with building foundation models.
A lot of the imaging foundation models we're seeing [are] being built out on AWS today. GE [Healthcare] has an imaging foundation model around MRI. We’ve got Harrison.AI in Australia, we’ve got Aidoc out of Israel, HOPPR in the U.S. The interesting piece is that these aren't just large language models; they're large data models with multimodal inputs: DICOM imaging, biological foundation models using genomics, as well as language. The integration of all the different data types is really interesting in terms of extracting extra information.
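For a concrete picture of what multimodal inputs can mean in practice, here is a minimal Python sketch of pairing a DICOM scan with its free-text report as a single training example. The file path, sample report text and normalization step are illustrative assumptions, not a description of any vendor's actual pipeline.

```python
# Illustrative only: pair one DICOM scan with its report text as a
# multimodal training example. The path, report and normalization are
# assumptions, not GE's, Harrison.AI's or HOPPR's actual pipelines.
import numpy as np
import pydicom

def load_example(dicom_path: str, report_text: str) -> dict:
    """Read one scan and bundle it with its free-text report."""
    ds = pydicom.dcmread(dicom_path)              # parse the DICOM file
    pixels = ds.pixel_array.astype(np.float32)    # raw image data
    # Simple intensity normalization so scans from different scanners are comparable.
    pixels = (pixels - pixels.mean()) / (pixels.std() + 1e-6)
    return {
        "image": pixels,                          # input to an image encoder
        "modality": str(ds.get("Modality", "")),  # e.g. "CT", "MR", "US"
        "report": report_text,                    # input to a text encoder
    }

example = load_example("scan.dcm", "No focal liver lesion. Degenerative change in the spine.")
```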
How are you approaching generative AI with the FDA?
We're also working with the FDA. The FDA's platform leverages generative AI to synthesize information submitted by drug and medical device companies in order to make sense of their applications.
They’ve got a platform called FiDL, which is a platform we’ve been working with them on for a number of years.
What does your work with foundation models look like in the medtech field?
We want to build the best infrastructure on which foundation models get built in general. Our view of the foundation model piece is that there are actually going to be hundreds, if not thousands, of different foundation models, each with very specific use cases. There will be very specialist models built to address specific tasks, imaging being one of those things. If you have a very large data model with a lot of different imaging types, it doesn't look at a very narrow piece of the imaging; it looks at the imaging in total.
At the moment, when a radiologist looks at a scan, there's tons of data that the human can't see. So the really interesting thing about foundation models is actually what is in there that potentially goes beyond the ability of humans to interpret. And so we're working with GE and Philips and HOPPR to ingest vast amounts of data, along with the reports against those scans, to say, “If you put in any type of scan, how do you get a report out of it?” So just a base model for imaging you can start using out of the box. And then how can you start building those into new applications? How do you securely manage that foundation model and mix it with your own data?
So once the likes of GE have built their foundation model, they'll actually be able to surface it, and third parties will be able to use it to build the next generation of imaging applications.
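As a rough illustration of that surface-the-base-model pattern, the sketch below wraps a generic base-model call behind a draft-and-review step. `BaseImagingModel` and `draft_report()` are hypothetical names for this example; the interview does not describe any vendor's actual API.

```python
# A hypothetical sketch of building an application on top of a surfaced
# imaging foundation model. BaseImagingModel and draft_report() are
# illustrative names, not a real vendor API.
from typing import Any, Protocol

class BaseImagingModel(Protocol):
    def describe(self, pixels: Any, modality: str) -> str: ...

def draft_report(model: BaseImagingModel, pixels: Any, modality: str) -> dict:
    """Ask the base model for findings, then flag them for human sign-off."""
    findings = model.describe(pixels, modality)
    return {
        "findings": findings,
        "status": "draft",            # a radiologist reviews before release
        "needs_human_review": True,
    }
```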
What kinds of applications can companies build using foundation models?
It could be MRI or ultrasound or plain film or CT, so the different types of imaging scans. I spend a lot of my time as a radiologist drawing around lumps. An example of narrow AI is, [for] lots of liver scans, you draw around a lump in the liver, and you basically point the AI to it and say, “this is a lump.” So you’ve got a really well-trained AI that can identify a lump in a liver, but it couldn't necessarily identify a lump in a bone on the same scan.
And so the benefit of these foundation models is that they're trained on millions of images with the full text reports. The models, in the end, will be able to look at the scan in its entirety: the bones, the muscles, the liver, the lungs, the kidneys, and be able to have a comment about all of it.
Often when a radiologist looks at a scan, they are directed. Maybe there's liver pain in the right upper quadrant, so I say, “I’m going to look at the liver.” You may not, as a radiologist, be looking at the bone. AI can look across the entirety, including the bone. I think that's a very interesting attribute of some of these foundation models.
Now, they're not perfect. There still needs to be a human in the loop, and the model needs to be fine-tuned on whichever dataset you're looking at, because a CT scan from one company or one country may not look similar to another one. But having that base model trained to be quite accurate out of the box, and then fine-tuned on the data from a very specific center or region, will improve the accuracy again. So I think that's the direction that we're seeing.
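To make the fine-tuning idea concrete, here is a rough PyTorch sketch of adapting a pre-trained base model to one center's own data. The model, dataset, labels and hyperparameters are all placeholder assumptions rather than anything described in the interview.

```python
# A rough sketch of fine-tuning a pre-trained imaging model on one
# center's own data. The model, dataset and hyperparameters are
# placeholders, not a specific vendor workflow.
import torch
from torch.utils.data import DataLoader

def fine_tune(base_model: torch.nn.Module, center_data, epochs: int = 3) -> torch.nn.Module:
    loader = DataLoader(center_data, batch_size=8, shuffle=True)
    # Small learning rate: adapt the base model to local data without overwriting it.
    optimizer = torch.optim.AdamW(base_model.parameters(), lr=1e-5)
    loss_fn = torch.nn.CrossEntropyLoss()
    base_model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(base_model(images), labels)
            loss.backward()
            optimizer.step()
    return base_model
```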