By a recent count, GE HealthCare has obtained Food and Drug Administration clearance for 42 AI- and machine learning-enabled medical devices, the most of any company in the market. The Chicago-based medtech firm has been building out its Edison digital health platform, a suite of tools that brings together data from disparate sources and applies algorithms to flag health conditions and help surgeons prepare for procedures.
The majority of GE’s AI devices are in radiology, including a feature that reduces MRI scan times while also improving image quality. The company also has a suite of algorithms embedded in its mobile X-ray systems for critical care, including features that flag critical conditions and analyze endotracheal tube placement. GE has also expanded into cardiology and oncology, including a feature that simulates injections to shrink liver tumors.
Vignesh Shetty, a senior vice president responsible for GE HealthCare’s Edison artificial intelligence platform, talked about the company’s approach to AI and how it ensures the tools it builds are accurate.
This interview has been edited for length and clarity.
MEDTECH DIVE: How do you make sure these tools fit into hospitals’ workflow?
VIGNESH SHETTY: We work closely together, collaborating on both data and expertise across two broad worlds: one of practitioners and one of developers. As you can imagine, we can't do this alone. So we try to be as inclusive as possible of the ecosystem of developers.
Our goal is to bring the two worlds together because what we see is [that] while both of them are passionately striving to solve the same problems, they aren't always talking to each other, or talking early enough. As a result, we see that some of these offerings do not address the right clinical or operational need, or they're not suitably integrated into the workflow.
One of the things that we invest a lot of time in as a company is to avoid some of these potential pitfalls by continuing to deeply understand the needs of the clinicians and the hospital systems. We also engage and train staff, with the philosophy being that these tools should always supplement — not replace — the need for human interaction.
When you talk about the platform being vendor agnostic, what does that mean?
[We ask] healthcare systems what the challenges and pain points [are that] they would like us to help address. They've got all of these different systems, whether that's an EMR, a pathology information system, a lab information system, where different aspects of the same patient data are stored.
These are unfortunately siloed systems, sometimes [from] incompatible vendors, and there are other data collection and collation issues. That leads to cognitive overload on the part of the caregiver, which ultimately causes clinician burnout and suboptimal outcomes for the patient. That's a key recurring theme here.
While there are a lot of honest attempts, there's a tendency, unfortunately, to build piecemeal solutions, which only add [to] that complexity. So what we're really trying to do [as we] build the Edison digital health platform is to enable outcomes that our customers care about by creating an ecosystem of applications, both GE and non-GE. So even if you don't have all of your CTs, MRIs or ultrasounds [from] GE, that's perfectly fine.
How are you approaching regulators?
With the FDA and the European Commission, we are engaged in trade association dialogues and in the regulatory review process to help promote the use of good machine learning practices.
As an example, we've been working with them on the algorithm change protocol. This is a concept introduced by the FDA that was intended not only to authorize the first version of an AI algorithm but also to let you define an acceptable set of modifications that would not require GE or any [company] to go back to the regulator for additional review. Since its introduction, the FDA has applied this concept to a limited number of devices, due to some legislative challenges. We are in favor of extending this concept to most devices, because that allows us to truly deliver on the promise of AI, which includes retraining and improving the performance [of an algorithm] over time, without the delay of a systematic review by the regulator, as long as it doesn't compromise quality.
The expectation is that as we get into more clinically significant AI, it becomes not just better monitored, but more explainable. You can explain the cause-and-effect relationships, as well as ensure that it's free of bias by allowing for more frequent iteration and automation in the model retraining.
Are there any other areas on which you are focused?
The general emphasis on cybersecurity, because I do think that's another potential barrier to large-scale adoption.
I think while cybersecurity gets a lot of attention in the traditional software world, it doesn't get as much attention as it should in the AI space. Increasingly, hackers are starting to think about what it takes to reverse engineer an AI model through side-channel attacks. Or they're looking at potential poisoning: How do I contaminate and poison the data, so that I can influence how that model is trained? And that, by definition, determines how the model behaves in the field. These are relatively new areas, which haven't been given as much thought as cyber in the traditional software space. So [we are] just raising awareness around that and trying to figure out the best solution.
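To make the data-poisoning scenario Shetty describes concrete, here is a minimal illustrative sketch, not GE HealthCare code: it assumes a synthetic binary-classification dataset and a simple scikit-learn model, and shows how an attacker who silently flips a fraction of training labels changes the model that ultimately gets deployed.

```python
# Illustrative sketch only -- not GE HealthCare code.
# Demonstrates label flipping, one simple form of data poisoning: an
# attacker who can tamper with training data influences how the trained
# model behaves in the field.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical training set (an assumption for demo).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on (X_train, labels) and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print(f"clean model accuracy:    {train_and_score(y_train):.3f}")

# Poisoned run: the attacker silently flips 20% of the training labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned model accuracy: {train_and_score(poisoned):.3f}")
```

Real-world attacks on clinical models can be far subtler than random label flipping, which is part of why this is treated as an open problem rather than a solved one.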
What’s the big-picture opportunity here?
If we do our job right, what this fundamentally means for the patient is that they're going to see a more personalized experience than they have historically been used to. Because today, when most of us go to the clinician's office, we end up in a situation where the doctor has no option but to sort of multitask, trying to listen to you while typing a bunch of things into the computer.
We generally believe that AI has the potential to take a lot of the mundane but important tasks away from the clinician, which frees them up to deliver actual care.