Dive Brief:
- Facebook has teamed up with the New York University School of Medicine to use artificial intelligence to reduce the time it takes to perform MRI scans by 90%.
- The partners think artificial neural networks can turn partial scans into complete images by predicting missing information, eliminating the need to keep patients in MRI machines until a complete dataset is generated.
- Cutting the time it takes to perform the scans could alleviate capacity constraints and enable more people to undergo MRI rather than X-ray and computed tomography imaging, both of which entail the use of ionizing radiation.
Dive Insight:
MRI machines create highly detailed images of soft tissue, making them the go-to equipment for everything from brain examinations to assessments of torn ligaments. However, the process used to generate those images is time-consuming. The machines use a magnet, radiofrequency energy and a computer to build up layers of cross-sectional images, and when scanning a large area the process can take more than an hour.
Throughout that time, the patient must lie still inside the scanner's narrow bore. That can be difficult for children, people with claustrophobia and anyone who experiences pain while lying down. Some MRI procedures also require patients to hold their breath for short periods during the scan.
The duration of MRI scans affects the wider healthcare system, too. With machines tied up for 15 minutes to an hour per procedure, the number of patients who can be scanned in a day is limited, leading to backlogs at facilities that lack enough equipment. Cutting MRI scan times would enable facilities to perform more scans and extend MRI's benefits to patients who struggle to lie still for long periods.
To realize those benefits, NYU is exploring whether AI can reconstruct images from the data gathered during fast MRI scans. These accelerated scans capture only a fraction of the data a full scan collects, so the resulting images lack the detail physicians need, but artificial neural networks may be able to fill in the blanks.
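The underlying problem can be illustrated with a toy example. The sketch below is a simplification, not the partners' method: it uses plain NumPy, a synthetic square "phantom" instead of real scanner data, and a keep-every-fourth-line sampling pattern standing in for a fast scan. It shows how discarding most of the frequency-domain (k-space) data corrupts a naive reconstruction; the gap between the partial and full reconstructions is what a trained neural network would be asked to close.

```python
import numpy as np

# Synthetic "image": a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# MRI scanners acquire data in the frequency domain (k-space);
# a full scan collects every line, a fast scan skips most of them.
kspace = np.fft.fft2(img)

# Simulated fast scan: keep only every 4th row of k-space (~25% of the data).
mask = np.zeros_like(kspace)
mask[::4, :] = 1.0
undersampled = kspace * mask

# Naive ("zero-filled") reconstruction from the partial data.
recon = np.real(np.fft.ifft2(undersampled))

# The partial reconstruction deviates badly from the true image;
# closing this gap is the job proposed for the neural network.
full_err = np.abs(np.real(np.fft.ifft2(kspace)) - img).max()
partial_err = np.abs(recon - img).max()
print(f"full-data error: {full_err:.2e}, partial-data error: {partial_err:.2f}")
```

Running the sketch shows the full-data reconstruction is essentially exact while the undersampled one carries large aliasing errors, which is why a simple inverse transform is not enough and a learned model is needed to supply the missing information.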
NYU researchers published a paper on their efforts in the area last year. The work suggested the idea has potential but also showed it is still some way from real-world use, where pixel-modeling errors could lead to incorrect cancer diagnoses. To improve the system, NYU has teamed with researchers from the Facebook Artificial Intelligence Research (FAIR) group.
"We have some great physicists here and even some hot-stuff mathematicians, but Facebook and FAIR have some of the leading AI scientists in the world," Dan Sodickson, director of the Center for Advanced Imaging Innovation and Research at NYU, told TechCrunch.
The collaborators will use 3 million MRI images of the brain, knee and liver, drawn from 10,000 clinical cases, to train AI models to complete partial images. In disclosing the collaboration, Facebook stressed that all the images are de-identified and that none of its own user data will be used in the project.
At this stage it is unclear when, if ever, the project will yield a usable AI. The partners are already laying the groundwork for that day, though. All AI models, baselines, evaluation metrics and imaging data generated and used in the collaboration will be open-sourced, enabling third-party researchers to verify that the work is reproducible and to build on its findings.