The team said that the findings will boost AI-driven medical diagnostics and allow healthcare professionals to quickly and accurately diagnose patients with COVID-19 and other pulmonary diseases, because the algorithm can ‘comb through’ lung ultrasound images to identify signs of disease.
The findings are the culmination of an effort that began early in the pandemic, when clinicians needed tools to rapidly assess legions of patients in overwhelmed emergency rooms.
In a statement, senior author Muyinatu Bell, the John C. Malone Associate Professor of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science at Johns Hopkins University, said: “We developed this automated detection tool to help doctors in emergency settings with high caseloads of patients who need to be diagnosed quickly and accurately, such as in the earlier stages of the pandemic.
“Potentially, we want to have wireless devices that patients can use at home to monitor progression of COVID-19, too.”
The tool also holds potential for developing wearables that track such illnesses as congestive heart failure, which can lead to fluid overload in patients’ lungs, not unlike COVID-19, said co-author Tiffany Fong, an assistant professor of emergency medicine at Johns Hopkins Medicine.
“What we are doing here with AI tools is the next big frontier for point of care,” said Fong. “An ideal use case would be wearable ultrasound patches that monitor fluid build-up and let patients know when they need a medication adjustment or when they need to see a doctor.”
The AI tool analyses lung ultrasound images to spot features known as B-lines, which appear as bright, vertical abnormalities and indicate inflammation in patients with pulmonary complications. To train it, the researchers combined computer-generated images with real ultrasound scans of patients, including some who sought care at Johns Hopkins.
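As a loose illustration of what such computer-generated training frames might capture, the hypothetical Python sketch below draws speckle-like noise with an optional bright vertical streak standing in for a B-line. It is a toy stand-in only: the function name `toy_bline_frame` and the Rayleigh speckle model are assumptions made for illustration, not the researchers’ physics-based simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_bline_frame(size=128, with_bline=True):
    # Speckle-like background; a real simulator models acoustic wave
    # propagation rather than sampling noise directly.
    frame = rng.rayleigh(scale=0.3, size=(size, size))
    if with_bline:
        # A B-line appears as a bright vertical artefact, so add a
        # narrow bright column at a random lateral position.
        col = int(rng.integers(size // 4, 3 * size // 4))
        frame[:, col - 1:col + 2] += 1.0
    return np.clip(frame, 0.0, 1.5)

# Small labelled set: frames with and without the artefact.
positives = np.stack([toy_bline_frame(with_bline=True) for _ in range(4)])
negatives = np.stack([toy_bline_frame(with_bline=False) for _ in range(4)])
```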
The researchers said that the software can learn from real and simulated data and then discern abnormalities in ultrasound scans that indicate a person has contracted COVID-19. The tool is a deep neural network, a type of AI designed to behave like the interconnected neurons that enable the brain to recognise patterns, understand speech, and achieve other complex tasks.
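To make the training idea concrete, here is a minimal, hypothetical PyTorch sketch of a small convolutional network fitted on a batch that mixes simulated and real frames, in the spirit of learning from both data sources. The architecture, layer sizes, and names are illustrative assumptions; the authors’ actual network and pipeline are described in the paper and its released code.

```python
import torch
import torch.nn as nn

class BLineClassifier(nn.Module):
    """Tiny CNN that outputs a logit for 'B-lines present' in one frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BLineClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in tensors: in practice these would be simulated frames and
# de-identified patient scans, each labelled for B-line presence.
sim_frames = torch.rand(8, 1, 128, 128)
real_frames = torch.rand(8, 1, 128, 128)
labels = torch.randint(0, 2, (16, 1)).float()

# One training step on a batch mixing both data sources.
batch = torch.cat([sim_frames, real_frames])
optimiser.zero_grad()
loss = loss_fn(model(batch), labels)
loss.backward()
optimiser.step()
```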
“We had to model the physics of ultrasound and acoustic wave propagation well enough in order to get believable simulated images,” said Bell. “Then we had to take it a step further to train our computer models to use these simulated data to reliably interpret real scans from patients with affected lungs.”
The research is published in full in Communications Medicine, and the research code and data are publicly available.