This is the claim of the chair of the Biomedical Engineering Department at the University of Houston, whose proposed network can immediately differentiate between images of healthy skin and skin with systemic sclerosis (SSc), a disease characterised by hardening and fibrosis of the skin and internal organs.
"Our preliminary study, intended to show the efficacy of the proposed network architecture, holds promise in the characterisation of SSc," reports Metin Akay, John S. Dunn Endowed Chair Professor of biomedical engineering. The work is published in the IEEE Open Journal of Engineering in Medicine and Biology.
"We believe that the proposed network architecture could easily be implemented in a clinical setting, providing a simple, inexpensive and accurate screening tool for SSc."
According to the University of Houston, early diagnosis of SSc is critical but often elusive. Studies have shown that organ involvement could occur far earlier than expected in the early phase of the disease, but early diagnosis and determining the extent of disease progression pose a significant challenge for physicians, resulting in delays in therapy and management.
In artificial intelligence, deep learning organises algorithms into layers (an artificial neural network) that can make their own decisions. To speed up the learning process, the new network was trained using the parameters of MobileNetV2, a neural network designed for mobile vision applications, pre-trained on the ImageNet dataset of 1.4 million images.
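As a rough illustration of this kind of transfer learning (not the authors' actual code), the sketch below loads a MobileNetV2 backbone with ImageNet weights and freezes it so that only newly added layers would be trained; the TensorFlow/Keras framework and the 224x224 input size are assumptions.

```python
import tensorflow as tf

# Minimal transfer-learning sketch (assumed framework: TensorFlow/Keras).
# Load MobileNetV2 pre-trained on ImageNet, dropping its 1,000-class head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),  # assumed input size
    include_top=False,
    weights="imagenet",
)

# Freeze the pre-trained convolutional layers so their ImageNet parameters
# act as a fixed feature extractor during initial training.
base_model.trainable = False
```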
"By scanning the images, the network learns from the existing images and decides which new image is normal or in an early or late stage of disease," said Akay.
Among several deep learning networks, Convolutional Neural Networks (CNNs) are most commonly used in engineering, medicine and biology, but their success in biomedical applications has been limited by the small size of the available training sets and networks.
To overcome these difficulties, Akay and colleague Yasemin Akay combined UNet, a modified CNN architecture, with added layers and developed a mobile training module. After less than five hours of training, the results showed that the proposed deep learning architecture outperformed conventional CNNs in classifying SSc images.
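The article does not spell out exactly how the added layers are arranged, so purely as an illustration of how a classification head and fine-tuning on top of the pre-trained backbone might look, the sketch below stacks an assumed pooling/dropout/dense head for three classes (healthy, early-stage SSc, late-stage SSc); all layer choices, hyperparameters and dataset paths are hypothetical.

```python
import tensorflow as tf

# Hypothetical classification head on a frozen MobileNetV2 backbone,
# predicting three illustrative classes: healthy, early SSc, late SSc.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),  # pool feature maps to a vector
    tf.keras.layers.Dropout(0.2),              # assumed regularisation
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Fine-tuning would use labelled skin-image folders (placeholder paths):
# train_ds = tf.keras.utils.image_dataset_from_directory("train/", image_size=(224, 224))
# val_ds = tf.keras.utils.image_dataset_from_directory("val/", image_size=(224, 224))
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```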
"After fine-tuning, our results showed the proposed network reached 100 per cent accuracy on the training image set, 96.8 per cent accuracy on the validation image set, and 95.2 per cent on the testing image set," said Akay, UH instructional associate professor of biomedical engineering.