From diagnosis and pathology to drug discovery and epidemiology, healthcare's reliance on large amounts of data makes it one of the most exciting frontiers of artificial intelligence
Slowly but surely, artificial intelligence is infiltrating almost every aspect of our lives. It is already busy in the background of many routine tasks, powering virtual assistants like Siri and Alexa, driving recommendations from Amazon and Netflix, and underpinning billions of Google searches each day. But as the technology matures, AI’s impact will become more profound, and nowhere is that more apparent than in healthcare.
Healthcare’s data-heavy nature makes it an ideal candidate for the application of AI across multiple disciplines, from diagnosis and pathology to drug discovery and epidemiology. At the same time, the sensitivity of medical data raises fundamental questions around privacy and security. This juxtaposition makes healthcare one of AI’s most exciting frontiers and also potentially one of its most dangerous.
“You could look at almost any area of healthcare and see that advanced data science – if I could put it that way – has an enormous amount to offer,” Sir Mark Walport, chief executive of UK Research and Innovation (UKRI), told The Engineer. “This technology has huge potential right across the world of healthcare.”
As the former chief scientific adviser to the government and now the head of the UK’s new umbrella R&D body, Professor Walport is uniquely placed to comment on the UK’s AI healthcare policy. As part of the government’s Industrial Strategy, five new AI research centres were announced in November last year, to be located in Leeds, Oxford, Coventry, Glasgow and London. Backed by a £50m investment and due to open in 2019, these centres will focus on the rapidly advancing area of image analysis.
“This is part of the Industrial Strategy Challenge Fund and it’s about making sure we create the best opportunities, both in terms of improving healthcare and also ensuring that we grow a really important industry,” said Walport. “And through our other programmes we’re funding a great deal of research more broadly. But this (imaging) was a deliberately focused call in an area where there’s obvious potential for transformation.”
Roughly 90 per cent of all healthcare data comes from imaging technology, yet the vast majority of it goes unanalysed. The main areas of focus for the UK centres will be radiology – the science of medical imaging – and histopathology, which deals with changes in tissue caused by disease. Applying AI across these two disciplines could reshape medical diagnostics. According to Walport, the ultimate goal is to train AIs across multiple diseases so that they can suggest potential diagnoses from an X-ray, for example.
“A chest X-ray is a chest X-ray, as it were, and there are a whole range of diagnoses that can be made from any X-ray, which vary from infectious disease right through to cancer,” he explained.
Walport stresses, however, that AIs will be assistive and won’t replace medical professionals any time soon. “The potential here is to have a computer algorithm assisting humans in what are quite difficult diagnoses,” he said.
“We’ve seen some quite prominent recent examples in the media, for example the work between Google DeepMind and London’s Moorfields Hospital in the early diagnosis of macular degeneration using routinely collected images.”
Founded in 2010 and acquired by Google in 2014, DeepMind is one of the world’s leading AI companies, best known for its AlphaGo programme that toppled the world’s best players at the strategy board game Go. Since 2016, its medical arm DeepMind Health has been working with Moorfields Eye Hospital, training software to diagnose a range of ocular conditions from digitised retinal scans. That work resulted in an AI system able to recommend the correct referral decision for over 50 eye diseases with 94 per cent accuracy, matching the performance of top medical experts.
The media reaction that followed was overwhelmingly positive, but in another NHS collaboration, DeepMind Health generated less favourable headlines. Working with the Royal Free Hospital, the company developed an app called Streams that brings patient information together in one place to provide better visibility on life-threatening conditions such as sepsis and kidney failure. The app has been hailed as a success, but the Royal Free was criticised by the Information Commissioner’s Office (ICO) for the way in which the data of 1.6 million patients was shared with DeepMind.
In the wake of this, DeepMind made numerous changes, including the creation of an independent review board. However, in November 2018 it was announced that DeepMind Health would be fully incorporated into Google in California to allow Streams to be rolled out globally. This not only resulted in the scrapping of the review board, it also reneged on a promise made by DeepMind that NHS data would never be connected to Google accounts or services.
DeepMind’s pioneering work highlights what’s possible for AI in healthcare, both good and bad. Its technology is already delivering impressive results, but its data policy and ever-closer union with Google give cause for concern. People may generally be happy for their data to be used constructively, but companies have a duty to be transparent on the extent of that use, as well as protect data from nefarious parties.
“When you ask people who are ill whether they want their data used for their benefit, there’s not much doubt about the response,” said Walport. “But equally, there is a responsibility to hold their data in a fashion that is confidential, that on the one hand allows the benefits to be delivered, but on the other hand protects against any potential risks.”
GE Healthcare is one of several prominent industry partners looking to harness data as part of the UK’s new imaging project. It forms part of the Oxford-based group NCIMI (National Consortium of Intelligent Medical Imaging), which is aiming to employ AI for more personalised care, earlier diagnosis and targeted treatment.
“The initial tranche of project areas for us are in a couple of different modality areas,” John Kalafut, imaging outcomes and solutions leader at GE Healthcare, told The Engineer. “One is molecular imaging – improving the quantitative reconstruction using deep learning methodology.”
The other is radiography, or plain X-rays. With all the bells and whistles of modern technology, the humble X-ray still plays a fundamental role in medicine, used to detect everything from fractures to cancer. It is estimated that at any given time in the UK there are over 300,000 X-rays which have been waiting more than 30 days for analysis. NCIMI will see GE Healthcare and its partners looking to reduce that significantly.
“Obviously one of the challenges the NHS has is not enough radiologists or radiographers to read studies,” Kalafut explained. “So we have some algorithms we’re developing with other partners and we’re going to adapt that and clinically validate that across this consortium.”
In the fast-moving world of startups, different radiology techniques are also getting the AI treatment. London-based Lancor Scientific is behind a new type of diagnostic device that, combined with AI, could change the way that cancers are detected. Its underlying technology, known as Opto-magnetic Imaging Spectroscopy (OMIS), is based on Maxwell’s equations, which describe electromagnetism. When light is shone onto human tissue, the magnetic component of the reflected light can indicate how malignant that tissue is.
“Our method of detecting cancer is based on an aspect of cancerous tissue, or cancerous proteins, that has been known for several decades in science, which is that the electromagnetic profile changes as new molecules begin to form,” explained Aamir Butt, Lancor Scientific’s founder and CEO.
Until now, the equipment required for this type of testing – magnetic and atomic force microscopes (MFMs and AFMs) – has been both huge and expensive, limiting its penetration. Lancor’s Tumour Trace technology uses the same principle as these enormous machines, but employs light rather than radiation to detect electromagnetic changes. The device weighs about 5kg and a single test should cost no more than £10, making the technology much more widely accessible. During trials at Southend Hospital in 2018, Tumour Trace detected cervical cancer with more than 90 per cent accuracy. Using AI to remove biological noise from the signals, Butt believes this can be improved to 97 per cent, and that the device could eventually be used to screen for all cancers. “Those trials are to do with cervical cancer, and we’re also just in the final stages of setting up a cancer research laboratory on the invitation of the Austrian government in Graz,” he said. “Because of that facility, by the end of 2019, we will have four more cancers enabled on the device.”
Ordinarily, vast data sets are required to train neural networks, with brute force repetition refining the machine intelligence. Tumour Trace, however, relies on fundamental physical constants for its detection method, and requires much smaller pools of data than other diagnostic AIs as a result.
“When you train artificial intelligence algorithms you need to have tens, if not hundreds of thousands of samples,” Butt explained. “But because our technology is based on an underlying signal based on fundamental physics, the number of samples we need is actually much lower.”
The device is due to hit the market in 2019 and Lancor expects to produce more than 10,000 units over the next five years. Theoretically, that could provide in the region of 500,000 cancer screenings per day, a powerful weapon in the frontline battle against a disease that impacts so many. Where early diagnosis of cancer can dramatically improve survival rates, for some diseases – such as Alzheimer’s – early detection is less about survival and more about understanding and management. Given the planet’s ageing population and the fact that Alzheimer’s has no known cure, the number of people with dementia is set to triple to 150 million by 2050. It has been described as a ticking time bomb for global health.
“This is, according to the WHO, the biggest healthcare challenge of the 21st century,” Sina Habibi, founder and CEO of Cognetivity, told The Engineer. “One in two in wealthy countries and nine in 10 in developing countries never receive diagnosis.”
According to Habibi, existing dementia testing is primitive, with doctors conducting simple Q&As about patient memory. MRIs and CT scans can help, but these are expensive and referrals often come back negative for dementia. Using AI, Cognetivity has developed a simple screen-based test that it believes could transform diagnosis of the disease.
“Since 1901 when Dr Alzheimer examined and characterised his first patients, the diagnosis process has not changed much,” Habibi explained. “We believe that from the time the physio-chemistry of the disease starts, it takes 20-25 years until you have symptoms of memory loss to the level that triggers the full-on process of examination.”
Cognetivity’s test involves subjects being shown around 100 images in five minutes. These are flashed by for just 150 milliseconds, with subjects required to indicate as quickly as possible whether or not they’ve seen an animal. Images vary in their complexity, some featuring more information than others, making it more difficult to detect animals with certainty.
“We’ve looked at something called visual cognition – how you perceive images,” said Habibi. “When you look around your eyes capture information like a camera and your brain analyses the information inside those images. Your ability to understand what’s inside the image and respond to it is affected by the disease.”
The 100 responses provided over the course of a test are then compared with historical data, and this is where the AI component kicks in.
“We have trained our AI based on collecting data from healthy people and people with the condition, and we know if subjects have responded similarly to people with the condition or to healthier people,” Habibi said. “Another aspect of the AI is we can link the test results with the physical status of the patient, link it with all the (personal) historical data of the patient: whether they’ve had a history of traumatic brain injuries or strokes etc.”
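To make the idea concrete, the comparison Habibi describes can be pictured as classifying a subject's response profile against reference profiles from the two groups. The sketch below is purely illustrative: the features (accuracy and mean reaction time), the reference values, and the nearest-profile method are all assumptions for the purpose of the example, not Cognetivity's actual model, which has not been published.

```python
# Toy sketch: compare a subject's rapid-categorisation responses against
# hypothetical "healthy" and "impaired" reference profiles.
# All numbers and the nearest-profile method are illustrative assumptions.

def profile(responses):
    """Summarise a test run as (accuracy, mean reaction time in ms)."""
    accuracy = sum(1 for r in responses if r["correct"]) / len(responses)
    mean_rt = sum(r["rt_ms"] for r in responses) / len(responses)
    return (accuracy, mean_rt)

def classify(responses, healthy_ref, impaired_ref):
    """Assign the subject to whichever reference profile is nearer."""
    def dist(a, b):
        # Divide the reaction-time gap by 1000 so both features
        # contribute on a comparable scale
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) / 1000.0

    s = profile(responses)
    near_healthy = dist(s, healthy_ref) <= dist(s, impaired_ref)
    return "healthy-like" if near_healthy else "impaired-like"

# Hypothetical group averages: (accuracy, mean reaction time in ms)
HEALTHY = (0.92, 450.0)
IMPAIRED = (0.74, 720.0)

subject = [
    {"correct": True, "rt_ms": 480},
    {"correct": True, "rt_ms": 430},
    {"correct": False, "rt_ms": 510},
    {"correct": True, "rt_ms": 460},
]
print(classify(subject, HEALTHY, IMPAIRED))  # → healthy-like
```

A production system would of course use far richer features, per-image difficulty, and the patient's medical history, as Habibi notes, but the underlying step is the same: summarise the responses and ask which population they most resemble.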
The technology is currently being trialled at South London & Maudsley Hospital, with the study due to finish later this year. So far, the company’s attention has focused on dementia almost exclusively, but Habibi believes visual cognition could also underpin AI testing for other neurological conditions.
“Our test, the way we look at it, is a platform technology,” he said. “It’s a new approach to assessing the brain, and by training the AI we will be able to be more specific to different conditions resulting in cognitive impairment. We will have studies lined up for concussion and other neurodegenerative diseases. There’s huge potential, but we need to get clinical data, a trained AI and validate it.”
Alzheimer’s, eye disease, X-rays, cancer – it seems no area of medicine will go untouched by the power of AI. And as healthcare systems around the world creak under ever greater strains, machine intelligence may well hold the key to better human treatment.