AI in radiology and medical imaging. If we train a computer to read imaging data and compare it with the final diagnosis, it can learn a map of which change on an image leads to which diagnosis. Combined with supporting diagnostic tests, this can yield a very accurate diagnosis.
Let's start with the history of AI in radiology. The last century saw the greatest technological innovation mankind has ever seen: from the mass introduction of cars and the invention of the airplane to the computer and the pocket smartphone, we achieved growth that could never have been imagined in the past. The widespread arrival of computers on the market brought with it easy accessibility and broad adoption in the workplace.
The healthcare innovations that came with this were the broad-scale introduction of electronic medical records (EMRs) and electronic health records (EHRs), and, on a smaller scale, computer-generated and computer-analyzed diagnostic reports. Nowadays you can see screens everywhere in a hospital, from the operating room to the emergency department, which clearly demonstrates how thoroughly the field has adopted computing.
So, coming to the topic of this article: what is medical imaging? It is the use of X-rays and magnetic fields to build an image of the inside of the body. That image can be produced as plain X-rays or CT scans, which use ionizing radiation, or with MRI, which uses strong magnetic fields and radio waves. The technology has done wonders for healthcare, because one can visualize the inside of the body without opening a patient up or taking unnecessary biopsies. But it relies heavily on human interpretation of the generated images. Radiologists, the doctors who specialize in reading these images, are the most capable interpreters. Don't assume that I am about to bash radiologists for not reading images correctly; rather, they are limited by two things.
- The human eye cannot distinguish all the subtle shades of grey that appear on these modalities from one another.
- Radiologists rely mostly on past experience and knowledge to decipher an image, and a human's accumulated experience can never exceed their working lifetime.
These are the areas where improvement is most needed, because healthcare is a field where a slight error can have drastic consequences on a daily basis.
For decades there was no way around this hurdle. But with the recent widespread adoption of EMRs, in which every radiology image of every patient is stored alongside the other diagnostic reports and the final diagnosis, a way forward became evident. And it was none other than artificial intelligence. Yes, you guessed it: Brainiac (the AI character in Superman) can now solve this problem.
So how will AI do this? If we program the computer to read and compare the imaging data along with the final diagnosis, it can form a map of which change on an image leads to which diagnosis. Combined with supporting diagnostic tests, this can produce a very accurate diagnosis.
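The "map from image changes to diagnoses" idea is just supervised learning. Here is a minimal toy sketch of that mapping, using a nearest-centroid classifier on hypothetical, hand-picked image features (the feature values and labels are invented for illustration; real systems learn features from raw pixels):

```python
# Toy sketch: learning a map from image features to diagnoses.
# Each "image" is reduced to a list of numeric features (e.g. lesion
# size, mean brightness); labels are the final diagnoses from the EMR.

def train_centroids(examples):
    """Average the feature vectors seen for each diagnosis."""
    sums, counts = {}, {}
    for features, diagnosis in examples:
        acc = sums.setdefault(diagnosis, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[diagnosis] = counts.get(diagnosis, 0) + 1
    return {d: [v / counts[d] for v in acc] for d, acc in sums.items()}

def predict(centroids, features):
    """Pick the diagnosis whose centroid is closest to the new image."""
    def distance(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda d: distance(centroids[d]))

# Hypothetical training pairs: (image features, confirmed diagnosis)
training = [
    ([0.9, 0.8], "mass"), ([0.85, 0.75], "mass"),
    ([0.1, 0.2], "normal"), ([0.15, 0.1], "normal"),
]
model = train_centroids(training)
print(predict(model, [0.88, 0.7]))  # → "mass"
```

The real systems described later in this article replace the hand-picked features with deep neural networks, but the principle, pairing stored images with confirmed diagnoses to learn a mapping, is the same.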
To do this, the AI needs:
- Lots of images to train on, which is not a problem nowadays.
- High-quality data entry into EMRs by humans (which can be a setback, but at least we are trying).
- Careful engineering so that no data is lost and none of it is traceable to any individual (privacy and safety are very big concerns in this space).
- A fast network of interconnected computers that can process the data (beware of a new giant mega-corporation).
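The "not traceable to any individual" requirement is usually met by de-identifying records before they reach a training set. A minimal sketch of one common approach, replacing direct identifiers with a salted one-way hash, is below; the field names and the salt are hypothetical, and real de-identification (e.g. of DICOM metadata) involves many more fields:

```python
import hashlib

def pseudonymize(record, secret_salt):
    """Replace direct identifiers with a salted one-way hash, so the
    dataset can still link a patient's serial images without naming them."""
    token = hashlib.sha256(
        (secret_salt + record["patient_id"]).encode()
    ).hexdigest()[:16]
    # Drop fields that directly identify the patient.
    safe = {k: v for k, v in record.items()
            if k not in ("patient_id", "name", "dob")}
    safe["pseudo_id"] = token
    return safe

record = {"patient_id": "MRN-0042", "name": "Jane Doe",
          "dob": "1980-01-01", "modality": "CT", "finding": "nodule"}
clean = pseudonymize(record, secret_salt="site-secret")
print(clean)  # identifiers gone, pseudo_id stable for this patient
```

Because the same patient always maps to the same token (for a given secret salt), the AI can still follow one patient's imaging over time, which matters for the prognostic uses described later.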
All jokes aside, it makes sense to bring AI into this space, because it will immensely improve the quality of care in hospitals. But it is a challenge nonetheless.
A variety of new companies and startups are trying to improve this space with AI. One such example is Xylexa.
It focuses on automating breast mammography so that routine breast screening can be done without placing an excessive burden on radiologists, using a program called XyCad.
The program can visualize and analyze the different shadows on a mammogram and recommend a biopsy or a clinical examination, whichever is necessary.
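XyCad's internal logic is not public, but a "recommend biopsy or clinical examination, whichever is necessary" step generally reduces to thresholding a model's suspicion score. A purely illustrative sketch, with invented cutoff values that have no clinical meaning:

```python
def recommend(suspicion_score, biopsy_cutoff=0.7, exam_cutoff=0.3):
    """Turn a model's suspicion score for a mammographic shadow into a
    follow-up recommendation. The cutoffs are illustrative, not clinical."""
    if suspicion_score >= biopsy_cutoff:
        return "recommend biopsy"
    if suspicion_score >= exam_cutoff:
        return "recommend clinical examination"
    return "routine screening interval"

print(recommend(0.9))   # → "recommend biopsy"
print(recommend(0.5))   # → "recommend clinical examination"
print(recommend(0.1))   # → "routine screening interval"
```

Where the cutoffs sit is exactly the recall-rate trade-off discussed later: lower cutoffs catch more cancers but recall more healthy women.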
Radiology artificial intelligence (AI) was again the hottest topic at the Radiological Society of North America (RSNA) annual meeting in December 2019, where many AI companies had booths demonstrating their software. Some of the software was specifically designed to detect changes in a patient's imaging over time and compare them with the patient's disease status, i.e. for prognostic purposes, while other products were aimed at broad diagnostic detection.
GE Healthcare is integrating automatic image measurements into its point-of-care ultrasound (POCUS) systems, allowing data to be generated from repeated imaging of the same patient over time.
Some companies, such as TeraRecon, are building a marketplace for these AI tools so that a facility can pick and choose between them. The image demonstrates a heart valve accurately detected on MRI by an AI program.
Qure.ai has developed a plug-in box that can be widely used for tuberculosis detection in screening programs and remote facilities without the need for an internet connection. It can enable mass screening in developing countries without a heavy dependency on radiologists for image analysis.
So the future of AI in medical imaging is bright, and even radiologists are supportive. It will let AI analyze the mass of routine data and allow radiologists to focus on rare diagnostics and special cases.
Eyes of Watson, a joint venture between RSNA and IBM researchers, shows how machines of the future may assist radiologists. It is a clinical deep-learning AI with the capability to recognize and diagnose a patient's imaging studies. With advances in machine learning and artificial intelligence, a new role is emerging for machines as intelligent assistants to radiologists in their clinical workflow. A recording of the Eyes of Watson demonstration from the RSNA 2016 annual meeting shows how a clinician can select a case from various sub-specialties, attempt a diagnosis, and then see how the work-in-progress Watson technology attempts the same case. (2)
In the past it was arduous to reach a computing level capable of deep learning, but today such machines are widely available and sit at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale datasets and affordable GPU hardware has made it possible to train neural networks for data-analysis tasks and turn the dream of AI in medical imaging into reality. Architectures such as convolutional neural networks, recurrent neural networks, and deep Q-networks for reinforcement learning have shaped a brand-new scenario in signal processing.
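The convolutional neural networks mentioned above are built around one core operation: sliding a small learned kernel over the image and summing element-wise products. A minimal pure-Python sketch of that operation (real frameworks run this on GPUs over millions of kernels, but the arithmetic is the same):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most
    deep-learning libraries): slide the kernel over the image and sum
    element-wise products — the core operation of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge kernel applied to a tiny 4x4 "image" whose right
# half is bright: the response peaks exactly at the edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # → [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

In a trained network the kernels are not hand-written like this edge detector; they are learned from the labeled images, which is why the large EMR datasets discussed earlier matter so much.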
With the dawn of the new decade, diagnostic radiographers will need to learn to work alongside their 'virtual colleagues'; AI in radiology and medical imaging is a future they will have to accept. We need to implement vital changes and national professional capabilities to ensure machine-learning tools are used in the safest and most effective manner for patients, so that no breach of data or privacy occurs while the AI databases behind these neural networks grow stronger.
A lot of work in this field is coming from the University of Hawai‘i Cancer Center, where John Shepherd, PhD, founder and director of the AI-PHI, and his colleagues created the first Hawai‘i and Pacific Islands Mammography Registry and are designing a study that will analyze mammograms from 5 million women on 5 continents using deep learning. Dr. Shepherd is developing novel biomarkers and conducting research in areas including the following (3):
- Bone density and body composition, where research focuses on combining DXA and bioimpedance measures to describe fat and muscle status in athletes.
- Improving mammography's ability to detect cancer, using deep-learning models for reading mammograms to reduce overall recall rates and unnecessary biopsies for women.
- 3D optical whole-body scanning, employed to quantify body shape and detect morphological changes.
Notable advancements in the current arena, and upcoming ones, include the following (3):
- The CheXNeXt algorithm developed by Stanford researchers reads chest X-rays for 14 different pathologies. It matched the consistency of expert radiologists in recognizing the patterns, but while the experts took 4 hours, the algorithm did it in less than 3 minutes.
- Nova Vision Group's AI is now capable of analyzing over 100 diseases with 97% accuracy.
- Researchers at the University of Warwick have developed an AI algorithm that can reduce the processing time for abnormal chest X-rays from 11 days to just 3 days. The study, "Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks," was published in Radiology on January 22, 2019 (Mauro et al. 2019).
- Radiologists are using AI systems on brain scans to try to detect Alzheimer's by looking for reduced glucose levels across the brain.
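The triage result above works by reordering the reporting queue rather than by reading faster: studies the model scores as likely abnormal jump to the front. That scheduling idea can be sketched with a priority queue (the study IDs and scores here are invented for illustration):

```python
import heapq

def build_worklist(studies):
    """Order a reporting worklist so the studies with the highest
    predicted-abnormality scores are read first — the idea behind
    automated triage of radiographs."""
    # heapq is a min-heap, so negate scores to pop the largest first.
    heap = [(-score, study_id) for study_id, score in studies]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Hypothetical chest X-ray studies with model abnormality scores.
studies = [("cxr-001", 0.12), ("cxr-002", 0.94), ("cxr-003", 0.55)]
print(build_worklist(studies))  # → ['cxr-002', 'cxr-003', 'cxr-001']
```

With a first-in-first-out queue, an urgent abnormal study can sit behind days of routine normals; reordering by score is what shrinks the reporting delay for the abnormal cases.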