Using AI and old reports to understand new medical images

Example of an image-text pair of a chest x-ray and its associated x-ray report. Credit: Massachusetts Institute of Technology

Getting a quick and accurate reading of an x-ray or other medical image can be vital to a patient’s health and can even save a life. But obtaining such an assessment depends on the availability of a skilled radiologist, so a rapid response is not always possible. For this reason, says Ruizhi “Ray” Liao, a postdoctoral fellow and recent Ph.D. graduate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), “we want to train machines that are capable of reproducing what radiologists do every day.” Liao is first author of a new paper, written with other researchers at MIT and Boston-area hospitals, that is being presented this fall at MICCAI 2021, an international conference on medical image computing.

While the idea of using computers to interpret medical images is not new, the MIT-led group draws on an underutilized resource – the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice – to improve the interpretive abilities of machine-learning algorithms. The team also uses a concept from information theory called mutual information, a statistical measure of the interdependence of two variables, to make their approach more effective.
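For reference, the definition of mutual information comes from standard information theory rather than this paper specifically. For two random variables X and Y – here, roughly, the image representation and the text representation – it is

I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}

which is zero when the two variables are independent and grows the more that knowing one tells you about the other.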

Here’s how it works: First, a neural network is trained to estimate the severity of a disease, such as pulmonary edema, by being shown numerous x-ray images of patients’ lungs along with a doctor’s rating of the severity of each case. This information is encapsulated in a collection of numbers. A separate neural network does the same for text, representing its information in a different collection of numbers. A third neural network then integrates the information between images and text in a coordinated way that maximizes the mutual information between the two datasets. “When the mutual information between images and text is high, it means that images are highly predictive of the text and the text is highly predictive of the images,” says MIT Professor Polina Golland, a principal investigator at CSAIL.
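To make that concrete, here is a minimal, hypothetical sketch in PyTorch – not the authors’ code – of training an image encoder and a text encoder so that matched x-ray/report pairs score higher than mismatched ones. The InfoNCE-style contrastive loss below is a standard lower bound on mutual information; the paper’s actual estimator and architectures may differ, and every class, function, and parameter name here is made up for illustration.

```python
# Hypothetical sketch: two encoders plus a contrastive (InfoNCE-style) loss
# that serves as a lower bound on the mutual information between the
# image representation and the text representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x):            # x: (batch, 1, H, W) chest x-rays
        return self.net(x)

class TextEncoder(nn.Module):
    def __init__(self, vocab_size=5000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # simple bag-of-words report encoder
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):       # tokens: (batch, seq_len) word ids
        return self.proj(self.embed(tokens))

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Contrastive bound on mutual information: each image should be most
    similar to its own report among all reports in the batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(img.size(0))         # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy training step on random data; real training would use paired x-rays and reports.
images = torch.randn(8, 1, 224, 224)
reports = torch.randint(0, 5000, (8, 64))
img_enc, txt_enc = ImageEncoder(), TextEncoder()
loss = info_nce(img_enc(images), txt_enc(reports))
loss.backward()
```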

Liao, Golland, and their colleagues introduced another innovation that confers several advantages: rather than working from whole images and whole radiology reports, they break the reports down into individual sentences and the images into the regions those sentences refer to. Doing it this way, Golland says, “estimates the severity of the disease more accurately than looking at the whole image and whole report. And because the model is examining smaller pieces of data, it can learn more easily and has more samples to train on.”
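A rough sketch of how that “local” decomposition might look in code, again purely as an illustration: it assumes per-region image features and per-sentence text features of fixed shapes, and uses one plausible aggregation rule (each sentence is matched to its best region, then averaged over sentences), which is not necessarily the exact scheme from the paper.

```python
# Hypothetical local variant: compare image regions with report sentences
# instead of whole images with whole reports.
import torch
import torch.nn.functional as F

def local_alignment_scores(regions, sentences):
    """regions:   (batch, R, dim) features for R image patches
       sentences: (batch, S, dim) features for S report sentences
       Returns a (batch, batch) matrix where score[i, j] measures how well
       the sentences of report j are explained by the regions of image i."""
    regions = F.normalize(regions, dim=-1)
    sentences = F.normalize(sentences, dim=-1)
    # Similarity of every region of every image to every sentence of every report.
    sim = torch.einsum('ird,jsd->ijrs', regions, sentences)
    # For each (image i, report j, sentence s), keep the best-matching region,
    # then average over sentences to score the whole image-report pair.
    return sim.max(dim=2).values.mean(dim=-1)

def local_info_nce(regions, sentences, temperature=0.1):
    logits = local_alignment_scores(regions, sentences) / temperature
    targets = torch.arange(regions.size(0))   # report j belongs to image j
    return F.cross_entropy(logits, targets)

# Toy example: 4 images with 49 regions each, 4 reports with 6 sentences each.
regions = torch.randn(4, 49, 128, requires_grad=True)
sentences = torch.randn(4, 6, 128, requires_grad=True)
loss = local_info_nce(regions, sentences)
loss.backward()
```

One practical upside of this formulation, echoing Golland’s point above, is that a single image-report pair yields many region-sentence training signals rather than just one.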

While Liao finds the computer-science aspects of this project fascinating, one of his main motivations is to “develop technology that is clinically meaningful and applicable to the real world.”

The model could have very broad applicability, according to Golland. “It could be used for any type of imagery and associated text, inside or outside the medical field. This general approach, moreover, could be applied beyond images and text, which is exciting to think about.”




More information:
Ruizhi Liao et al., Multimodal Representation Learning via Maximization of Local Mutual Information, arXiv:2103.04537 [cs.CV], arxiv.org/abs/2103.04537

Provided by the Massachusetts Institute of Technology


Citation: Using AI and old reports to understand new medical images (2021, September 27) retrieved September 27, 2021 from https://techxplore.com/news/2021-09-ai-medical-images.html

This document is subject to copyright. Other than fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.
