Researchers at Ben-Gurion University of the Negev have developed a computational method that "reverse engineers" an AI's "decision" by partitioning medical images into components that are important to the AI and carry distinct clinical interpretations. Understanding the decision-making mechanism of AI models is key for deciphering biological processes and medical decisions.
The findings of the research were published in Nature Communications.
Deep learning, using artificial neural networks, is an AI-based computational method capable of learning patterns directly from data by imitating the human brain’s learning process. The primary drawback of using such AI-based methods is the inability to decipher the reasoning behind the neural network’s decision.
This limitation stems from the fact that the network’s training process is conducted automatically, directly from the data, without human intervention. This disadvantage poses a significant barrier to wider use in fields such as biology and medicine, where the explanation is no less important than the machine’s ability to make correct decisions.
Doctoral student Oded Rotem, under the guidance of Prof. Assaf Zaritsky from the Department of Software and Information Systems Engineering at Ben-Gurion University of the Negev, developed a computational method, called DISCOVER, to reverse engineer AI by breaking down an image into semantically meaningful components through which the AI makes its decision.
In collaboration with the Israeli startup AIVF, the researchers demonstrated the technology’s ability to characterize the in vitro fertilization (IVF) embryo’s features that were most significant to the AI in making a decision regarding the embryo’s visual quality.
To ensure that the technology can be applied to other domains beyond IVF, the researchers demonstrated the interpretation of AI decisions for MRI scans of the brains of Alzheimer's patients, and even for images captured by a standard camera, interpreting how the AI distinguishes between dogs and cats and between men and women.
The research team used a rich database of thousands of embryos collected by AIVF. The embryos were imaged using a light microscope, and embryologists at the company examined and ranked each embryo based on several characteristics, such as embryo size and the layer of cells surrounding the embryo in the initial stages of development, clinically termed the trophectoderm.
The researchers demonstrated that the AI can predict embryo quality with performance comparable to that of a human expert, but the AI offered the researchers no clues about which embryo features led to its successful predictions.
“Deep learning can identify hidden patterns that the human eye cannot detect in biomedical images. However, this is not enough—in order to make clinical or scientific decisions, we must decipher the mystery of discovering what the AI identified, interpret the biological or clinical significance of the explanation, and decide based on the interpretation on the next steps in treatment or research,” explained Prof. Zaritsky.
DISCOVER's interpretability mechanism relies on "deepfake" generative AI, which makes it possible, for example, to replace one person's face in an image with another's. More specifically, a second neural network can create synthetic images of embryos in a controlled manner.
Image generation is based on defining certain components in the network so that each component is both significant for predicting the embryo's quality and encodes a meaningful part of the image. Each such component captures a unique part of the image, under the assumption that it will translate into a clear, distinct, interpretable property.
Gradually changing these components, one at a time, generates embryo images that each differ from the real image in a single property important to the AI's decision-making process.
Thus, it is possible to present the same embryo to an expert in several ways, so that in each image one property is artificially “amplified” while the rest of the image remains unchanged. This method allows the expert to interpret the AI’s mode of operation and provides objective measurements that indicate the importance of each property in the decision.
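The traversal idea can be illustrated with a few lines of code. The sketch below (in PyTorch) is only a rough illustration of the general approach, not the published DISCOVER implementation: the toy encoder and decoder stand in for a trained generative model, and the function name, latent size, and amplification steps are assumptions introduced here for clarity.

```python
# Minimal sketch (not the authors' code): shifting one latent component at a
# time in a toy encoder/decoder to produce counterfactual images in which a
# single property is "amplified" while the rest of the image stays unchanged.
# The architecture, latent size, and step values are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 16   # assumed number of disentangled components
IMG_SIZE = 64     # assumed (grayscale) image resolution

# Toy stand-ins for the trained generative model's encoder and decoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(IMG_SIZE * IMG_SIZE, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, IMG_SIZE * IMG_SIZE), nn.Sigmoid())

def traverse_component(image: torch.Tensor, component: int,
                       steps=(-2.0, -1.0, 0.0, 1.0, 2.0)) -> torch.Tensor:
    """Return synthetic images that differ from `image` only in the chosen
    latent component, which is shifted by the given amounts."""
    with torch.no_grad():
        z = encoder(image.unsqueeze(0))               # (1, LATENT_DIM)
        z_batch = z.repeat(len(steps), 1)             # one copy per step
        z_batch[:, component] += torch.tensor(steps)  # amplify one property
        fakes = decoder(z_batch).view(-1, IMG_SIZE, IMG_SIZE)
    return fakes

# Usage: generate a traversal of component 3 for a random placeholder image.
example = torch.rand(IMG_SIZE, IMG_SIZE)
counterfactuals = traverse_component(example, component=3)
print(counterfactuals.shape)  # torch.Size([5, 64, 64])
```

In the published method, each such latent component is additionally constrained to influence the quality classifier's prediction, which is what ties the amplified visual property back to the AI's decision.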
By creating a series of "fake" images of embryos that never existed, the researchers were able to identify changes in the embryo's size and in the layer of cells surrounding it, consistent with the embryologists' assessments in the clinic.
In addition, the researchers were able to identify a new property that the AI flagged, without human guidance, as an important indicator of the embryo's quality: a specific structure of an internal cavity in the embryo that contains nutrients for the inner cell mass, clinically described as "blastocyst density."
“Embryologists are well aware of the importance of certain biological features in determining embryo quality, but the human eye is often limited in its ability to measure and assess them accurately,” explained Daniela Gilboa, CEO of AIVF and a clinical embryologist by training.
“A prime example of this is blastocyst density, a feature of great importance in embryo quality that is not widely used clinically because it is very difficult to quantify when visually examining the embryo in the laboratory. Now, with the visual interpretation of DISCOVER, it is possible to identify and analyze important biological properties more accurately and objectively.
“As a result, we can improve the process of selecting the embryo with the highest chances of successful implantation in the uterus, thereby increasing the chances of success in fertility treatments.”
“DISCOVER’s abilities to identify and artificially amplify image patterns that are important to the AI can be applied to other domains of biological and medical imaging, where AI is broadly applied,” noted Oded Rotem, the doctoral student who conceived and developed the method.
Dr. Galit Mazuz Perlmutter, from BGN, the commercialization company of Ben-Gurion University of the Negev, also noted the inherent potential of DISCOVER, saying, “The technology developed by Prof. Zaritsky’s lab has translational importance for various medical applications.”
More information:
Oded Rotem et al., Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization, Nature Communications (2024). DOI: 10.1038/s41467-024-51136-9