Improving AI models’ ability to explain their predictions | MIT News

In high-stakes settings like medical diagnostics, users often need to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.

Concept bottleneck modeling is one method that allows artificial intelligence systems to explain their decision-making process. These methods force a deep-learning model to make a prediction using a set of concepts that humans can understand. In recent research, MIT computer scientists developed a technique that coaxes the model to achieve higher accuracy and clearer, more concise explanations.

The concepts the model uses are often defined in advance by human experts. For instance, a clinician could suggest using concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.

But previously defined concepts might be irrelevant or lack sufficient detail for a particular task, reducing the model’s accuracy. The new method extracts concepts the model already learned while it was trained to perform that exact task, and forces the model to use them, producing better explanations than standard concept bottleneck models.

The approach uses a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. Ultimately, their technique can convert any pretrained computer vision model into one that can use concepts to explain its reasoning.

“In a way, we want to be able to read the minds of these computer vision models. A concept bottleneck model is a way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.

Building a better bottleneck

Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to predict the concepts present in an image, then using those concepts to make a final prediction.

This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.

For instance, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.
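That two-stage structure can be sketched in a few lines. This is a toy illustration under invented assumptions (random weights, made-up concept names and dimensions), not the researchers' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Everything here is hypothetical: 16 image features, 4 concepts, 3 bird classes.
CONCEPTS = ["yellow legs", "blue wings", "forked tail", "rusty throat"]
W_concept = rng.normal(size=(16, 4))   # image features -> concept scores
W_label = rng.normal(size=(4, 3))      # concept scores -> class logits

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbm_predict(features):
    """The 'bottleneck': the final prediction may use ONLY the concept scores."""
    concept_scores = sigmoid(features @ W_concept)   # interpretable intermediate step
    logits = concept_scores @ W_label                # final classifier sees concepts only
    return concept_scores, int(np.argmax(logits))

scores, label = cbm_predict(rng.normal(size=16))
print({name: round(float(s), 2) for name, s in zip(CONCEPTS, scores)})
print("predicted class index:", label)
```

Because the classifier's only input is the concept-score vector, a user can inspect those scores to see which concepts drove the prediction.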

But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even when given a set of predefined concepts, the model sometimes uses undesirable learned information anyway, a problem known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we’re unaware of,” De Santis explains.

The MIT researchers had a different idea: Since the model has been trained on an enormous amount of data, it may have already learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.

In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the target model has learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.

This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.

They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.
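The sparse-extraction step can be sketched with a top-k code, one common way to enforce sparsity in a sparse autoencoder. The sizes, weights, and choice of top-k here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 32 internal model features, 64 candidate concepts.
W_enc = rng.normal(size=(32, 64)) / np.sqrt(32)   # encoder weights
W_dec = rng.normal(size=(64, 32)) / np.sqrt(64)   # decoder weights

def sparse_encode(features, k=5):
    """Top-k sparse code: keep only the k strongest ReLU activations,
    zeroing out the rest."""
    acts = np.maximum(features @ W_enc, 0.0)
    cutoff = np.partition(acts, -k)[-k]            # k-th largest activation
    return np.where(acts >= cutoff, acts, 0.0)

def sparse_decode(code):
    """Reconstruct the original features from the sparse concept code."""
    return code @ W_dec

features = rng.normal(size=32)
code = sparse_encode(features)
recon = sparse_decode(code)
print("active concepts:", int((code > 0).sum()))
```

Training would adjust the encoder and decoder so `recon` closely matches `features`; the handful of active code entries are then the candidate concepts handed to the multimodal LLM for naming.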

Controlling the concepts

They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts accurately to determining whether the sparse autoencoder had identified human-understandable concepts.

To prevent the model from using unknown or unwanted concepts, they restrict it to using only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more comprehensible.
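A minimal sketch of that restriction: rank the concept scores and keep only the top five as the explanation. The concept names and scores below are invented for illustration:

```python
# Hypothetical skin-lesion concepts and scores, for illustration only.
CONCEPTS = ["clustered brown dots", "variegated pigmentation", "irregular border",
            "asymmetric shape", "blue-white veil", "smooth texture", "uniform color"]

def top_k_concepts(names, scores, k=5):
    """Keep only the k highest-scoring concepts; the rest are masked out,
    so each prediction and its explanation rest on at most k concepts."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return [(names[i], scores[i]) for i in sorted(ranked)]

scores = [0.92, 0.85, 0.10, 0.77, 0.05, 0.64, 0.40]
explanation = top_k_concepts(CONCEPTS, scores)
for name, score in explanation:
    print(f"{name}: {score:.2f}")
```

Capping the explanation at five concepts keeps it short enough for a person to audit while limiting the room for unwanted concepts to influence the prediction.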

When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the best accuracy while providing more precise explanations.

Their approach also generated concepts that were more applicable to the images in the dataset.

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.

In the future, the researchers want to test potential solutions to the information leakage problem, perhaps by adding additional concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.

“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”

This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.
