Enabling AI to explain its predictions in plain language | MIT News

Machine-learning models can make mistakes and be difficult to use, so scientists have developed explanation methods to help users understand when and how much they should trust a model’s predictions.

These explanations are often complex, however, perhaps containing information about hundreds of model features. And they are sometimes presented as multifaceted visualizations that can be difficult for users who lack machine-learning expertise to fully comprehend.

To help people make sense of AI explanations, MIT researchers used large language models (LLMs) to transform plot-based explanations into plain language.

They developed a two-part system that converts a machine-learning explanation into a paragraph of human-readable text and then automatically evaluates the quality of the narrative, so an end user knows whether to trust it.

By prompting the system with a few example explanations, the researchers can customize its narrative descriptions to meet the preferences of users or the requirements of specific applications.

In the long run, the researchers hope to build upon this technique by enabling users to ask a model follow-up questions about how it came up with predictions in real-world settings.

“Our goal with this research was to take the first step toward allowing users to have full-blown conversations with machine-learning models about the reasons they made certain predictions, so they can make better decisions about whether to listen to the model,” says Alexandra Zytek, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

She is joined on the paper by Sara Pido, an MIT postdoc; Sarah Alnegheimish, an EECS graduate student; Laure Berti-Équille, a research director at the French National Research Institute for Sustainable Development; and senior author Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems. The research will be presented at the IEEE Big Data Conference.

Elucidating explanations

The researchers focused on a popular type of machine-learning explanation called SHAP. In a SHAP explanation, a value is assigned to every feature the model uses to make a prediction. For instance, if a model predicts house prices, one feature might be the location of the house. Location would be assigned a positive or negative value that represents how much that feature modified the model’s overall prediction.
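To make the idea concrete, here is a minimal sketch (not the authors’ code) that computes SHAP values for a toy house-price model using the open-source shap package and scikit-learn; the feature names and data are invented for illustration.

```python
# Minimal sketch, not the paper's code: SHAP values for a toy house-price model.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Invented housing data; feature names are illustrative only.
X = pd.DataFrame({
    "location_score": [3, 8, 5, 9, 2, 7],
    "square_feet":    [900, 2400, 1500, 2000, 800, 1800],
    "num_bedrooms":   [2, 4, 3, 4, 1, 3],
})
y = [150_000, 520_000, 300_000, 480_000, 120_000, 390_000]

model = RandomForestRegressor(random_state=0).fit(X, y)

# Each prediction gets one signed value per feature: positive values pushed
# the predicted price up, negative values pushed it down.
explainer = shap.Explainer(model, X)
explanation = explainer(X)
print(dict(zip(X.columns, explanation[0].values)))
```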

Often, SHAP explanations are presented as bar plots that show which features are most or least important. But for a model with more than 100 features, that bar plot quickly becomes unwieldy.

“As researchers, we have to make a lot of choices about what we are going to present visually. If we choose to show only the top 10, people might wonder what happened to another feature that isn’t in the plot. Using natural language unburdens us from having to make those choices,” Veeramachaneni says.

However, rather than using a large language model to generate an explanation in natural language, the researchers use the LLM to transform an existing SHAP explanation into a readable narrative.

By having the LLM handle only the natural-language part of the process, the system limits the opportunity to introduce inaccuracies into the explanation, Zytek explains.

Their system, called EXPLINGO, is split into two pieces that work together.

The first component, called NARRATOR, uses an LLM to create narrative descriptions of SHAP explanations that meet user preferences. By initially feeding NARRATOR three to five written examples of narrative explanations, the LLM will mimic that style when generating text.
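That few-shot setup can be sketched as follows (the prompt wording and helper function below are assumptions for illustration, not the EXPLINGO implementation): a handful of hand-written example narratives is prepended to the prompt so the LLM imitates their style when describing a new SHAP explanation.

```python
# Illustrative sketch of few-shot prompting; not the EXPLINGO implementation.
def build_narrator_prompt(examples, shap_explanation):
    """examples: list of (shap_text, narrative) pairs written by the user."""
    parts = ["Rewrite each SHAP explanation as a short narrative, "
             "matching the style of the examples."]
    for shap_text, narrative in examples:
        parts.append(f"SHAP explanation:\n{shap_text}\nNarrative:\n{narrative}")
    parts.append(f"SHAP explanation:\n{shap_explanation}\nNarrative:")
    return "\n\n".join(parts)

examples = [
    ("location_score: +42000, square_feet: +18000, num_bedrooms: -3000",
     "The home's desirable location raised the predicted price the most, and "
     "its size added a smaller boost; the bedroom count mattered little."),
]
prompt = build_narrator_prompt(
    examples,
    "location_score: -25000, square_feet: +9000, num_bedrooms: +1000",
)
# `prompt` would then be sent to whatever LLM is in use (API call omitted).
```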

“Rather than having the user try to define what type of explanation they are looking for, it is easier to just have them write what they want to see,” says Zytek.

This enables NARRATOR to be easily customized for new use cases by showing it a different set of manually written examples.

After NARRATOR creates a plain-language explanation, the second component, GRADER, uses an LLM to rate the narrative on four metrics: conciseness, accuracy, completeness, and fluency. GRADER automatically prompts the LLM with the text from NARRATOR and the SHAP explanation it describes.
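A rough sketch of that grading prompt might look like this (the wording below is an assumption for illustration; only the four metric names come from the paper).

```python
# Illustrative sketch of a grading prompt; not the EXPLINGO implementation.
METRICS = ("conciseness", "accuracy", "completeness", "fluency")

def build_grader_prompt(narrative, shap_explanation):
    return (
        "Score the narrative against the SHAP explanation it describes.\n"
        f"Rate each of {', '.join(METRICS)} from 1 (poor) to 5 (excellent), "
        "one 'metric: score' pair per line.\n\n"
        f"SHAP explanation:\n{shap_explanation}\n\nNarrative:\n{narrative}"
    )

print(build_grader_prompt(
    "The home's location raised the predicted price the most.",
    "location_score: +42000, square_feet: +18000, num_bedrooms: -3000",
))
```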

“We discover that, even when an LLM makes a mistake doing a task, it often won’t make a mistake when checking or validating that task,” she says.

Users can also customize GRADER to give different weights to each metric.

“You can imagine, in a high-stakes case, weighting accuracy and completeness much higher than fluency, for example,” she adds.
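One simple way to realize that weighting (the numbers and the combining rule below are illustrative assumptions, not taken from the paper) is a weighted average of GRADER’s four scores.

```python
# Illustrative only: combine GRADER's per-metric scores with user-chosen weights.
def weighted_score(scores: dict, weights: dict) -> float:
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

# A high-stakes application might weight accuracy and completeness more heavily.
scores = {"conciseness": 4, "accuracy": 5, "completeness": 4, "fluency": 3}
weights = {"conciseness": 1, "accuracy": 3, "completeness": 3, "fluency": 1}
print(weighted_score(scores, weights))  # 4.25
```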

Analyzing narratives

For Zytek and her colleagues, one of the biggest challenges was adjusting the LLM so it generated natural-sounding narratives. The more guidelines they added to control style, the more likely the LLM would introduce errors into the explanation.

“A lot of prompt tuning went into finding and fixing each mistake one at a time,” she says.

To test their system, the researchers took nine machine-learning datasets with explanations and had different users write narratives for each dataset. This allowed them to evaluate the ability of NARRATOR to mimic unique styles. They used GRADER to score each narrative explanation on all four metrics.

Ultimately, the researchers found that their system could generate high-quality narrative explanations and effectively mimic different writing styles.

Their results show that providing a few manually written example explanations greatly improves the narrative style. However, those examples must be written carefully; including comparative words, like “larger,” can cause GRADER to mark accurate explanations as incorrect.

Building on these results, the researchers want to explore techniques that could help their system better handle comparative words. They also want to expand EXPLINGO by adding rationalization to the explanations.

In the long run, they hope to use this work as a stepping stone toward an interactive system where the user can ask a model follow-up questions about an explanation.

“That would help with decision-making in a lot of ways. If people disagree with a model’s prediction, we want them to be able to quickly figure out if their intuition is correct, or if the model’s intuition is correct, and where that difference is coming from,” Zytek says.