Complex, unfamiliar sentences make the brain’s language network work harder

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For instance, the researchers found this brain network was most active when reading unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when reading something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are very easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on language-processing regions found in the left hemisphere of the brain, which includes Broca’s area as well as other parts of the left frontal and temporal lobes of the brain.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language by predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence based on how the artificial language network responded to those 1,000 sentences.

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).
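The study’s actual encoding model maps real language-model activations to measured fMRI responses; as a minimal sketch of the idea, the pipeline can be imitated with synthetic data and a closed-form ridge regression (the dimensions, regularization strength, and random stand-in data here are illustrative assumptions, not the paper’s setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,000 training sentences, 256-dimensional
# language-model activations, and one scalar brain response each.
n_sentences, n_features = 1000, 256
X = rng.normal(size=(n_sentences, n_features))            # LLM activations
true_w = rng.normal(size=n_features)                      # unknown mapping
y = X @ true_w + rng.normal(scale=0.5, size=n_sentences)  # brain responses

# Fit the "encoding model" by ridge regression:
#   w = (X^T X + lambda * I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Score new candidate sentences (again synthetic) and keep the ones
# predicted to maximally drive or suppress the language network.
candidates = rng.normal(size=(500, n_features))
predicted = candidates @ w
order = np.argsort(predicted)
suppress_idx = order[:10]   # lowest predicted activity ("suppress")
drive_idx = order[-10:]     # highest predicted activity ("drive")

print(len(drive_idx), len(suppress_idx))
```

The closed loop in the study comes from then showing the selected sentences to new participants and checking the measured responses against these predictions.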

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how unusual it is compared to other sentences.

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.
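Surprisal is typically computed as the negative log probability a language model assigns to each word given its preceding context, so unexpected word sequences score higher. The study used a large neural language model for this; as a toy illustration of the same quantity, here is a bigram model with add-one smoothing over a tiny made-up corpus:

```python
import math
from collections import Counter

# Tiny training corpus standing in for a large text collection.
corpus = "we were sitting on the couch . we were sitting on the floor .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def surprisal(sentence):
    """Mean per-word surprisal in bits under an add-one-smoothed bigram model."""
    words = sentence.split()
    total = 0.0
    for prev, word in zip(words, words[1:]):
        # P(word | prev) with add-one smoothing
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
        total += -math.log2(p)
    return total / (len(words) - 1)

# A familiar word order is less surprising than a scrambled one.
print(surprisal("we were sitting on the couch"))
print(surprisal("couch the on sitting were we"))
```

The scrambled sentence receives a higher mean surprisal because each of its bigrams was never seen in training, mirroring how unusual sentences score higher against a model of typical language.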

Another linguistic property that correlated with the language network’s responses was linguistic complexity, which is measured by how much a sentence adheres to the rules of English grammar and how plausible it is, meaning how much sense the content makes, apart from the grammar.

Sentences at either end of the spectrum — either very simple, or so complex that they make no sense at all — evoked very little activation in the language network. The biggest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings to speakers of languages other than English. They also hope to explore what type of stimuli may activate language processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.