A new study published in the Proceedings of the National Academy of Sciences (PNAS) found that AI-generated messages made recipients feel more “heard” than messages generated by untrained humans, and that AI was better at detecting emotions than those individuals. However, recipients reported feeling less heard when they learned a message came from AI.
As AI becomes more ubiquitous in daily life, understanding its potential and limitations in meeting human psychological needs becomes more pertinent. With dwindling empathetic connections in a fast-paced world, many are finding their human needs for feeling heard and validated increasingly unmet.
The research, conducted by Yidan Yin, Nan Jia, and Cheryl J. Wakslak of the USC Marshall School of Business, addresses a pivotal question: Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?
“In the context of an increasing loneliness epidemic, a big part of our motivation was to see whether AI can actually help people feel heard,” said the paper’s first author, Yidan Yin, a postdoctoral researcher at the Lloyd Greif Center for Entrepreneurial Studies at USC Marshall.
The team’s findings highlight not only the potential of AI to enhance human capacity for understanding and communication, but also raise important conceptual questions about the meaning of being heard and practical questions about how best to leverage AI’s strengths to support greater human flourishing.
In an experiment and a subsequent follow-up study, “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities,” said Nan Jia, associate professor of strategic management.
The USC Marshall research team investigated people’s feelings of being heard and other related perceptions and emotions after receiving a response from either AI or a human. The survey varied both the actual source of the message and its ostensible source: participants received messages that were actually generated by an AI or by a human responder, and were told that the message was either AI or human generated.
“What we found was that both the actual source of the message and the presumed source of the message played a role,” said Cheryl Wakslak, associate professor of management and organization at USC Marshall. “People felt more heard when they received an AI message than a human message, but when they believed a message came from AI, this made them feel less heard.”
AI bias
Yin noted that their research “basically finds a bias against AI. It’s helpful, but they don’t like it.”
Perceptions of AI are bound to change, Wakslak added: “Of course these effects may change over time, but one of the interesting things we found was that the two effects we observed were fairly similar in magnitude. Whereas there is a positive effect of getting an AI message, there is a similar degree of response bias when a message is identified as coming from AI, leading the two effects to essentially cancel each other out.”
Individuals further reported an “uncanny valley” response, a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape of AI-human interactions.
The research survey also asked participants about their general openness to AI, which moderated some of the effects, Wakslak explained.
“Individuals who feel more positively toward AI don’t exhibit the response penalty as much, and that is intriguing, because over time, will people gain more positive attitudes toward AI?” she posed. “That remains to be seen … but it will be interesting to see how this plays out as people’s familiarity and experience with AI grows.”
AI offers better emotional support
The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.
Yin explained: “Paradoxically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating. Humans may potentially learn from AI, because a lot of times when our significant others are complaining about something, we want to provide that validation, but we don’t know how to do so effectively.”
Rather than AI replacing humans, the research points to the different benefits of AI and human responses. The technology could become a valuable tool, empowering humans to use AI to help them better understand one another and learn how to respond in ways that provide emotional support and demonstrate understanding and validation.
Overall, the paper’s findings have important implications for the integration of AI into more social contexts. Leveraging AI’s capabilities might provide an inexpensive, scalable solution for social support, especially for those who might otherwise lack access to people who can provide it. However, as the research team notes, their findings suggest that careful consideration must be given to how AI is presented and perceived in order to maximize its benefits and reduce negative responses.