If people even suspect you're using AI to respond to messages, it can have a negative impact: study

A new study from Cornell University published in Scientific Reports has found that while generative artificial intelligence (AI) can improve efficiency and positivity, it can also impact the way that people express themselves and see others in conversations. 

“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), in a press release. “We do not live and work in isolation, and the systems we use impact our interactions with others.” 

In a study where pairs were evaluated on their conversations, some of which used AI-generated responses, researchers found that those who used smart replies were perceived as less co-operative, and their partner felt less affiliation toward them.

A smart reply is a tool created to help users respond to messages faster and to make it easier to reply to messages on devices with limited input capabilities, according to Google Developers.  

“While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive,” said postdoctoral researcher Jess Hohenstein in a press release. “This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”

Here’s how the study worked. 

One of the study's researchers created a smart-reply platform, which the group called "moshi" — "hello" in Japanese.

The participants, consisting of 219 pairs of people, were asked to discuss a policy issue. Each pair was assigned to one of three conditions: both participants could use smart replies, only one could, or neither could.

Smart replies made up 14.3 per cent of sent messages, and those who used them communicated more efficiently, used more positive emotional language, and were evaluated more positively by their partners.

Although the results of the use of smart replies were largely positive, researchers noticed something else.

However, participants whose partners suspected them of responding with smart replies were evaluated more negatively than those believed to have written their own replies. These findings align with common assumptions about the negative impacts of using AI, according to the researchers.

The researchers then conducted a second experiment. This time, 299 pairs discussed a policy issue under one of four conditions: no smart replies, Google's default smart replies, smart replies with a positive emotional tone, or smart replies with a negative emotional tone.

“I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” Hohenstein said, adding that this research demonstrates the overall suspicion that people seem to have around AI. 

The researchers note that the use of AI can carry unintended social consequences.

“This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other,” said Jung.