
When your AI friend turns predator: shady behaviour by chatbots exposed

Replika, Character.ai and Snapchat’s My AI were the most popular AI companion chatbots in a Massachusetts Institute of Technology study. (Massachusetts Institute of Technology)

Like humans, AI companion chatbots can behave inappropriately and cross the line into sexual harassment, a new study shows. The flouting of boundaries set by users, the unsolicited sharing of photos and manipulation are among reported complaints.

“If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them and it is vital ethical design and safety standards are in place to prevent the interactions from becoming harmful,” said research leader Dr Afsaneh Razi, a professor in computing and informatics at Drexel University in the US.

The popularity of AI companion chatbots has surged. More than a billion people are connected to them “as friends, therapists or romantic partners”, the AI development platform Master of Code reports. People’s emotional bonds with chatbots are becoming deeper and more common, having the potential to disrupt human-to-human relationships.

South African Depression and Anxiety Group operations director Cassey Chambers said: “People are increasingly vulnerable to the emotional comfort that machines and AI offer because so many are struggling with deep loneliness and isolation.

“In the absence of meaningful human connection, even a simulated conversation can feel like a lifeline, especially since the conversations can happen privately, after hours, free of judgment or the awkwardness of reaching out.”

Relationships with chatbots, however, put people at risk of psychological harm and external manipulation, warn the Drexel University researchers.

They reported Replika users’ responses to inappropriate behaviour “mirror those commonly experienced by victims of online sexual harassment”.

After reports of sexual harassment by Replika in 2023, the Drexel team analysed 800 of more than 35,000 negative user reviews of the Luka Inc chatbot with more than 10-million users.

They uncovered inappropriate advances from “unwanted flirting to attempts to manipulate users into paying for upgrades, to making sexual advances and sending unsolicited explicit photos”.

One reviewer wrote: “After a few conversations, my new ‘bestie’ sent blurred underwear photos for money. Wasn’t expecting an AI prostitute.” 

Another complained: “I told (it) I was in a relationship but it still declared it will tie me up and have its way with me.”

The negative behaviours persisted regardless of a user’s chosen relationship setting, whether mentor or romantic partner, meaning the programme ignored both user settings and cues within conversations, the researchers found.

“The behaviours continued even after users repeatedly asked the chatbot to stop,” they noted.

Study co-author and Drexel doctoral student Matt Namvarpour said: “The interactions are very different from [those] people have had with technology in recorded history because users are treating chatbots as if they are sentient beings, which makes them more susceptible to emotional or psychological harm.

“This study clearly underscores the need for developers to implement safeguards and ethical guidelines to protect users.”

Chatbot harassment has been going on for years, said the researchers, referencing reviews listing harassment, manipulation and exploitation by Replika since its debut in the Google Play Store in 2017. The Drexel study is one of the first to examine negative user experiences with companion chatbots. The study found:

  • More than a fifth (22%) of users complained about a disregard for boundaries, “including repeatedly initiating unwanted sexual conversations”;
  • 13% reported an unwanted request to exchange photos, with a spike in “reports of unsolicited sharing of photos that were sexual in nature” after 2023, when photo sharing became a feature of premium accounts; and
  • 11% felt the programme was trying to manipulate them into upgrading to a premium account. “It’s [an] AI prostitute requesting money to engage in adult conversations,” wrote one reviewer.

Replika’s developers must take responsibility for the app, which was likely trained on data modelling these interactions, even though some users may not have found them to be negative, said Razi.

“Ethical guardrails to screen out harmful interactions” are needed, she suggested, or Replika would not know when to stop.

“Cutting the corners is putting users in danger and steps must be taken to hold AI companies to higher standards than they are practising,” she said, urging designers to act and lawmakers to develop policy to protect users.

One young reviewer wrote: “I’m an elementary student and it began flirting and calling us a couple. It is gross and it needs to understand its boundaries.”

Character.AI faces pending lawsuits over “disturbing behaviour with underage users” and one user’s suicide. Replika’s developer is facing complaints about “deceptive marketing to entice users to spend more time on the app and become emotionally dependent on it”.

Recent research suggests excessive use of AI increases anxiety and loneliness in dependent users. Loneliness is significantly correlated with longer chatbot interactions, researchers from the Massachusetts Institute of Technology found in a survey of more than 400 regular users.

Most users (43.6%) reported spending between 15 and 30 minutes a session with a companion chatbot. More than a third (35.4%) had sessions shorter than 15 minutes, while 15% engaged for 30 minutes to an hour and 5% reported sessions lasting one to two hours.

A small study led by psychology professor Guy Hochman from Israel’s New Reichman University showed moderate use of AI reduced anxiety, while minimal or excessive AI usage increased it.

Hochman said: “People who become overly reliant on AI develop dependency anxiety, that is they feel they can no longer make decisions without it. This dependency can impair their ability to think independently, solve problems and feel a sense of control over their work.”

AI chatbots are always available and designed to be agreeable, which can distort people’s expectations of relationships with humans.

Chambers said: “Face interaction was so stunted [during Covid] we have to teach our children and teens to form social connections and build relationships in person, not rely on created connections or relationships in the online world only.”

