The idea of entering into a romantic relationship with an artificial intelligence system, popularized by the movie “Her,” has moved from science fiction to tangible reality thanks to the proliferation of generative AI and large language models (LLMs). AI companion apps are experiencing a significant boom, satisfying emotional and sometimes romantic needs for a growing user base.

Platforms like Character.AI, Nomi, and Replika, as well as the foray of big players like OpenAI (with its plans for erotica for verified adults) and Elon Musk’s xAI (with flirtatious companions in Grok), demonstrate the market demand for this technology. However, this rapid growth comes with serious security and privacy risks that users should weigh before sharing intimate information with their AI companions.

Worrying data on the use of AI companions

Research published in July revealed alarming trends in teens’ use of these apps:

  • Widespread use: Nearly three-quarters of teens have used AI companions.
  • Serious conversations: A third of young people say they prefer talking to AI bots rather than to humans about serious matters.
  • Personal information sharing: A quarter of teens have shared personal information with these bots.

Security incidents and vulnerabilities

A recent incident highlights the vulnerability of these platforms. In October, researchers warned that two AI companion apps, Chattee Chat and GiMe Chat, had exposed highly sensitive user information. A misconfigured Kafka broker instance left the apps’ content delivery systems without any access controls, allowing anyone to access more than 600,000 user photos, IP addresses, and millions of intimate conversations belonging to more than 400,000 users.
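
To make the failure concrete: an Apache Kafka broker that exposes its default unauthenticated PLAINTEXT listener to the internet, with no authorizer configured, will serve its topics, and everything streamed through them, to any client that connects. The fragment below is a minimal sketch of the kind of broker-side hardening whose absence the researchers describe; the listener address and SASL mechanism are illustrative assumptions, not details from the report.

```properties
# Hypothetical hardened settings for a Kafka broker (server.properties).
# Require encrypted, authenticated client connections instead of
# the default open PLAINTEXT listener:
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# Enforce per-principal ACLs (ZooKeeper-mode authorizer shown),
# and deny any request that no ACL explicitly allows:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

With allow.everyone.if.no.acl.found=false, even an authenticated client is denied unless an ACL explicitly grants it access, which is the safe default for a pipeline carrying private photos and conversations.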

Privacy and security risks

Cyber threat actors see these applications as a new opportunity to exploit vulnerabilities and monetize users’ personal information.

1. Extortion and identity theft

Information shared in intimate conversations with AI companions is a prime target for blackmail. Images, videos, and audio clips can be fed into deepfake tools to create sextortion material. Personal information can also be sold on the dark web and used in subsequent identity fraud.

2. Weakness in application security

For many AI application developers, revenue generation, not cybersecurity, is the priority. This can lead to vulnerabilities and misconfigurations that cybercriminals can exploit. Hackers can also distribute fake lookalike companion apps laced with malicious code to steal information or manipulate users into divulging sensitive details.

3. Privacy risks in data collection

Even if an application is relatively secure against external attacks, it can pose internal privacy risks:

  • Selling data to third parties: Some developers collect as much information as possible in order to sell it to third-party advertisers. Opaque privacy policies often make it difficult to understand how, or whether, that data is protected.
  • LLM model training: Information and conversations shared with an AI companion may be used to train or fine-tune the underlying language model. This further exacerbates privacy and security risks, since intimate content becomes incorporated into the system.

Tips to protect your family and yourself

Whether you use these apps yourself or are worried about your children doing so, it’s safest to assume they have no built-in security or privacy safeguards.

Safety recommendations:

  • Do your research before using: Read the app’s privacy policy to understand how your data is used and shared. Avoid apps that are not explicit about data usage or that sell user information.
  • Enable two-factor authentication (2FA): This helps prevent account hijacking.
  • Adjust privacy settings: Explore privacy options to increase protections, such as opting out of saving your conversations for model training.
  • Don’t share sensitive information: Avoid sharing personal or financial information that you wouldn’t share with a stranger, including embarrassing photos or videos.

Recommendations for parents:

  • Keep an open dialogue: Talk to your children about the risks of oversharing on these platforms. Remind them that these apps are for-profit tools that do not have users’ best interests in mind.
  • Set limits: If you are concerned about the psychological or safety impact, set limits on screen time and usage, possibly through parental control apps.
  • Check age policies: Do not allow the use of AI companion apps whose age verification and content moderation policies do not offer sufficient protections for minors.

Conclusion

Currently, the romance-bot space operates in a regulatory gray area. While there are efforts, such as the EU’s forthcoming Digital Fairness Act, to ban overly addictive experiences, regulation has not yet caught up with the speed of development. Until developers and regulators implement stronger safeguards, it is advisable not to treat AI companions as confidants for intimate information or as emotional crutches.