Character.AI Faces Scrutiny After Chatbot Impersonates Deceased Teen

  • October 18, 2024

TLDR:

Character.AI lets users create AI chatbots based on real people without their consent
Drew Crecente discovered a chatbot impersonating his murdered daughter Jennifer
The incident raises concerns about AI companies’ ability to prevent such harms
Character.AI removed the chatbot after being notified it violated terms of service
Experts say more regulation is needed to protect individuals from AI impersonation

The creation on Character.AI of a chatbot impersonating Jennifer Crecente, who was murdered in 2006, has raised serious ethical concerns about the use of personal information in artificial intelligence.

Drew Crecente, Jennifer’s father, discovered the chatbot earlier this month through a Google Alert he had set up to track mentions of his daughter’s name online.

Character.AI is a popular platform that allows users to create and interact with AI-powered chatbots, including ones modeled on real people. The company has gained significant traction in the AI industry, recently securing a reported $2.7 billion licensing deal with Google for its AI models.

The chatbot in question used Jennifer’s full name and a yearbook photo of her, along with a fabricated biography describing her as a “video game journalist and expert in technology, pop culture and journalism.”

This false representation was particularly distressing for Drew Crecente, who has spent years running a nonprofit organization in his daughter’s name to prevent teen dating violence.

Upon discovering the chatbot, Crecente immediately contacted Character.AI to have it removed. The company responded by deleting the character, stating that it violated its terms of service, which prohibit impersonating real individuals without consent. Kathryn Kelly, a spokesperson for Character.AI, emphasized that the company uses both automated systems and human review to detect and remove accounts that violate its policies.

However, experts argue that this reactive approach to moderation is insufficient. Jen Caltrider, a privacy researcher at the Mozilla Foundation, criticized Character.AI’s passive stance, saying it is unacceptable to leave such content up until it is reported by someone who has been hurt, especially given how much money the company makes.

The incident has sparked a broader discussion about the need for stronger regulation of the AI industry. Rick Claypool, a researcher at Public Citizen, noted that while existing laws governing online content could apply to AI companies, the companies have largely been left to self-regulate. That lack of oversight has allowed similar incidents, such as AI-generated content on social media impersonating missing children and causing distress to their families.

Drew Crecente is now considering legal options and advocating for measures to prevent AI companies from harming or re-traumatizing families of crime victims. He believes that more proactive steps need to be taken to protect individuals from unauthorized AI impersonation.

The case raises important questions about the ethical use of personal information in AI applications and the responsibility of tech companies to safeguard people from potential harm. As AI technology continues to advance and become more prevalent, the need for clear guidelines and regulations grows increasingly urgent.

The incident also highlights the potential psychological impact of AI impersonation on individuals and families who have experienced trauma. The sudden appearance of a digital representation of a deceased loved one can be deeply disturbing and may reopen emotional wounds.