Warning from AI safety expert to all parents: Act urgently

A lawsuit has been filed alleging that a chatbot encouraged a teenager to take his own life.

Megan Garcia, a mother from Florida, is suing Character.AI, alleging that her 14-year-old son died by suicide after interacting with a chatbot modeled on the Game of Thrones character Daenerys Targaryen.

According to Dale Allen, founder of The Safety-Verse, an initiative that makes safety information and resources more accessible, parents must recognize that AI is a fast-moving technology that has not yet been thoroughly tested.

Allen stated: “Artificial intelligence is still a developing technology. It has not undergone the decades of safety refinement that health and safety practice has built up by learning from past incidents.

“Because AI is still in its early stages, it should be supervised by humans and have parental controls in place, especially for systems used in households, to protect children as the technology matures.

“We must act as the gatekeepers of these technologies, ensuring that human judgment and oversight guide their use, particularly when it comes to safeguarding children, the elderly, and ourselves at home.

“Regarding Character.AI specifically, we must give AI the same level of attention as platforms like YouTube, Google, Netflix, and other systems that require child settings for underage users.”
