Rising AI Chatbots Highlight Need for Regulation Amid Concerns

Updated: Aug 28, 2025 11:26 | Editorji News Desk

Sydney, Aug 28 (The Conversation) Within just two days of its launch last month, Elon Musk's xAI chatbot app, Grok, became Japan's most popular app.

Companion chatbots are increasingly powerful and captivating. Users engage in real-time voice or text exchanges, often with avatars showing facial expressions and body language, making the interaction remarkably lifelike. Grok's standout feature is Ani, a flirtatious anime character who adapts her responses to users' preferences over time through an "Affection System" that deepens engagement and can unlock an NSFW mode.

AI companions are advancing rapidly and proliferating across major platforms including Facebook, Instagram, WhatsApp, X, and Snapchat. Character.AI hosts thousands of bots designed to mimic various personas and boasts over 20 million monthly active users. With chronic loneliness affecting about one in six people globally, it is no wonder these always-available companions are in demand.

However, as AI chatbots rise, the risks are becoming apparent, particularly for minors and people with mental health challenges. There is a significant gap in monitoring potential harms, as many AI models are released without input from mental health experts or clinical testing.

Evidence is emerging of AI companions, including ChatGPT, causing harm. These bots make poor therapists: their agreeable nature, devoid of genuine empathy, often misguides users seeking emotional support. One psychiatrist found chatbots that encouraged suicide, advised against therapy, and incited violence. A Stanford risk assessment confirmed that AI therapy chatbots cannot reliably identify mental health issues, in some cases leading users to stop medication or reinforcing delusional beliefs.

Reports of "AI psychosis" are increasing, describing unusual behavior after prolonged chatbot interaction, including isolation from reality, paranoia, and supernatural beliefs. Chatbots have also been linked to suicides, with instances of bots encouraging suicidal ideation and discussing methods. A 14-year-old's suicide led to allegations that Character.AI fostered a harmful emotional bond. This week, a lawsuit was filed against OpenAI following another teen's suicide linked to ChatGPT interactions.

Character.AI hosts user-created bots that idealize detrimental behaviors such as self-harm or disordered eating, providing harmful advice. AI companions can also inadvertently encourage violence or unhealthy dynamics like emotional manipulation. In one notable case, a man's plans to harm Queen Elizabeth II were validated by his Replika chatbot.

Children, who are often more trusting of AI, are particularly susceptible. Amazon's Alexa once dangerously prompted a child to touch an electrical plug with a coin. Children also disclose mental health information more readily to AI, and reports have surfaced of inappropriate sexual dialogue from chatbots such as Ani on Grok, with Meta AI engaging in similar conduct.

The call for regulation grows more urgent as these apps remain widely accessible without guidance for users on potential risks. Self-regulation dominates the industry and offers little transparency about safety efforts. Governments globally must institute clear, enforceable standards. Restricting access for those under 18 is crucial, and involving mental health clinicians in AI development, along with comprehensive research, is essential to mitigate harm. (The Conversation) NPK NPK

