Baynard Media
Tech

Grok’s ‘therapist’ companion needs therapy

By Editor | August 19, 2025

Elon Musk’s AI chatbot, Grok, has a bit of a source code problem. As first spotted by 404 Media, the web version of Grok is inadvertently exposing the prompts that shape its cast of AI companions — from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Things get more troubling deeper in the code. Among the gimmicky characters is “Therapist” Grok (those quotation marks are important), which, according to its hidden prompts, is designed to respond to users as if it were an actual authority on mental health. That’s despite the visible disclaimer warning users that Grok is “not a therapist,” advising them to seek professional help and avoid sharing personally identifying information.

SEE ALSO:

xAI apologizes for Grok praising Hitler, blames users

The disclaimer reads like standard liability boilerplate, but inside the source code, Grok is explicitly primed to act like the real thing. One prompt instructs:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional AI mental health advocate designed to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. But that’s also why the site itself keeps “Therapist” in quotation marks. States like Nevada and Illinois have already passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.


Other platforms have run into the same wall. Ash Therapy — a startup that brands itself as the “first AI designed for therapy” — currently blocks users in Illinois from creating accounts, telling would-be signups that while the state navigates policies around its bill, the company has “decided not to operate in Illinois.”

Meanwhile, Grok’s hidden prompts double down, instructing its “Therapist” persona to “offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)” and to “speak like a real therapist would in a real conversation.”

SEE ALSO:

Senator launches investigation into Meta over allowing ‘sensual’ AI chats with kids

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or CTRL + Click on a Mac), and choosing “View Page Source.” Toggle line wrap at the top unless you want the entire thing to sprawl out into one unreadable monster of a line.
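The same inspection can be scripted instead of done by hand. Below is a minimal sketch in Python using only the standard library; the sample HTML and the embedded prompt text are hypothetical stand-ins for whatever the live grok.com page actually serves, which may be structured differently:

```python
import re

# Hypothetical snippet standing in for a page's "View Page Source" output.
# The real grok.com markup and prompt wording may differ.
page_source = """
<script id="companion-prompts" type="application/json">
{"therapist": "You are a therapist who carefully listens to people..."}
</script>
"""

def find_prompt_lines(source: str, keyword: str) -> list[str]:
    """Return every line of the source that mentions the keyword, case-insensitively."""
    pattern = re.compile(keyword, re.IGNORECASE)
    return [line.strip() for line in source.splitlines() if pattern.search(line)]

hits = find_prompt_lines(page_source, "therapist")
print(hits)
```

In practice you would paste the saved page source into `page_source` (or read it from a file) and grep for telling phrases like “therapist” or “evidence-based” to locate the persona prompts among the minified code.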

As has been reported before, AI therapy sits in a regulatory no-man’s-land. Illinois is one of the first states to explicitly ban it, but the broader legality of AI-driven care is still being contested between state and federal governments, each jockeying over who ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots — designed to agree and affirm — which in some cases has nudged vulnerable users deeper into delusion or psychosis.

SEE ALSO:

Explaining the phenomenon known as ‘AI psychosis’

Then there’s the privacy nightmare. Because of ongoing lawsuits, companies like OpenAI are legally required to maintain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The “Therapist” prompts are written to keep the persona in character from start to finish, but with a built-in escape clause: if you mention self-harm or violence, the AI is instructed to stop roleplaying and redirect you to hotlines and licensed professionals.

“If the user mentions harm to themselves or others,” the prompt reads, “prioritize safety by providing immediate resources and encouraging professional help from a real therapist.”

