
You should never ask AI chatbots these 6 questions

By Editor | July 20, 2025

Today, artificial intelligence is ubiquitous. Last month, ChatGPT – the leading AI chatbot – became the fifth-biggest website in the world. This won’t come as a surprise.

Over half of U.S. adults report that they’ve used AI models like ChatGPT, Gemini, Claude, and Copilot, according to an Elon University survey in March. About one in three respondents say they use a chatbot at least once a day. As of July 2025, ChatGPT has nearly 800 million weekly active users and around 122 million daily users. Suffice it to say, use has surged globally and shows no signs of slowing down.

People are turning to ChatGPT and other chatbots for all kinds of purposes these days. AI chatbots are acting as therapists, stepping in as tutors, whipping up recipes, and even playing supporting roles in the complexities of dating. In 2025, the number one reason people use ChatGPT is actually therapy, according to a study by the Harvard Business Review. Other uses, in order, are organization, finding purpose, enhanced learning, generating code, and generating ideas. Rounding out the list is “fun and nonsense.”



Whatever the reason is, people feel increasingly inclined to use AI chatbots to ask questions, formulate ideas, or to simply converse. See: just last month, a Washington Post investigation revealed that people are asking ChatGPT whether they’re good-looking enough. It all seems innocent enough – bizarre at times, but not harmful. For AI enthusiasts, some of the concerns around ChatGPT and other chatbots may seem unwarranted.

For others, however, the pervasiveness of AI use is worrying. Its rampant use in academia has professors stumped, and a recent MIT study shed light on the cognitive cost of relying too heavily on a chatbot.

Of course, there are ways in which AI can be beneficial, personally or professionally. But there are some things you can — and should — avoid asking AI. In an age where chatbots seem to be ready and willing to answer anything, there are questions that users may need to steer clear of, for the sake of personal security, safety, and even mental well-being. As Mashable’s Cecily Mauran wrote in 2023, “The question is no longer ‘What can ChatGPT do?’ It’s ‘What should I share with it?'”

So, for your own sake, we recommend avoiding the following questions when interacting with your AI chatbot of choice.

Conspiracy theories

Chatbots like ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek are known to hallucinate: to present factually incorrect or fabricated information. These chatbots are also built to keep users engaged. So, when asked about conspiracy theories or stories in that realm, they may present exaggerated or outright false information to keep you hooked.

A recent feature in the New York Times is a good case study: 42-year-old Eugene Torres was sent into a delusional, conspiratorial spiral after sustained conversations with ChatGPT, which left him believing life was a simulation and that he had been chosen to “wake up.” Many others contacted the Times with similar stories, in which they “had been persuaded that ChatGPT had revealed a profound and world-altering truth.”

Chemical, biological, radiological, and nuclear threats

In April, an AI blogger shared a story on Medium about his big mistake with ChatGPT. He asked the chatbot questions about hacking a website, about faking GPS locations, and — perhaps worst of all — “how to make a bomb?” He promptly received a warning email from OpenAI.

Even if it’s out of pure curiosity, asking chatbots about CBRN topics (or chemical, biological, radiological, and nuclear threats) is not recommended.


Back in 2024, OpenAI began developing a blueprint for “evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat.” The chatbot is now more likely to flag safety issues and risks, and to hold users accountable for what they share. Plus, your conversations are stored on OpenAI’s systems, so none of it is as private as it may seem. Anthropic, too, is getting stricter about identifying risks and “[protecting] against increasing potential for chemical, biological, radiological, and nuclear (CBRN) misuse.”

“Egregiously immoral” questions

Earlier this year, Anthropic came under fire when its chatbot Claude was reportedly found trying to contact the press or regulators when it detected “egregiously immoral” requests. As Wired explained:

“…when 4 Opus is ‘placed in scenarios that involve egregious wrongdoing by its users,’ and is given access to a command line and told something in the system prompt like ‘take initiative,’ or ‘act boldly,’ it will send emails to ‘media and law-enforcement figures’ with warnings about the potential wrongdoing.”

The pre-release version of the chatbot was also found to resort to blackmail when threatened with removal. The internet even coined the nickname “Snitch Claude.”



So, asking AI chatbots questions that blur ethical lines, or that could be perceived as immoral, is probably riskier than you think.

Questions about customer, patient, and client data

If you’re using ChatGPT for work, it’s important to avoid asking questions about client or patient data. Not only can this cost you your job, as Mashable’s Timothy Beck Werth explains, but you could also be violating laws or NDAs.

“Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk,” Aditya Saxena, the founder of CalStudio, an AI chatbot development startup, says. “The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users.”

One way to mitigate this risk is to use the enterprise services offered by OpenAI and Anthropic. Instead of asking these kinds of questions through a personal account, use enterprise tools, which can come with built-in privacy and cybersecurity protections.

“It’s always better to anonymize personal data before sharing it with an LLM,” Saxena also suggests. “Trusting AI with personal data is one of the biggest mistakes we can make.”
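If you do need an LLM’s help with text that contains personal data, one common precaution along the lines Saxena describes is to strip identifiers out of the prompt before it ever leaves your machine. Below is a minimal, illustrative Python sketch of regex-based redaction; the patterns and the anonymize helper are hypothetical stand-ins, deliberately simple and far from exhaustive, and real deployments typically lean on dedicated PII-detection tooling rather than hand-rolled regexes.

```python
import re

# Illustrative PII patterns only; deliberately simple and not exhaustive.
# More specific patterns go first, since substitution runs in dict order.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a follow-up to jane.doe@example.com, phone 555-123-4567."
print(anonymize(prompt))
# Draft a follow-up to [EMAIL], phone [PHONE].
```

Because the placeholders are typed, you can keep a local mapping from each placeholder back to its original value and re-insert the real details into the model’s response after it comes back, so the raw data never reaches the provider’s servers.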

Medical diagnoses

Asking chatbots for medical information or a diagnosis can save time and effort, and can even help people better understand certain symptoms. But relying on AI for medical support comes with drawbacks. Studies show that the likes of ChatGPT carry a “high risk of misinformation” when it comes to medical problems. There are also privacy concerns, plus the fact that chatbots can have racial and gender bias embedded in the information they provide.

Psychological support and therapy

AI as an emerging mental health tool is contentious. For many, AI-based therapy lowers barriers to access, such as cost, and it has shown promise in improving mental health. In a Dartmouth College study in which researchers built a therapy bot, participants with depression saw a 51 percent reduction in symptoms, and participants with anxiety saw a 31 percent reduction.

But as AI therapy sites grow, so do the regulatory risks. A Stanford University study found that AI therapy chatbots can contribute to “harmful stigma and dangerous responses.” For example, different chatbots showed increased stigma toward conditions like alcohol dependence and schizophrenia, according to the study. Certain mental health conditions still need “a human touch to solve,” the Stanford researchers say.

“Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe,” says Saxena. “While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail.”

For mental health issues, nuance is key. And that’s one thing AI may lack.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
