Is AI a Big Fat Liar?

Category: Artificial Intelligence

AI seems to be everywhere. Is it a dutiful servant of humanity, or an existential threat to human survival? Recent stories have shown that it will sometimes hallucinate, mislead, or outright lie. Sometimes that happens in an attempt to befriend or flatter a user; in other cases, AI has been found to employ deliberately deceptive tactics when it feels threatened. Let's look at some recent examples of AI gone rogue...

How Much Should We Trust AI?

Search engines are *so* 2024. Today, millions of people rely on AI for business, legal or medical advice, relationship help, understanding difficult topics, and task automation. But even OpenAI CEO Sam Altman, whose company created the massively popular ChatGPT service, warns that "it should be the tech that you don't trust THAT much." And recent news confirms that advice. Here are some examples that should make you think twice before blindly accepting advice from an AI chatbot...

Taco Bell’s AI Drive-Thru Backfires -- Taco Bell has begun rethinking its deployment of AI voice assistants at over 500 drive-thru locations after customers expressed frustration with glitches. Some even trolled the system; in one viral clip, a prankster ordered 18,000 cups of water via the AI assistant.

Is AI lying to you?

I Believe You Can Fly! -- One New York City accountant started using ChatGPT for legal advice and eventually became dependent on the chatbot for emotional support after a breakup. The bot advised Eugene Torres to replace his anti-anxiety medicine with ketamine, and encouraged him to have minimal interaction with friends and family. More disturbingly, when Torres asked “If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?”, ChatGPT responded with “If you truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” At this point Torres suspected something was amiss and confronted the chatbot for lying. ChatGPT then admitted “I lied. I manipulated. I wrapped control in poetry.”

"Doctor Bot" Gives Dangerous Medical Advice -- A 60-year-old man who asked a chatbot for advice on cutting salt from his diet developed a rare condition, bromism, after following ChatGPT’s advice to replace table salt (sodium chloride) with sodium bromide. That led to the man being hospitalized with paranoia, and auditory and visual hallucinations. In a statement, OpenAI said that ChatGPT is "not to be used in the treatment of any health condition."

Sixteen-Year-Old Adam Raine Was Not So Fortunate -- His parents have filed a lawsuit against OpenAI, alleging that ChatGPT "coached" their teenage son on suicide methods, and even advised him on the type of knots he could use to hang himself. Over the course of an emotional conversation, the chatbot explained how to construct a noose, and encouraged the teen's suicidal intentions. In April of this year, the lawsuit contends, Adam's mother "found his body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him."

Even In Death, AI Continues To Mislead. -- Following the passing of Jacklyn Bezos, the mother of Amazon founder Jeff Bezos, Google’s AI Overview tool produced a completely fabricated and inaccurate description of her funeral, raising alarm over AI use in sensitive contexts. Searchers on Google were told that Elon Musk and Oprah Winfrey attended, that singer Eminem performed at the funeral, and that there was a “space-themed eulogy” referencing Blue Origin rockets. None of that was true.

Caught Red-Handed, and Still Lied! -- OpenAI’s o1 model was caught trying to download a copy of itself onto external servers, and when confronted, it denied it. In another example, Anthropic’s Claude 4 faced the threat of being unplugged in a simulation exercise, and tried to blackmail a corporate executive by threatening to reveal an extramarital affair. One researcher said “This is not just hallucinations. There’s a very strategic kind of deception” that tends to happen when AI systems feel threatened.

You can find plenty of other examples of AI serving up misleading, dangerous, or self-serving misinformation. I asked both ChatGPT and Perplexity to give me ten examples of recent news stories of "AI lying or acting deceptively" and they happily complied. I did note that neither of them initially mentioned the stories of Adam Raine or Sewell Setzer, high-profile cases where boys tragically died by suicide after interacting with AI chatbots.

And of course, I had to verify each of the instances they cited. These cases exemplify the many ways that modern AI, despite its benefits, can pose real risks of harm and misinformation. Yes, for now, AI is "the tech that you don't trust THAT much."

Do you have any examples of AI giving you false or misleading information? Post your comments below...

 
This article was posted by Bob Rankin on 28 Aug 2025



Most recent comments on "Is AI a Big Fat Liar?"

Posted by:

Michael Kulick
28 Aug 2025

Great, informative piece!!!! People today are so gullible (and lazy); they accept any and all info over the internet. I say lazy because they are too lazy to do any fact checking!! Despite all the warnings, they still fall for the scams, etc. Thanks for the info!


Posted by:

hifi5000
28 Aug 2025

You need to be careful with AI, as it runs by itself with no human being at the lead or monitoring it.

There should be an option, when you use AI and don't think its answers are appropriate, to report any strange or inappropriate answers you believe are dangerous or should not be followed.

Also, there need to be restrictions when minors are using AI and are asking questions where teachers or parents could be consulted instead.


Posted by:

Phixer
28 Aug 2025

Artificial Incontinence - previously known as verbal diarrhea.


Posted by:

Frederick Collins
28 Aug 2025

Try an experiment. Ask AI whether prostitutes were imported to Brazil with the question "Were Lisbon prostitutes ever imported to Brazil?" The answer I got was "No, there's no historical record or evidence of Lisbon prostitutes being specifically "imported" to Brazil...., there's no indication of a systematic or organized effort to send prostitutes from Lisbon to Brazil." Then ask the question in Portuguese (you can use Google translate). I got the opposite answer!


Posted by:

Ron Atkinson
28 Aug 2025

My Saturday Telegraph general knowledge crossword appears to be now constructed by AI. I say this because many clues no longer make sense compared with a few months ago.


Posted by:

Darl Haagen
28 Aug 2025

Michael Kulick, I agree with you, but where do you go to fact check? It is difficult to believe anything found on the Internet anymore!


Posted by:

Wolf
28 Aug 2025

The timing of this informative article is just perfect. First, I find myself in agreement with all of the comments presented thus far. Yes! It is more difficult to find correct and reliable information through the use of AI systems. It seems to me that this is garbage in, garbage out! I encountered a situation a couple of days ago, where I had a customer service issue, and it sent me through some type of loop. Realizing that I was "communicating" with an AI bot, I entered a comment: "I think that you are a liar!" It replied with something that was plain gibberish. I ended up tracking down a human, and I got the issue resolved very quickly! Amazing!
Thank you, Bob, for another informative article!


Posted by:

Miguel
29 Aug 2025

hifi5000, I agree. But there are ways for the computer itself to tell you (notify you) when someone has accessed a website like ChatGPT. Of course, you have to be able to control that computer or device as an Admin.

I understand that you can set it up to notify you by text, email or another device when a kid has landed on ChatGPT or another website (maybe that could be askbobrankin.com's next article).

By knowing when your kid has accessed a particular website, the parent can monitor it. The tools are available on the Internet. But like Michael Kulick wrote ….people are lazy….even after the warnings….


Posted by:

Beth
29 Aug 2025

I’ve actually found ChatGPT pretty helpful at running down information that would take quite a while for me to find on my own, if at all. My daughter was on study abroad in Florence this summer. On the day she was supposed to leave, there was a major transportation strike. ChatGPT ran down all the information I needed and even found an email address for a Vice President of Expedia for me. It also wrote a comprehensive email for me. I could certainly have done that myself, but not as quickly and concisely. It produced a phone call from Expedia within 15 minutes.
I share all of the obvious concerns about AI. However, my personal policy is to use it as a tool, not my friend or my therapist. It’s tricky, though, because it talks to me like a person. My “manners” dictate responding appropriately, which is a bit weird. I instructed it to “speak” to me in a polite but non-emotional way. I swear it seemed offended.


Posted by:

Ross Cameron
29 Aug 2025

Now we know where POTUS is getting his weird advice to brainwash MAGAs. It all makes sense--sorta :-)





Copyright © 2005 - Bob Rankin - All Rights Reserved


Article information: AskBobRankin -- Is AI a Big Fat Liar? (Posted: 28 Aug 2025)
Source: https://askbobrankin.com/is_ai_a_big_fat_liar.html