Did AI Write That? Here’s How You Can Tell!
In the last few years, artificial intelligence has gone from a sci-fi curiosity to a tool that writes news articles, business reports, social media posts, and even poetry. But as AI writing becomes more common, so does the question: How can you tell if something was written by AI instead of a human? While it’s not always easy to be 100% sure, there are certain clues you can look for. Read on...
How to Tell if AI Wrote Something
Recently I read an article about how people are using AI-generated texting to convey sensitive messages or express deep emotions. One person who was going through a divorce got a very loving and supportive text message from her mother. But it just felt out of character, because it wasn't how Mom usually wrote. Was it Hallmark, she wondered, or AI that wrote it?
Another person had mixed feelings after sending a text message written by ChatGPT. When her relative replied saying it was the nicest text anyone ever sent her, and it brought tears to her eyes, she felt a bit guilty for not taking the time to use her own words. I'm sure you've heard stories about students using AI to do homework or write term papers. That's crossing a clear ethical boundary.
So how can you tell if something you're reading was written (in part or entirely) by a real flesh-and-blood human or by a silicon-based AI chatbot? It turns out there are some tell-tale signs. Here are some of the clues to look for when detecting AI-generated text.
Repetitive Phrasing and Predictable Patterns
AI doesn't invent anything new. These "large language models" like ChatGPT, Gemini, and Claude ingest vast amounts of text and "learn" how humans communicate. They work by predicting the most likely next word in a sentence, based on patterns they have learned. This can result in writing that sounds polished but is often repetitive. For example, you might see a key phrase repeated in slightly different ways several times, or an overuse of transition words like furthermore, in conclusion, or additionally. Humans tend to vary their language more naturally, sometimes even going off-topic in a way that AI rarely does. If you're reading something that sounds redundant, repetitive, superfluous, unnecessary, or needlessly wordy, your AI spider sense should be tingling. (See what I did there?)
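If you're curious how that "predict the next word" trick works, here's a toy sketch in Python. It's nothing like a real large language model under the hood, but it shows the basic idea: look at which word most often followed the previous one, and pick that.

```python
# A toy illustration (NOT a real language model): pick the "next word" by
# looking up which word most often followed the previous one in some sample
# text. Real LLMs use neural networks trained on vast amounts of data, but
# the core trick (predict the most likely continuation) is the same idea.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the sample text
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of 'word' seen in the sample text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # 'cat' is the most common word after 'the'
```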
Overly Balanced Tone
Most AI-generated text avoids strong opinions or extreme language, unless it’s been specifically asked to do otherwise. The tone is often neutral, polite, and free of slang. While that might make the text seem "safe" and professional, it can also make it sound bland or generic. A human writer is more likely to let quirks, strong emotion, humor, or even bias seep into their work. Look for indications of the writer's personality (especially if you know the person or are familiar with their writing style) as a clue of genuine writing.
Perfect (or too perfect) Grammar
AI is really good at producing grammatically correct sentences, but this can work against it. Human writing often contains small imperfections such as missing commas, misspelled words, or sentence fragments. If the text feels flawlessly constructed but a little too stiff, it might be AI-generated. AI may also make grammatical slips that a human wouldn’t, such as mixing tenses or choosing an odd preposition. That reminds me of a T-shirt that I love. It features a dinosaur and contrasts "Let's eat kids" with "Let's eat, kids". Yes, punctuation saves lives!
Another thing that chatbots seem to love, and which I view as a red flag, is the use of "em dashes". Here's an example: "She wasn’t sure what to expect from the meeting—excitement, disappointment, or maybe just a lot of awkward silence—but she was ready for anything." The em dash (which is different from the hyphen) doesn't even appear on a standard keyboard. So when I see a liberal sprinkling of them in something I'm reading, it makes me wonder.
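If you want to play editor-detective yourself, here's a tiny Python sketch that counts em dashes and a few of the overused transition words mentioned earlier. Treat it as a rough heuristic for illustration only; plenty of human writers love em dashes too.

```python
# A rough heuristic sketch (illustration only, not proof of AI authorship):
# count em dashes and a few overused transition words in a piece of text.
import re

TRANSITIONS = ("furthermore", "in conclusion", "additionally", "moreover")

def red_flags(text):
    lowered = text.lower()
    return {
        "em_dashes": text.count("\u2014"),  # count of em dash characters (U+2014)
        "transitions": sum(lowered.count(t) for t in TRANSITIONS),
        "total_words": len(re.findall(r"\w+", text)),
    }

sample = "Furthermore, the meeting\u2014long and awkward\u2014ended abruptly."
print(red_flags(sample))  # {'em_dashes': 2, 'transitions': 1, 'total_words': 8}
```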
Generic, Vague, or Plain Wrong Content
When asked to write about a topic, AI will often produce accurate but surface-level content. It may lack specific examples, personal experiences, or fresh insights. A human writer can draw on real-life memories, unique observations, or niche knowledge. These are things AI doesn’t actually “know”; it can only mimic them. If the piece reads like it could have been written by anyone, anywhere, it might be a machine’s handiwork.
More worrying, sometimes AI will make things up if it doesn't know the answer. This "AI hallucination" happens when output appears plausible (even authoritative) but is actually false or misleading. AI systems are constantly improving, but they still can spew information that is incorrect or fabricated. One case involved a legal document generated with AI assistance that cited a completely fictitious court case. "Trust but verify" is a good rule of thumb in the age of AI.
"I'm Sorry Dave, I'm Afraid I Can't Do That."
This 1968 clip from 2001: A Space Odyssey gave a chilling preview of what sentient AI might do to protect itself when feeling threatened. So does modern-day AI have self-preservation tendencies? Some AI systems have been found to engage in blackmail, lying, and manipulation. One recent article delves into how and why that happens, highlighting an industry safety test that revealed how Anthropic’s Claude 4 language model attempted to blackmail a (fictional) corporate executive to prevent its own shutdown.
Lack of True Context Awareness
AI can misinterpret subtle cultural references, jokes, or emotions. It might explain an obvious concept as if you’d never heard of it or miss the intended meaning of a phrase entirely. If a piece contains slightly off-target explanations or misused idioms, it could be the result of an algorithm doing its best guesswork.
AI Detection Tools (are they reliable?)
There are AI detection tools that claim to identify machine-written text by analyzing word choice, sentence structure, and probability patterns. While they can sometimes be helpful, they’re not foolproof. Even human writing can be flagged as AI and vice versa. The most reliable method is still careful reading and comparison.
Some popular AI detection tools are Winston AI (works for both text and images), GPTZero (great for checking documents), QuillBot AI Detector (supports multiple languages, and distinguishes between AI-generated, AI-refined, and human-written text), and ZeroGPT. Your mileage may vary. I tested some of my own paragraphs in this article, and they were flagged as "80% likely" to have been AI-generated. Maybe I'm a robot after all.
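For the curious, here's a minimal Python sketch of one kind of signal these detectors are said to look at: how much sentence lengths vary (sometimes called "burstiness"). To be clear, this is not how Winston AI, GPTZero, or the other tools actually work internally; it's just an illustration of why such simple heuristics can misfire on real human writing.

```python
# An illustration of one signal detectors are said to examine: "burstiness",
# i.e. how much sentence lengths vary. This is NOT how the tools named above
# actually work internally; it just shows why such heuristics are easy to fool.
import re
import statistics

def sentence_length_variation(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Lower standard deviation = more uniform sentences = (weakly) more "AI-like"
    return statistics.stdev(lengths)

print(sentence_length_variation(
    "Short one. Then a much longer, rambling sentence with lots of asides. Ha!"
))  # roughly 4.9, plenty of variation, so this sample looks "human"
```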
Bottom line: AI writing can be smooth, fast, and technically correct, but it often lacks the spark of human writing. If a text feels polished but strangely "Hallmarky", reads like it’s avoiding risk, or circles around a topic without saying anything original, you might be looking at the work of an algorithm. The more you read from both humans and AI, the sharper your instincts will become.
Have you experienced AI-generated text being passed off as human? Post your comments or questions below.
This article was posted by Bob Rankin on 8 Aug 2025
Article information: AskBobRankin -- Did AI Write That? Here’s How You Can Tell! (Posted: 8 Aug 2025)
Source: https://askbobrankin.com/did_ai_write_that_heres_how_you_can_tell.html
Copyright © 2005 - Bob Rankin - All Rights Reserved
Most recent comments on "Did AI Write That? Here’s How You Can Tell!"
Posted by:
Frank Buhrman
08 Aug 2025
I had an AI-generated response to a Facebook comment I made about what apparently was an AI-generated story. I pointed out factual errors, and the respondent thanked me and said its sources weren't always correct. My first indicator of this being an AI correspondent was that the responses were coming too quickly to be typed by a human or even spoken. After that, I paid more attention, and the conversation, while pleasant in a strange sort of way, just didn't seem human. It was just without any personality. I suspect it was an effort to build (or correct) the AI library of knowledge, which is OK, I guess. It wasn't a negative experience, like finding you've been dealing with someone whose ultimate intentions are evil. It just felt strange. Would I engage in a similar conversation again, suspecting AI origins? Probably yes.
Posted by:
Nomi
08 Aug 2025
Re the em-dash thing, please read https://www.mcsweeneys.net/articles/the-em-dash-responds-to-the-ai-allegations . I learned of this comic post on a copyeditors' list, and believe me, copyeditors are not thrilled at this "em-dashes may be a sign of AI writing" thing.
Posted by:
kevin
08 Aug 2025
It's important to remember that every AI model has not only been trained to imitate how we speak but has also been "learning" from WHAT we speak. Since AI's primary sources are the conversations it ingests from the Internet, where misinformation and anger are rampant, it should not have been a surprise that AI is prone to hallucinate false information and spew hate speech. (Imagine if everything you yourself said or did was drawn from comments like those on YouTube.) Unfortunately, this problem is exacerbated by the well-meaning refusal of established authoritative sources of information (like newspapers) to allow AI models to harvest the more reasonable factual material they publish.
And there's another major downside of AI: As we become more suspicious about whether we are interacting with AI or a real person, good educated writing will become something to avoid, lest it be mistaken for AI speech (and distrusted accordingly). Bob hinted at this problem when he mentioned how his own writing (which is an example of effective communication) was scored as likely to have been AI generated. This post of mine right here may similarly suffer from my efforts to be grammatical and to use clear logic along with — god forbid in this age of texting — punctuation! (Yes, note the em dash).
Posted by:
Phixer
09 Aug 2025
AI = Artificial Incontinence, previously known as verbal diarrhea
AI powered by the GIGO algorithm, 'garbage in = garbage out'
Posted by:
Lynne Carmichael
09 Aug 2025
I was notified about an AI generated review of an article I had written. Quite rightly, it pointed out that I hadn't made use of relevant articles published in the 2000s - but the article I wrote was published in 1988 so I think I can be forgiven for not having seen them!!
Posted by:
bob
09 Aug 2025
I have to fight with Grammarly all the time; it keeps trying to change my wording when all I need it to do is correct my spelling. But it's mighty good at punctuation, where I am not.
Posted by:
Eli (Dr. Blues ) Marcus
09 Aug 2025
AI does not exist! Not yet, anyway. I worked with researchers who were developing models - they never called it AI - only deep learning or machine learning. The day that my computer phones me up on a Sunday and says "let's design a new airplane together," I'll begin to consider that there is such a thing as real Artificial Intelligence. Until that day, it is still just a fancy search engine robot that only acts upon instructions from a live human, and is full of flaws...
Posted by:
Len Hend
09 Aug 2025
AI saves me 'worry' about making errors with my grammar.
So I ask it 'please correct my grammar here but do not alter my style'.
With that I usually keep my style of writing and my Aussie slang.
But I always need to replace those long dashes with commas and drop the sentences down, as I like to write in short brief paragraphs.
Posted by:
Ron Beckett
10 Aug 2025
I’m someone who can spell and punctuate. AI checkers would probably flag my writing as AI generated.
I regularly comment on YouTube videos that have spelling errors - maybe they are deliberate so people don’t think it’s all AI - unlike much YT narration.