The telltale signs of AI-generated content
- Marina Pantcheva
- Oct 7
Updated: Oct 9
WARNING! This article is of the once-seen-can’t-be-unseen kind. Once you know the telltale signs, spotting AI slop becomes unavoidable. Read on at your own risk.

Every now and then, I go over my LinkedIn and Substack feeds with a special mission. A purely linguistic one. I am searching for authentic human writing expressed in clear and meaningful language.
Such writing is becoming increasingly rare. For every authentic email, post or blog produced, there are dozens more that feel bland and formulaic.
Generated by LLMs. Improved by LLMs. Post-edited by LLMs. Flattened by LLMs. Diluted by LLMs.
It’s not that using AI as a writing assistant is wrong. What is wrong is when unedited AI-generated text is published as-is. It is equivalent to publishing raw MT: a signal to the reader that “good enough” is acceptable. That the author couldn’t (or wouldn’t) take the time to bring meaning and life back to their text after AI has rolled over it. Or that the author, perhaps, did not even bother to write the text.
Most readers easily recognize the feel of AI-generated text, but few can pinpoint what exactly is off. That is because we detect AI through an accumulation of subtle cues that form statistically significant patterns. There is no single obvious marker.
Still, it is worth knowing what markers reveal that AI was used to generate or edit a text. In this piece, I list the most common ones with real examples and simple explanations. The examples mostly come from blogs, websites and professional networks. Other text styles, such as academic writing or Wikipedia articles, show overlapping but also different patterns, which I won’t cover here, as they are less relevant to the Localization Industry.
My aim is to help anyone working with AI-assisted writing recognize the weak patterns so that they can fix them.
How to spot AI language
So, how do you spot a piece of content that was written or heavily polished by AI? Here are the giveaway signs.
Identical sentence structure with monotonous rhythm
Human writing mixes short and long sentences, with variation in structure and tone (a property called burstiness). AI-generated text, by contrast, follows a monotonous pattern. Every sentence has about the same length and a predictable cadence:
statement > statement > statement > rhetorical question > answer > statement > statement > statement > rhetorical question > answer…
In short, human writing is naturally inconsistent, while AI writing is unnaturally consistent.
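For the analytically inclined, here is a minimal sketch of how burstiness could be quantified. The function, the naive sentence splitter and the use of the coefficient of variation are my own assumptions for illustration, not an established detector.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.

    Higher values mean a more varied, human-like rhythm; values near
    zero suggest the uniform cadence described above.
    """
    # Naive splitter on ., ! and ? is enough for a rough signal.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

Any threshold separating “naturally inconsistent” from “unnaturally consistent” would need calibration on real human and machine text; the number itself is only a rough signal.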
Fronted focus and dramatic pauses
To break this monotony, newer AI models have been trained to use fronted focus. This means placing a single keyword or short phrase at the start as a rhetorical question, followed by a longer explanatory sentence.
The outcome? A burned-out localization team with no clear roadmap. Words? They shape perception but rarely capture intention. Translation? It's only one piece of a much larger puzzle.
This structure creates a dramatic pause between the one-word setup and the elaboration that follows. However, when overused, it quickly becomes an annoying stylistic crutch — a clear sign of AI-generated text. The result? Predictable rhythm, synthetic tone, and writing that feels more staged than sincere.
(Ok, guilty as charged. The last sentence was AI-generated. On purpose. I just wanted to provide another example. The em dash on the line above is entirely mine, though.)
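And for readers who like to automate their slop-spotting: a crude heuristic for fronted focus is a question of one or two words followed immediately by a declarative sentence. The regex below is my own rough approximation; the two-word cap is arbitrary.

```python
import re

# A fronted-focus candidate: a one- or two-word question followed
# directly by a sentence starting with a capital letter.
FRONTED_FOCUS = re.compile(r"(?:^|[.!?]\s+)(\w+(?:\s\w+)?\?)\s+[A-Z]")

def fronted_focus_hits(text: str) -> list[str]:
    return FRONTED_FOCUS.findall(text)

# fronted_focus_hits("The outcome? A burned-out localization team.")
# -> ["The outcome?"]
```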
The Rule of Three
The “Rule of Three” is a classic writing principle. It says that ideas presented in threes are more engaging, memorable, and impactful than those listed in other quantities. Well, you just saw how it works.
AI knows this rule a bit too well and applies it at the level of words, phrases, and even sentences. Over and over (and over) again.
Here are a few more examples:
A noun trio: Imagine a system that is learning from [your feedback] (1), [your tone] (2), [your choices] (3) and using that to make every next job smoother.
A trio of verbal phrases: […] an intelligent assistant that [lightens the load] (1), [respects the translator’s voice] (2), and [improves continuously] (3) based on real-world feedback
Even a trio of clauses: […] observing how [raw data becomes metrics] (1), [metrics become meaning] (2), and [meaning becomes action] (3).
AI is so good at generating triplets that it can go fractal, embedding a new trio inside the third element of a larger trio.
It is not just about innovation. It is about [narrative] (1), [cultural adaptation] (2) and [the uneasy intersection of [AI] (i), [governance] (ii) and [perception] (iii)] (3).
Of course, sometimes things and ideas naturally come in threes. But LLMs break down every key point and idea into a trio, which makes the text feel artificial and repetitive.
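If you want to count triads mechanically, the “X, Y(,) and Z” shape is easy to approximate with a regex. The sketch below is deliberately loose and my own invention; it also fires on perfectly legitimate lists, so treat the count as a rough signal only.

```python
import re

# Three coordinated chunks of one to four words each: "X, Y(,) and Z".
TRIAD = re.compile(
    r"[\w'’-]+(?: [\w'’-]+){0,3}, "   # first item
    r"[\w'’-]+(?: [\w'’-]+){0,3},? "  # second item, optional Oxford comma
    r"and [\w'’-]+"                   # third item
)

def triad_count(text: str) -> int:
    return len(TRIAD.findall(text))

# triad_count("It lightens the load, respects your voice, and improves.")
# -> 1
```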
Contrastive focus: It’s not (about) X, it’s (about) Y
The last example introduces another favorite AI trick: contrastive focus. It’s ubiquitous, sometimes appearing as often as once every 200 words (or maybe I’m just reading the wrong blogs?).
Typical examples:
It’s not about working harder, it’s about working smarter.
It’s not a rebrand. It’s a profound shift in value.
Translation isn't just about words — it's about connection, emotion, and understanding. (The Rule of Three in action here, too.)
Linguists aren’t just being asked to post-edit AI output at a fraction of the word rate — they’re being systematically undervalued.
Quality isn’t just seen. It’s felt in every phrase, every detail, every interaction. (The Rule of Three applied, again.)
The issue with the last two examples isn’t just about contrastive focus being unnecessary, it’s about there being no real contrast in the first place.
If you got lost reading the last sentence, don’t worry: you were supposed to. Contrastive focus and negating a negation (‘not’ + ‘un-’) are a deadly cocktail for Broca’s area. Hard to process and even harder to justify. Let’s paraphrase.
The problem with the last two examples is that the two parts of the contrastive focus construction (known as “frames”) aren't in real opposition, unlike the clear contrast between “working harder” versus “working smarter.”
Asking linguists to correct AI output for low pay and systematically undervaluing them are pretty much the same thing. The two are definitely not in opposition. Similarly, saying that quality is seen and that quality is felt involves no real contrast, as “seeing” and “feeling” are part of the same perceptual experience, not competing sensory inputs.
It’s not about saying what it’s not about. It’s about just saying it.
Contrastive focus is a powerful rhetorical device. Its purpose is to reframe the narrative. That is, to shift the reader’s attention from one idea [Frame 1] to another idea [Frame 2] that stands in contrast to [Frame 1].
For example, in the introduction I (not AI!) wrote:
It’s not that using AI as a writing assistant is wrong. What is wrong is when unedited AI-generated text is published as-is.
Here, I deliberately used contrastive focus to move the conversation away from the blanket claim that “All AI use is bad”, to a more specific point: “Publishing unedited AI-generated text is bad.” In this way, I reframed a general disapproval into a specific critique.
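Mechanically, the “It’s not X, it’s Y” shape is also easy to flag. The regex and the 100-character window below are assumptions I made for this sketch, and it will flag legitimate uses too, including mine above. That is fine: hits are candidates for review, not verdicts.

```python
import re

# Frame 1 ("it's not ..." / "isn't just ...") followed within a short
# window by frame 2 ("it's ..." / "they're ...").
CONTRAST = re.compile(
    r"(?:it['’]?s not|isn['’]?t just|aren['’]?t just)"
    r".{0,100}?"
    r"\b(?:it['’]?s|they['’]?re)\b",
    re.IGNORECASE | re.DOTALL,
)

def contrastive_focus_count(text: str) -> int:
    return len(CONTRAST.findall(text))

# contrastive_focus_count("It's not a rebrand. It's a profound shift.")
# -> 1
```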
Comparative framing: Less like X, and more like Y
A close cousin of the contrastive focus construction is so-called comparative framing. It takes the form “less like X and more like Y”.
Here, X is the expected or conventional interpretation. Y is the reframed interpretation the author wants the reader to consider: an unconventional or a more nuanced one.
Examples:
Such new technology feels less like a breakthrough and more like a case study.
This feels less like US leading digital governance and more like a localized narrative experiment.
Contextual framing: As X happens, Y does Z
Speaking of framing, there’s another type AI loves: contextual framing. It begins with a clause introduced by “As”, which sets the context or circumstances, followed by a main clause that explains what happens under those circumstances.
As AI continues to reshape the world at breakneck speed, this week's development demonstrates impressive technological leaps and thoughtful consideration of the guardrails needed to guide AI's evolution.
As virtual assistants handle more of our daily tasks, human judgment becomes more valuable than ever.
“Because…” fragments
Another quick way to add drama and weight is with a “Because…” fragment.
Normally, "because" introduces a dependent clause that explains the reason for something happening in the main clause (e.g. “I took my umbrella because it was raining”). But in AI writing "Because..." often appears as a sentence fragment to emphasize a point rather than explain a cause.
Here are some examples (the first one shown with the preceding sentence for context):
This isn’t about who is better at content. It is about apparent algorithmic biases and what happens when visibility becomes selective. Because visibility equals opportunity.
Because efficiency without purpose is just speed in the wrong direction.
Because at the end of the day, the question is …
Because it’s easy to teach a machine language, but much harder to teach it empathy, memory, history. (And the Rule of Three is at work here, too.)
So, when you encounter a “Because…” fragment, don’t look for a cause-and-effect link. It’s simply a rhetorical device for highlighting insight.
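Both of the last two patterns reduce to simple sentence-initial cues, so one sketch can flag them together. The patterns below are my own rough approximations, and they overflag heavily, since “As …” and “Because …” openers are also perfectly normal English.

```python
import re

# Sentence-initial cues for contextual framing and "Because..." fragments.
CONTEXTUAL_AS = re.compile(r"(?:^|[.!?]\s+)As \w[^.!?]*?,")
BECAUSE_FRAGMENT = re.compile(r"(?:^|[.!?]\s+)Because\b")

def framing_counts(text: str) -> dict[str, int]:
    return {
        "contextual framing": len(CONTEXTUAL_AS.findall(text)),
        '"Because" fragment': len(BECAUSE_FRAGMENT.findall(text)),
    }
```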
False ranges
A false range uses the construction “from … to …” to suggest a continuum, but the from-point and the to-point do not lie on the same axis of meaning.
Take this example:
From legal frameworks to sustainability reports, from PR campaigns to guest experiences, we shaped content that speaks the language of exclusivity, precision, and elegance. (And yes, the Rule of Three appears here, too.)
At first glance, this sounds fine. But on closer inspection, one starts wondering what lies between legal frameworks and sustainability reports. What connects PR campaigns and guest experiences?
The structure suggests some connection, but the endpoints are unrelated. In false ranges, the two endpoints do not define a meaningful continuum. So, the span between them is hollow.
Now contrast it with a true range:
From ideation to execution, we offer one platform for everything you need in marketing.
This works because the endpoints ideation and execution represent two ends of a real process. The activities between them (planning, drafting, launching) fall on a single axis of progression.
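Telling a true range from a false one requires judging meaning, which no regex can do. The best a script can manage is to surface the “from X to Y” spans for a human to assess. In the sketch below, the four-word cap on each endpoint is my own arbitrary choice.

```python
import re

# Surfaces "from X to Y" spans; a human must judge whether X and Y
# really define a single axis of meaning.
FROM_TO = re.compile(
    r"\bfrom ((?:[\w-]+ ){1,4}?)to ((?:[\w-]+ ?){1,4})",
    re.IGNORECASE,
)

def range_candidates(text: str) -> list[tuple[str, str]]:
    return [(a.strip(), b.strip()) for a, b in FROM_TO.findall(text)]

# range_candidates("From ideation to execution, we offer one platform.")
# -> [("ideation", "execution")]
```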
Abundant and strained metaphors
Another common issue in AI-generated text is the overuse of metaphors, especially those that are overly complex. AI-generated metaphors often feel so contrived that they burden the reader, who ends up navigating through a maze of broken meanings only to arrive in a hall of distortions where understanding is blinded by the glare of clever-sounding nonsense.
You see what I mean. Metaphors, while extremely powerful rhetorical devices, can quickly backfire when overused or carelessly applied.
Consider these examples:
Our company doesn’t just add dots to the map; it builds bridges across them.
As AI graduates from buzzword to bedrock, we're seeing a profound shift in responsibilities.
It’s like comparing the fuel efficiency of a car that conks out halfway, while ignoring the breakdown truck that has to tow you to your destination.
In short, AI-generated text typically has abundant metaphors, and those metaphors are often too strained.
Other signs of AI-generated language
So far, I have focused on the structural signs of AI-generated text. But there are other dimensions, too. I list them briefly here and leave the more detailed discussion for the second article in this series.
Semantic: Vague or repetitive ideas, incoherence and poor connections between ideas, general lack of logical flow
Typographic: Overuse of Sentence Case, excessive bullet-pointing, boldfacing important phrases within running text
Lexical: Inflated clichés and buzz-phrases, such as navigate the complex landscape, unlock/unleash limitless potential, embrace innovation, the rapid advancement of technology
Each of these dimensions deserves a deeper analysis, but for now, just being aware of them is a good first step.
The cumulative is the telltale
All of the rhetorical devices described above are valid stylistic tools. After all, AI learned them from texts written by humans. The presence of a metaphor or a “Because…” fragment on its own isn’t evidence that AI was involved. (And even if AI did assist in the writing, that alone isn’t problematic, provided the author then diligently edited the text.)
The problem is that AI overuses them. It may insert a metaphor into nearly every paragraph, apply the Rule of Three in every third sentence, and stack multiple sentences using the exact same structure in a row.
Take, as an example, the following text, where every sentence uses contrastive focus:
AI adoption comes slowly. Not as defeat, but as clarity. AI is powerful, but not perfect. Machines deliver output, but only humans bring meaning, nuance, and connection. The future isn’t “Us or AI.” It’s “Us with AI.” AI adoption is not the end. It’s the beginning.
Thus, AI-generated language reveals itself through the unnatural accumulation of several rhetorical devices, all repeated in a short space and often without a clear communicative need.
Here are more examples where the pattern becomes obvious:
Because when you look beneath the surface, AI is not just about innovation. It’s about narrative, cultural adaptation, and the uneasy intersection of technology, governance, and perception.
Telltale signs: “Because” fragment, contrastive focus, Rule of Three (used twice)
The point isn’t mastery in one sitting. It’s about lowering barriers, opening doors, and encouraging ‘fail fast, fail often’ dreamers to try, test, and create.
Telltale signs: Contrastive focus, Rule of Three (used twice), metaphor saturation
As AI reshapes how Americans work, shop, and communicate, one generation isn’t just keeping pace — they’re pushing the limit.
Telltale signs: Contextual framing, contrastive focus, Rule of Three (once)
Again, these rhetorical tools are not inherently bad. When used well, they enhance the writing. But when applied excessively and without nuance, they lead to formulaic and hollow language.
So, the markers of AI-generated language cannot be found in any single structure or phrase. What gives AI away is the repetition and overuse of the same words and patterns.
In short: the clue is in the cumulation.
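To make the cumulation idea concrete, here is a toy scorer that counts several of the markers from this article and raises a flag only when multiple markers fire at high density. Every pattern and threshold in it is an invented illustration; a real detector would need calibration on labeled human and AI text.

```python
import re

# Loose, invented approximations of the markers discussed above.
MARKERS = {
    "fronted focus": re.compile(r"(?:^|[.!?]\s+)\w+(?:\s\w+)?\?\s+[A-Z]"),
    "rule of three": re.compile(
        r"[\w'’-]+(?: [\w'’-]+){0,3}, [\w'’-]+(?: [\w'’-]+){0,3},? and [\w'’-]+"),
    "contrastive focus": re.compile(
        r"(?:it['’]?s not|isn['’]?t just).{0,100}?(?:it['’]?s|they['’]?re)",
        re.IGNORECASE | re.DOTALL),
    "comparative framing": re.compile(r"less like .{1,80}?more like",
                                      re.IGNORECASE | re.DOTALL),
    '"Because" fragment': re.compile(r"[.!?]\s+Because\b"),
    "false range": re.compile(r"\bfrom (?:[\w-]+ ){1,4}?to [\w-]+", re.IGNORECASE),
}

def cumulation_report(text: str) -> dict[str, float]:
    """Rate of each marker per 1,000 words."""
    n_words = max(len(text.split()), 1)
    return {name: 1000 * len(rx.findall(text)) / n_words
            for name, rx in MARKERS.items()}

def looks_synthetic(text: str, rate: float = 2.0, min_markers: int = 3) -> bool:
    """Flag a text only when several markers fire at once."""
    return sum(r >= rate for r in cumulation_report(text).values()) >= min_markers
```

No single rate means much on its own. The flag goes up only when several patterns co-occur in a short space, which mirrors how human readers sense AI text.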
Models change … and so do their favorite tricks
This article reflects the typical writing patterns of AI models as of late 2025. But these patterns change. For example, in early 2025, AI-generated text was full of the contrastive construction “While X happens, Y happens.” Since then, its usage seems to have declined, though it is still common enough to serve as a valid clue.
This is why articles like this one are, by nature, only temporarily valid. The subject they analyze is constantly evolving. The specific linguistic quirks of AI-generated text will change as models get retrained and new models appear. But what will remain constant is the difference between how humans and machines produce language. This difference will continue to create reliable telltale signs for identifying AI-generated content.
Conclusion & disclaimer
Every example in this article has been inspired and adapted from real published content. The goal is not to accuse the authors of producing AI slop. In fact, many of these example sentences may have been written by the human authors from scratch. And this is exactly the concern.
Today’s writers are increasingly subjected to the so-called seep-in effect: after prolonged exposure to AI-generated content, they begin to mimic its vocabulary, syntax and style. Often without even realizing it.
Over time, even skilled writers may start sounding artificial. Worse yet, writers are being stripped of once-effective tools, like contrastive focus, bolded emphasis, and the good old em dash. All these are now so overused in AI-generated text that humans risk sounding like AI if they use them.
How do we fix this?
That’s a topic for another article.