
AI Writing Tics That Kill Engagement: Data-Driven Analysis of 1,000+ Pages in 2026

New research analyzing 1,000+ pages reveals which AI writing habits actually hurt reader engagement—and which ones don't matter at all. We analyzed 10 domains and thousands of content pages to identify exactly which "AI tells" reduce engagement. Some surprises inside.
Published on March 4, 2026

The AI Writing Problem That Isn't What You Think

Scroll through any marketing LinkedIn feed and you'll see marketers confidently declaring that certain phrases signal AI-written content. "In this article..." Bad. Em dashes—terrible. "Not only... but also"—unforgivable.

The problem? Most of these declarations confuse stylistic opinion with actual performance data. What counts as "bad writing" is subjective. But what reduces reader engagement? That's measurable.

This analysis examined 1,000+ content marketing pages across 10 different domains to identify which AI writing patterns actually correlate with lower engagement—and which ones are just stylistic hot takes with no real impact on performance.

How We Built This AI Writing Analysis Study

To separate fact from opinion, we built a comprehensive dataset of AI writing patterns and their correlation with reader engagement.

Our dataset included:

• 10 domains of varying sizes and monthly traffic
• Industries: tech, ecommerce, healthcare, education, analytics, and more
• 1,000+ content marketing pages
• Content types: fully human-written, human + AI collaborative, and fully AI-generated

We standardized the data by:

• Converting all writing tics to occurrences per 1,000 words (longer articles naturally contain more of everything)
• Excluding pages under 500 words (too short to show stylistic patterns)
• Using engagement rate as the primary metric (GA4 counts a session as "engaged" if it lasts 10+ seconds, includes 2+ page views, or records a conversion event)
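The normalization step above can be sketched in a few lines of Python. This is a minimal illustration with made-up page records and hypothetical field names ("words", "tics"), not the study's actual pipeline:

```python
def tics_per_1000_words(tic_count: int, word_count: int) -> float:
    """Normalize a raw tic count to occurrences per 1,000 words."""
    return tic_count / word_count * 1000

# Hypothetical page records for illustration only.
pages = [
    {"url": "/post-a", "words": 1200, "tics": 9},
    {"url": "/post-b", "words": 450, "tics": 2},   # under 500 words: excluded
    {"url": "/post-c", "words": 2400, "tics": 30},
]

# Exclude pages too short to show stylistic patterns, then normalize.
eligible = [p for p in pages if p["words"] >= 500]
for p in eligible:
    p["tics_per_1k"] = tics_per_1000_words(p["tics"], p["words"])
```

Normalizing per 1,000 words matters because a 3,000-word guide will naturally contain more of every pattern than a 600-word post; comparing raw counts would just measure article length.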

Why engagement rate matters:
Engagement rate captures the reader's first real decision: "Does this content feel worth reading?" It's long enough for someone to skim an introduction, notice awkward patterns, and decide whether to continue. It's the real-world equivalent of first-impression quality.

The AI Writing Tics We Analyzed

We tracked five specific patterns commonly cited as "AI tells":

1. "Not only... but also" constructions
Pattern: "Not only does X do Y, but it also does Z."
Frequency in dataset: Very high in AI-generated content

2. Sentence starts with "then," "this," or "that"
Pattern: "Then you should..." "This means..." "That shows..."
Frequency: Extremely common in AI-generated content

3. Introductory filler phrases
Pattern: "In this article," "We'll explore," "Let's take a look"
Frequency: Present in 80%+ of AI-generated content

4. "Conclusion" starters
Pattern: "In conclusion," "To conclude," "Finally"
Frequency: Common in structured AI-generated content

5. Em dashes
Pattern: Using em dashes for punctuation and emphasis
Frequency: Highest prevalence in entire dataset
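As a rough illustration, patterns like these can be counted with simple regular expressions. These matchers are our own approximations for the five patterns above, not the detectors the study actually used:

```python
import re

# Approximate regex detectors for the five "AI tell" patterns.
TIC_PATTERNS = {
    "not_only_but_also": re.compile(
        r"\bnot only\b.{0,120}?\bbut (?:it )?also\b", re.I | re.S
    ),
    # Sentences starting with "Then", "This", or "That".
    "weak_sentence_start": re.compile(r"(?:^|[.!?]\s+)(?:Then|This|That)\b"),
    "intro_filler": re.compile(
        r"\b(?:in this article|we'll explore|let's take a look)\b", re.I
    ),
    "conclusion_starter": re.compile(r"\b(?:in conclusion|to conclude)\b", re.I),
    "em_dash": re.compile(r"—"),
}

def count_tics(text: str) -> dict:
    """Count occurrences of each pattern in a page's text."""
    return {name: len(rx.findall(text)) for name, rx in TIC_PATTERNS.items()}
```

For example, `count_tics("Not only is it fast, but it also scales. This means less work.")` would flag one "not only... but also" construction and one weak sentence start.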

The Biggest Surprise: Em Dashes Slightly Improve Engagement

When we analyzed the raw data, em dashes dominated every analysis. They're everywhere in the dataset. But here's the shocker: em dashes correlated positively with engagement rate.

Key findings on em dashes:

• Despite being labeled "AI artifacts," em dashes showed slight positive correlation with engagement
• This challenges the widespread assumption that em dashes = bad writing
• Likely explanation: Writers who use em dashes tend to write more nuanced, explanatory sentences
• These longer, more thoughtful sentences appear in higher-quality content that readers actually engage with

The lesson: Em dashes aren't causing higher engagement; correlation isn't causation. But they're not the content killer everyone claims, either.

The Phrase That Actually Hurt Engagement: "Not Only... But Also"

Among all the AI tics analyzed, "not only... but also" constructions showed the strongest negative correlation with engagement.

What we found:

• Negative correlation: -0.15 (statistically significant)
• When used occasionally, these constructions can add emphasis
• When used repeatedly, they trigger reader bounces
• One page in our dataset used "not only" and "but also" 12 times in a single post
• Readers notice the repetition immediately and it reads as mechanical

Why this pattern fails:
The construction feels forced and artificial when repeated. "Not only does this technology improve speed, but it also reduces costs. Not only is it affordable, but it also increases efficiency. Not only does it work better, but it also saves time." After 3-4 repetitions, readers start to notice the pattern and the writing feels robotic.

Starting Sections With "Conclusion" Is a Red Flag

Another clear negative signal in the dataset: headers that begin with "Conclusion."

The data:

• Negative correlation: -0.118 (the second strongest in the study)
• This is specifically about explicit "Conclusion" headers before CTAs
• Typically appears at the end of pages
• Suggests readers scroll to the bottom quickly and bounce

Why this matters:
Explicit conclusion sections telegraph the end of content. Readers who scroll directly to the "Conclusion" section are already planning to leave. The header itself signals "we're about to wrap this up." It's a stylistic tell that content is structured mechanically rather than providing continuous value.

The Tics That Don't Actually Matter

Many commonly cited AI writing patterns showed no meaningful correlation with engagement:

"In this article," "Let's explore," "In today's digital landscape"
• No significant correlation with engagement
• Readers don't seem to punish these phrases
• They're common enough in human writing too

Sentence starters like "This" and "Then"
• Minimal correlation with engagement
• Present in both high- and low-performing content

Em dashes (as discussed)
• Slight positive correlation
• Not the content killer they're made out to be

The Control That Changed Everything: Shakespeare

To verify whether our dataset was capturing actual "AI tics" or just common English patterns, we ran the same analysis on two control samples:

A published novel from 2021 (definitely human-written):
Scored 6.9 AI tics per 1,000 words

Shakespeare's "Hamlet":
Scored 11.4 AI tics per 1,000 words

The conclusion: "AI tics" are just common English prose patterns. Shakespeare scores higher on "AI markers" than most AI-generated blog posts. If Shakespeare had access to ChatGPT, people would accuse him of being too reliant on AI.

What This Means for Your Content Strategy

1. Don't Panic About AI Stylistic Patterns

Most phrases people criticize as "AI tells" don't correlate with reduced engagement. "In this article" isn't hurting your content. "Let's take a look" isn't sabotaging your strategy. Don't rewrite content based on someone's Twitter thread declaring certain phrases "AI markers."

2. Watch Out for Repetitive Constructions

The real problem isn't individual phrases. It's overuse of specific patterns. "Not only... but also" used once? Fine. Used 12 times? Readers will notice and bounce. Similarly, "then," "this," and "that" as sentence starters are fine individually but become obvious when repeated.

3. Avoid Explicit "Conclusion" Sections

The strongest negative signal we found was headers explicitly labeled "Conclusion." Instead:

• Blend conclusions into analysis
• Use subtle transitions
• Add new value before wrapping up
• Don't signal the end with explicit headers

4. Use Em Dashes Appropriately

Em dashes correlate with better engagement. If your writing style calls for them, use them. They tend to appear in more nuanced, thoughtful sentences—which is exactly what performs well.

The Bigger Picture: AI Writing in 2026

The obsession with "AI tics" misses the real point. Good writing is good writing, regardless of whether AI assisted in creating it. Bad writing is bad writing, AI or human.

What actually matters:

• Value to the reader (does this answer their question?)
• Clarity (can they quickly understand the point?)
• Originality (does this offer something new?)
• Structure (can they find what they need?)
• Honesty (is this authentic and trustworthy?)

These factors matter far more than whether you used "In this article" in the opening.

Questions About AI Writing and Engagement

Q: Should I remove all em dashes from my content?
A: No. According to this data, em dashes correlate slightly positively with engagement. Use them if they fit your style.

Q: Are certain AI writing patterns actually bad?
A: Only two patterns showed a clear negative correlation with engagement: repeated "not only... but also" constructions (overused, they read as mechanical) and explicit "Conclusion" headers (they signal the end of the content). Everything else is mostly style preference.

Q: Does Google penalize content for using AI?
A: No. Google doesn't have a penalty for AI-written content. Google cares about usefulness, relevance, and authority. The writing style doesn't matter.

Q: What's the real difference between AI-written and human-written content?
A: The best AI-assisted content is edited and customized heavily. The best human content is original and well-researched. What matters is whether the final content is valuable and honest, not who/what created the first draft.

The Real Lesson: Write for Readers, Not Search Engines (Or Twitter Critics)

Content marketers have spent the last year obsessing over whether their content "looks AI-generated." This research suggests that's mostly theater. Most phrases people point to as "AI tells" don't meaningfully impact reader engagement.

What does impact engagement: usefulness, clarity, originality, and honesty. These are evergreen principles that have nothing to do with whether AI assisted in the writing process.

Write content that serves your reader. Avoid lazy, repetitive patterns (whether AI or human-generated). And stop panicking every time someone on the internet declares that em dashes are the death of content marketing.

Because according to the data, em dashes are actually fine.