The study that's making content teams nervous

Semrush published new research at the start of April 2026 that every marketing team running an AI content operation needs to read. They analysed 42,000 blog pages across 20,000 keywords and used an AI detector to classify each page as human-written, AI-assisted, or fully AI-generated.

The headline number: human-written content appeared in the number-one position 80% of the time. Purely AI-generated content appeared there 9% of the time.

That's an 8x gap at position one. And the gap between human and AI content widens as you move up the rankings.

At a glance: 80% human content at #1 · 9% AI content at #1 · an 8x gap at position one · 42,000 blog posts studied.

The part most people aren't talking about

The same study surveyed 224 SEO professionals. 72% of them believe AI content ranks as well as or better than human-written content. The data says the opposite at the top of the page.

That's a striking disconnect. The people running AI content programmes are significantly more optimistic about performance than the ranking data justifies. There are two plausible explanations: either AI content is genuinely improving and respondents are seeing that improvement in their own results, or teams aren't comparing AI and human content performance as rigorously as Semrush did.
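If you'd rather test the question against your own content library than take either the survey or the study on trust, the comparison is straightforward to approximate. Below is a minimal sketch in Python, assuming a hypothetical content_performance.csv export where each article carries a production-method label, an average ranking position, and a referring-domain count; the file and column names are illustrative, not part of either study.

```python
# A sketch of the comparison most teams never run: label articles by how they
# were produced, then compare how they actually perform. The CSV and column
# names here are hypothetical; adapt them to your own rank-tracking export.
import pandas as pd

# Expected columns: url, content_type ("human", "ai_assisted", "ai_only"),
# avg_position (average rank for the target keyword), referring_domains
df = pd.read_csv("content_performance.csv")

summary = df.groupby("content_type").agg(
    articles=("url", "count"),
    median_position=("avg_position", "median"),
    top_3_share=("avg_position", lambda positions: (positions <= 3).mean()),
    median_referring_domains=("referring_domains", "median"),
)

print(summary.sort_values("median_position"))
```

If the median position and referring-domain figures for your unedited AI articles look materially worse than the rest, you're seeing the same pattern the studies describe.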

A second study from DigitalApplied tracked 4,200 articles over 16 months and found something more specific. Purely AI-generated content ranked an average of 23% lower than human-written content targeting the same keywords. But AI-drafted content with substantive human editing came within 4% of fully human writing.

The variable is not whether AI wrote the first draft. It's whether a human improved it before publishing.

🔑 Key finding

AI-assisted content (AI draft + human edit) performs within 4% of fully human-written content. Unedited AI content ranks 23% lower on average and acquires 61% fewer editorial backlinks over time.

Why the backlink gap matters more than the ranking gap

The ranking gap between AI and human content is real but manageable with good editorial process. The backlink gap is harder to fix. The DigitalApplied research found that AI-only content acquired 61% fewer editorial backlinks than human-written articles on comparable topics.

Backlinks remain one of the top ranking signals. And they compound — pages that earn links in months one through three continue to benefit from those links for years. Unedited AI content that fails to attract links doesn't just underperform now. It falls further behind over time.

The same study also tracked the gap over time: at month three, the average ranking gap between AI and human content was 14%; by month 16, it had grown to 31%. Human content accumulates authority. Pure AI content tends to plateau and then decline after algorithm updates.

What this means for your AI content workflow

If you're using AI to write first drafts and then publishing without meaningful human review, this data is a direct warning. Not about AI content being penalised — Google has consistently said it evaluates quality, not production method. The warning is about what "quality" actually means in practice.

Quality signals that AI drafts consistently lack: original experience, a direct first-person perspective, and genuine editorial depth.

These are also exactly the signals Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) is designed to reward. The reason human-written content outperforms AI at position one is not that it's human. It's that it tends to have these qualities, and unedited AI content tends not to.

The workflow that actually holds up

The Semrush and DigitalApplied studies point to the same conclusion: the human review step is not optional. But it also doesn't need to be a full rewrite. In practice, substantive human editing means adding specific examples, sharpening the intro, cutting generic filler, and giving the piece a clear editorial point of view.

None of this takes an hour. Done well, it takes 15–20 minutes per article. That's the investment that brings AI-assisted content within 4% of fully human writing and keeps you out of the 23% underperformance zone.

One more data point worth knowing

While Google still rewards human writing at the top of organic search, the picture looks different for AI-powered search. Research from Ahrefs found that Google's AI Overviews are more likely to cite AI-generated content than human-written content. Two different systems, two different incentive structures. What gets you to position one in organic search is not identical to what gets you cited in an AI Overview.

The practical implication: your content needs to serve both. Human-edited depth and expertise for traditional rankings. Structured, direct, clearly attributed answers for AI citation. The good news is these aren't in conflict — clear structure and authoritative sourcing improve both.
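One low-cost way to make answers structured and clearly attributed is FAQ structured data. The sketch below builds schema.org FAQPage markup in Python from two of the questions answered in this article; the schema.org types are standard, but whether any particular AI search system weighs this markup when choosing citations is an assumption on our part, not something the Ahrefs research measured.

```python
# A minimal sketch of FAQPage structured data for an article's FAQ section.
# schema.org's FAQPage/Question/Answer types are standard; treat the markup
# as a way to make answers explicit and machine-readable, not as a
# guaranteed citation signal in AI search.
import json

faq_items = [
    ("Does Google penalise AI-generated content?",
     "No. Google evaluates content quality, not production method. Unedited "
     "AI content underperforms because it lacks quality signals, not because "
     "it was written by AI."),
    ("How much human editing does AI content need to rank well?",
     "Substantive editing (specific examples, a sharper intro, less filler, "
     "a clear point of view) brings AI-assisted content within roughly 4% "
     "of fully human writing."),
]

structured_data = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq_items
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(structured_data, indent=2))
```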

Frequently Asked Questions

Does Google penalise AI-generated content?
No. Google's stated position is that it evaluates content quality, not the production method. AI content is not penalised for being AI-generated. It tends to underperform because it often lacks the specific signals — original experience, direct perspective, editorial depth — that Google's quality algorithms reward. The fix is human editing, not abandoning AI tools.
How much human editing does AI content need to rank well?
Based on the DigitalApplied study, substantive editing brings AI-assisted content within 4% of fully human writing. "Substantive" means adding specific examples, sharpening the intro, removing generic filler, and adding a clear editorial point of view. In practice this takes 15–20 minutes per article. Publishing without any editing produces a 23% ranking disadvantage on average.
If AI content ranks worse, why is everyone using it?
Because the comparison isn't AI content vs. human content. It's AI-assisted content vs. no content at all, or vs. content published at a fraction of the volume. A team that publishes 20 AI-assisted articles per month with good editorial review will outperform a team publishing 3 human-only articles, even if the per-article quality is marginally lower. Volume and consistency compound over time in SEO.
What does the Semrush AI content study actually measure?
Semrush analysed 42,000 blog pages ranking in the top 10 for 20,000 keywords. They used GPTZero to classify content as human-written, AI-assisted, or fully AI-generated, then compared ranking position distributions. The study also surveyed 224 SEO professionals about their perceptions of AI content performance. The AI detector caveat: these tools are imperfect and can misclassify human and AI content, which may introduce some noise in the classification.
Is AI content better or worse for AI-powered search like ChatGPT and Perplexity?
Different dynamic. Ahrefs research found that Google AI Overviews are actually more likely to cite AI-generated content than human-written content. ChatGPT and Perplexity have different citation patterns again. The general principle across all AI search systems: structured content, clear attribution, factual accuracy, and direct answers tend to perform well regardless of how the content was produced.