The experiment
Starting from zero — no existing audience, no domain authority, no social following — I built and launched BuzzRiding using AI as the primary content engine. Claude wrote the briefs, the articles, the SEO metadata, the social posts, and the newsletter copy. My role was to direct, review, and occasionally rewrite the intro.
This is the honest account of what that produced.
📋 Experiment Parameters
Duration: 30 days. Articles published: 8. AI tool: Claude (claude.ai). Human review time per article: 15–20 minutes. Total active time on content: approximately 3 hours over the month.
The results
The traffic number needs context. Zero organic traffic in month one is expected — not a failure. A new domain with zero authority takes 3–6 months to appear in Google results, regardless of content quality. The articles are indexed. The clock is running.
What the AI did well
Speed was extraordinary. From keyword research brief to publish-ready article in under 45 minutes. For eight articles, that's roughly 6 hours of end-to-end production time, only about 3 of which were my own active hours. No human writer works at that pace.
Consistency was better than expected. Keeping a consistent brand voice across 8 articles written in separate sessions is genuinely hard for a human. Claude maintained the BuzzRiding voice — friendly, data-led, jargon-free — across every piece with minimal correction.
SEO structure was solid out of the box. H2 structure, FAQ sections, meta descriptions — all done correctly on the first pass. No SEO specialist needed.
What the AI did badly
Intros were the weakest element. AI-generated article openings are recognisably formulaic. They state the problem, promise to answer it, and get to the point. That's correct structure — but it's also predictable. Every intro needed a human rewrite.
Specificity required prompting. Left to generate freely, Claude produces accurate but generic content. The quality gap between a generic prompt and a detailed brief with specific data points, examples, and angle guidance is enormous. The workflow works — but only if you invest in the brief.
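To make the point concrete, here is the general shape of a detailed brief. The topic, numbers, and data points below are hypothetical placeholders for illustration, not an actual BuzzRiding brief; only the voice description ("friendly, data-led, jargon-free") comes from the real project:

```markdown
## Article brief (hypothetical example — illustrative topic and numbers only)

**Working title:** How Often Should You Really Service a Commuter Bike?
**Target keyword:** bike service intervals
**Angle:** Most service-interval advice is copied straight from manufacturer
manuals; lead with real-world commuter mileage instead.

**Specific data points to include (placeholders):**
- Typical chain-wear check interval for daily commuters
- How brake pad life differs between dry and wet climates

**Voice:** friendly, data-led, jargon-free (house style)
**Structure:** one H2 per reader question, FAQ section at the end,
meta description supplied with the draft
```

A generic prompt ("write an article about bike service intervals") and a brief like this produce very different drafts; the brief is where the human time actually goes.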
No genuine experience. The articles are well-structured and informative. They don't have the lived-experience texture that the best content writing has. That's a real limitation — one that becomes more important as Google's E-E-A-T signals mature.
Would I do it again?
Yes. Without hesitation. The alternative — writing 8 articles manually in 30 days while building everything else from scratch — was not realistic. AI made the project possible.
The correct framing is not "AI vs. human writing". It's "AI-assisted writing vs. no content at all". At the zero-budget, solo-operator stage, the comparison is obvious.
The quality ceiling is real. As the brand matures and develops genuine audience data and real experiments to reference, the content will need to develop a thicker layer of authentic human experience. That's a month 3+ problem.