Why run this experiment

Writing social posts is the task most marketers say they'll batch properly and never do. You're staring at a blinking cursor at 8am trying to be insightful about marketing automation before your second coffee. It's a grind.

So I handed it entirely to AI for 30 days. Every caption for X and LinkedIn went through Claude with a structured prompt — no manual rewrites. The goal was simple: could AI-generated social content perform well enough to justify removing this task from my plate entirely?

Experiment setup

Duration: 30 days (March 2026)
Platforms: X and LinkedIn
Posts per platform: 2 per day, scheduled via Buffer
AI tool: Claude (claude.ai)
Human input: brief + topic only; no editing of captions before posting
Audience size at start: small, under 500 followers on each platform

The numbers

Posts published: 60
Avg daily time spent: 14 min
Reach vs. previous month: +31%
Posts that flopped badly: 4

The reach increase needs context: the previous month was sporadic posting. Consistency alone likely explains a significant portion of that number. Still, 14 minutes a day for 60 published posts is a meaningful time saving.

What performed best

Contrarian takes consistently outperformed everything else. Posts that challenged a common assumption — "AI tools don't save time, bad workflows do" — pulled 3–4x the engagement of informational posts. The AI generated these well when prompted with a specific counter-argument angle.

Short, sharp observations beat lists. The "→ arrow list" format underperformed vs. single-insight posts. This surprised me. The data suggests the algorithm rewards posts that feel like genuine thoughts rather than structured content.

LinkedIn outperformed X on reach for every format. Same content, meaningfully different distribution. LinkedIn's algorithm was kinder to new accounts with consistent posting.

What flopped

Four posts got near-zero reach. Three of them shared a pattern: they were factually correct, well-structured, and completely unmemorable. AI defaults to competent when it doesn't have a strong brief. "Here are 3 ways AI helps content marketers" is accurate and invisible.

The fourth flop was an experiment-result post where the AI generated a fake-feeling statistic without a clear source. I caught the stat before it went out, barely, and the post still fell flat without it. That's the one process failure in 30 days. Manual review of any specific data claims is non-negotiable.
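The original workflow relied on a manual review pass to catch claims like that. A lightweight pre-flight check can make the pass harder to skip; the sketch below is not from the article's setup, and the pattern list is illustrative, but it shows the idea: scan a draft for anything that looks like a specific number and force a human to verify each hit.

```python
import re

# Phrases that usually signal a specific data claim worth verifying:
# dollar amounts, percentages, multipliers ("3x"), and numbers attached
# to units like "minutes" or "followers". Illustrative, not exhaustive.
CLAIM_PATTERN = re.compile(
    r"\$\d+(?:\.\d+)?"
    r"|\d+(?:\.\d+)?\s*(?:%|x\b|percent|minutes?|hours?|followers?)",
    re.IGNORECASE,
)

def flag_data_claims(post: str) -> list[str]:
    """Return every phrase in a draft post that looks like a data claim."""
    return CLAIM_PATTERN.findall(post)

draft = "AI tools cut drafting time by 68% and saved me 40 minutes a day."
print(flag_data_claims(draft))  # → ['68%', '40 minutes']
```

Anything the function returns gets checked against a real source before the post is scheduled; an empty list means the draft carries no specific numbers to verify.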

The prompting system that worked

Week one was rough. Generic prompts produced generic posts. By week two I had settled on a structured brief that consistently produced usable output: the platform, one specific angle or counter-argument, and the real data or opinion backing it.
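The exact template isn't reproduced here, so this is a guess at its shape based on what worked in the experiment: a specific angle rather than a topic, real evidence, and hard constraints. The field names and the helper function are illustrative.

```python
def build_brief(platform: str, angle: str, evidence: str, constraints: str) -> str:
    """Assemble a structured social-post brief for an AI drafting session.

    All four fields are required on purpose: the hit rate roughly doubled
    when the brief carried a specific angle and real data.
    """
    return (
        f"Platform: {platform}\n"
        f"Angle: {angle}\n"            # a specific claim or counter-argument, not a topic
        f"Evidence: {evidence}\n"      # real numbers or experience to ground the post
        f"Constraints: {constraints}"  # voice, length, formats to avoid
    )

brief = build_brief(
    platform="LinkedIn",
    angle="Challenge the idea that AI tools save time; bad workflows waste it",
    evidence="Tested 4 tools, lost 3 weeks to setup before seeing any gain",
    constraints="Under 150 words, first person, no arrow lists, no hashtags",
)
print(brief)
```

The point of making every field mandatory is that the brief fails loudly when you try to skip the evidence, which is exactly the shortcut that produces the invisible "3 ways AI helps" posts.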

With that brief, hit rate was around 80% — meaning 4 in 5 posts were publishable without edits. Without it, hit rate was closer to 40%.

Would I recommend this workflow?

Yes, with one condition: invest 5 minutes in the brief, not zero. The difference between a good AI social post and a generic one is entirely determined by how specific your input is. "Write a LinkedIn post about AI" produces noise. "Write a LinkedIn post challenging the idea that AI tools save time, from the perspective of someone who tested 4 tools and wasted 3 weeks" produces something worth posting.

The 14 minutes a day is real. It includes brief writing, one review pass, and scheduling. Done manually, the same task would have taken 45–60 minutes with the same quality ceiling.

Frequently Asked Questions

Which AI tool works best for social post writing?
Claude performed well in this test for its ability to follow nuanced voice instructions. ChatGPT and Gemini are also viable. The tool matters less than the brief quality — a good prompt produces good output regardless of model.
Can you schedule AI-written posts directly via Buffer?
Yes. The workflow here was: write a batch of 10 posts in Claude, copy them into Buffer's scheduling interface, and set them to auto-schedule. That takes roughly 20 minutes for a full week of content on two platforms.
Does AI social content feel inauthentic?
It can — specifically when the post tries to simulate personal experience it doesn't have. The solution is to feed real data, real results, and real opinions into the prompt. AI is a writing engine; the insight still has to come from you.
What's the biggest risk of this workflow?
Fabricated statistics. AI will occasionally invent a plausible-sounding number. Always verify any specific data claim before posting. One bad stat posted publicly does more damage than a month of slightly generic content.
Is this approach suitable for brand accounts, not just personal?
Yes, but brand accounts need tighter brief templates with defined voice guidelines. Feed the AI your brand voice document at the start of each session. Consistency matters more for branded channels than personal ones.
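The article doesn't show how the voice document actually gets fed in, so the sketch below rests on assumptions: a chat-style message list (role/content dicts, the shape most LLM APIs accept) with the voice guidelines prepended before the brief. The function and field names are illustrative.

```python
def session_messages(voice_text: str, brief: str) -> list[dict]:
    """Open a drafting session with brand voice guidelines first, then the
    post brief, so every session starts from the same voice baseline."""
    return [
        {"role": "user",
         "content": "Follow these voice guidelines strictly:\n\n" + voice_text},
        {"role": "user",
         "content": "Write one social post from this brief:\n\n" + brief},
    ]

msgs = session_messages(
    voice_text="Plain language. No buzzwords. Second person. Short sentences.",
    brief="Platform: LinkedIn\nAngle: Consistency beats cleverness for new accounts",
)
```

Pinning the voice document to the top of every session, rather than pasting it ad hoc, is what keeps a branded channel sounding like one author across a month of batched posts.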