Why run this experiment
Writing social posts is the task most marketers say they'll batch properly and never do. You're staring at a blinking cursor at 8am trying to be insightful about marketing automation before your second coffee. It's a grind.
So I handed it entirely to AI for 30 days. Every caption for X and LinkedIn went through Claude with a structured prompt — no manual rewrites. The goal was simple: could AI-generated social content perform well enough to justify removing this task from my plate entirely?
📋 Experiment Setup
- Duration: 30 days (March 2026)
- Platforms: X and LinkedIn
- Posts per platform: 2 per day, scheduled via Buffer
- AI tool: Claude (claude.ai)
- Human input: brief + topic only, no editing of captions before posting
- Audience size at start: small, under 500 followers on each platform
The numbers
The reach increase needs context: the previous month's posting was sporadic, so consistency alone likely explains a significant portion of that increase. Still, 14 minutes a day for 60 published posts is a meaningful time saving.
What performed best
Contrarian takes consistently outperformed everything else. Posts that challenged a common assumption — "AI tools don't save time, bad workflows do" — pulled 3–4x the engagement of informational posts. The AI generated these well when prompted with a specific counter-argument angle.
Short, sharp observations beat lists. The "→ arrow list" format underperformed single-insight posts, which surprised me. The data suggests the algorithm rewards posts that feel like genuine thoughts rather than structured content.
LinkedIn outperformed X on reach for every format. Same content, meaningfully different distribution. LinkedIn's algorithm was kinder to new accounts with consistent posting.
What flopped
Four posts flopped. Three of them got near-zero reach and shared a pattern: they were factually correct, well-structured, and completely unmemorable. AI defaults to competent when it doesn't have a strong brief. "Here are 3 ways AI helps content marketers" is accurate and invisible.
The fourth never went out at all: it was an experiment-results post where the AI generated a fake-feeling statistic without a clear source. I caught it in the review pass, barely. That's the one process failure in 30 days. Manual review of any specific data claim is non-negotiable.
The prompting system that worked
Week one was rough. Generic prompts produced generic posts. By week two I had settled on a format that consistently produced usable output:
- Topic: One specific idea or finding (not a broad theme)
- Angle: Contrarian / surprising result / counter-intuitive
- Format: Insight post, under 200 characters
- Voice: Practitioner sharing what they tested, not brand announcement
With that brief, hit rate was around 80% — meaning 4 in 5 posts were publishable without edits. Without it, hit rate was closer to 40%.
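If you'd rather script the brief than paste it into claude.ai each morning, here's a minimal sketch of the same four-part structure driven through the Anthropic API instead. The `build_brief` helper, the example brief text, and the model name are mine for illustration; they aren't part of the experiment's actual setup:

```python
# Minimal sketch: turn the four-part brief into a prompt and send it to Claude.
# Assumes the official anthropic Python SDK and an ANTHROPIC_API_KEY env var;
# in the actual experiment the brief was pasted into claude.ai by hand.
import anthropic


def build_brief(topic: str, angle: str, fmt: str, voice: str) -> str:
    """Assemble the structured brief into a single prompt string."""
    return (
        "Write a social post.\n"
        f"Topic: {topic}\n"
        f"Angle: {angle}\n"
        f"Format: {fmt}\n"
        f"Voice: {voice}\n"
        "Return only the post text, no preamble."
    )


client = anthropic.Anthropic()

prompt = build_brief(
    topic="A specific brief, not a better model, is what makes AI captions publishable",
    angle="Contrarian: the tool isn't the bottleneck, the brief is",
    fmt="Insight post, under 200 characters",
    voice="Practitioner sharing what they tested, not brand announcement",
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name; use whichever you have access to
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

The point of the sketch is that the brief is just four short fields; whether you fill them in a chat window or a script, that's where the 5 minutes of effort goes.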
Would I recommend this workflow?
Yes, with one condition: invest 5 minutes in the brief, not zero. The difference between a good AI social post and a generic one is entirely determined by how specific your input is. "Write a LinkedIn post about AI" produces noise. "Write a LinkedIn post challenging the idea that AI tools save time, from the perspective of someone who tested 4 tools and wasted 3 weeks" produces something worth posting.
The 14 minutes a day is real. It covers brief writing, one review pass, and scheduling. Done manually, the same work would have taken 45–60 minutes with the same quality ceiling.