The speed illusion
AI generates output instantly, but generation time isn't total time. If you spend 30 minutes evaluating a 2-minute AI output, the task cost 32 minutes, not 2. Know when AI adds friction, not flow.
Three Signs AI Is Slowing You Down
Sign 1: TLX Spikes During Evaluation
What it looks like: AI generates output quickly, but you feel exhausted reviewing it. Mental demand or frustration hits 60+ on the TLX scale.
Why it happens: AI output requires more cognitive effort to verify than to create manually. You're doing System-2 evaluation on System-1 generation.
What to do:
- Track TLX per task type
- If evaluation TLX is consistently high, try manual next time
- Compare total time (generation + evaluation) vs manual time
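The comparison above is simple arithmetic, but it's easy to skip. A minimal sketch of a per-task log that makes it explicit (the class and field names are illustrative, not from any real tool):

```python
from dataclasses import dataclass

@dataclass
class TaskLog:
    task_type: str
    generation_min: float      # minutes the AI took to produce output
    evaluation_min: float      # minutes you spent reviewing it
    evaluation_tlx: int        # 0-100 self-rated load during review
    manual_estimate_min: float # honest guess at doing it by hand

    def total_ai_min(self) -> float:
        # Total time = generation + evaluation, not generation alone.
        return self.generation_min + self.evaluation_min

    def ai_was_slower(self) -> bool:
        return self.total_ai_min() > self.manual_estimate_min

log = TaskLog("status report", 2, 30, 65, 20)
print(log.total_ai_min(), log.ai_was_slower())  # → 32 True
```

A few entries per task type is enough to see whether high-TLX reviews correlate with tasks where AI loses on total time.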
Sign 2: Iteration Loops
What it looks like: Prompt → output → not quite right → refined prompt → output → still not right → another prompt...
Why it happens: AI lacks context you have. Each iteration is you teaching it what you already know.
What to do:
- Count iterations per task
- If more than 3 iterations: stop, do it manually
- Log: "What context did AI lack?"
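The stop rule above can be encoded as a simple guard. The 3-iteration budget is this section's rule of thumb; the function name and the logged entry are ours:

```python
def should_go_manual(iterations: int, budget: int = 3) -> bool:
    """Rule of thumb from above: more than `budget` prompts
    for one output means stop and do the task manually."""
    return iterations > budget

# Hypothetical session: when you bail out, log what context the AI lacked.
context_gaps: list[str] = []
if should_go_manual(iterations=4):
    context_gaps.append("internal naming conventions")  # illustrative entry
print(context_gaps)  # → ['internal naming conventions']
```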
Sign 3: Reviewer Rejections
What it looks like: You submit AI-assisted work. Reviewer sends it back. You fix it. More feedback. Repeat.
Why it happens: AI optimizes for your prompt, not your reviewer's standards. The gap costs time.
What to do:
- Track reviewer feedback cycles per output type
- If AI outputs get more feedback than manual: investigate
- Consider: Does reviewer know it's AI-assisted?
Quick Self-Check
- ✓ TLX during evaluation: Is it higher than the task warrants?
- ✓ Iteration count: More than 3 prompts for one output?
- ✓ Reviewer feedback: More cycles than for manual work?
- ✓ Total time: Generation + evaluation + iteration + review
- ✓ Honest comparison: Would manual have been faster?
Tasks Where Manual Often Wins
Institutional Knowledge Required
AI doesn't know your company's history, politics, or unwritten rules. If the task requires "how we do things here," manual is faster.
Examples:
- Stakeholder communication with relationship context
- Process documentation for internal systems
- Decisions requiring organizational history
Nuanced Judgment Required
AI provides generic best practices. If the task requires "it depends" thinking, manual lets you skip to the answer.
Examples:
- Exception handling for edge cases
- Prioritization with political constraints
- Feedback that requires reading between lines
Output Shorter Than Prompt
If explaining the task to AI takes longer than doing it yourself, skip AI.
Examples:
- Quick email replies
- Simple code fixes
- Meeting notes for meetings you attended
Reducing AI Friction
If you want to keep using AI but reduce slowdown:
Reduce Options
Ask for 2 options, not 10. Decision fatigue from too many AI suggestions is real.
Provide More Context
Front-load context in your prompt. Less iteration = less time.
Accept "Good Enough"
Perfectionism on AI output wastes time. If it's 80% there, finish manually.
Assign to Expert Reviewer
For complex outputs, send them directly to an expert reviewer instead of self-reviewing first.
"I tracked my AI time for a month. Half my tasks were faster manual. Now I use AI selectively—and I'm more productive overall."
The Honest Tracking Framework
For one week, log this for every task:
| Task | AI Used? | Generation Time | Evaluation Time | Iterations | Reviewer Cycles | Total Time | Manual Estimate |
|------|----------|-----------------|-----------------|------------|-----------------|------------|-----------------|
At week's end: Which tasks had AI total time > manual estimate?
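One way to answer that week-end question is to script it. A minimal sketch assuming the log is a list of dicts mirroring the table's columns (times in minutes; the tasks and values are made up):

```python
week_log = [
    {"task": "release notes", "ai_used": True, "generation": 3, "evaluation": 25,
     "iterations": 4, "reviewer_cycles": 2, "manual_estimate": 20},
    {"task": "sprint summary", "ai_used": True, "generation": 2, "evaluation": 5,
     "iterations": 1, "reviewer_cycles": 0, "manual_estimate": 30},
]

def total_time(row: dict) -> int:
    # In this sketch, iteration and review time are assumed folded
    # into the generation and evaluation columns.
    return row["generation"] + row["evaluation"]

# Tasks where AI-assisted total time exceeded the manual estimate.
slower_with_ai = [r["task"] for r in week_log
                  if r["ai_used"] and total_time(r) > r["manual_estimate"]]
print(slower_with_ai)  # → ['release notes']
```

Those flagged tasks are the candidates to do manually next week.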
Related Resources
- Productivity Pack — measure AI impact systematically
- Micro-TLX Guide — track cognitive load
Apply this now
Practice prompt
Track total AI time vs. estimated manual time for your next three tasks.
Try this now
Run the Productivity Pack and honestly compare AI vs. manual for each task type.
Common pitfall
Counting only generation time: evaluation and iteration are where the slowdown hides.
Key takeaways
- Track total time (generation + evaluation + iteration + review), not just generation
- Three warning signs: TLX spikes, iteration loops, reviewer rejections
- Some tasks are faster done manually: institutional knowledge, nuanced judgment, short outputs
See it in action
Drop this into a measured run—demo it, then tie it back to your methodology.
See also
Pair this play with related resources, methodology notes, or quickstarts.
Next Steps
Ready to measure your AI impact? Start with a quick demo to see your Overestimation Δ and cognitive load metrics.