Pillar POV
NASA-TLX for Knowledge Work: Two Sliders, Real Decisions
Capture workload without slowing the team. Two sliders after every run, tied straight to Δ and demo tiles.
If your TLX process takes more than 15 seconds, nobody will log it. Compress it and wire it to your Analyzer evidence.
Executive TL;DR
- TLX >65 predicts quality debt within 1–2 sprints; catch fatigue before burnout hits
- A 15-second, 2-slider capture (mental demand + frustration) achieves 90%+ adoption when embedded
- Teams correlating TLX + Δ see 40% fewer rework cycles and faster skill escalation
Do this week: Embed the 2-slider TLX modal in your Analyzer demo and publish the interpretation guide
Why TLX matters even when teams “feel fine”
Fatigue does not show up in burndown charts. It shows up in reviewer rework, prompt drift, and subtle Δ spikes. TLX gives you the earliest warning. When mental demand or frustration climbs above ~65/100, expect quality debt within 1–2 sprints.
Make TLX native to the workflow
Embed the two sliders inside your Analyzer run. The modal already exists—just force the save before people exit the results screen.
Implementation checklist
- Use the 2-item scale only (mental demand + frustration). Anything longer kills adoption.
- Link every TLX entry to an Analyzer run ID so you can pivot by pack, persona, or reviewer.
- Publish a tiny key (0–30 = Flow, 30–60 = Watch, 60+ = Intervention) on /help/tlx and reference it in retros.
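The checklist above can be sketched as a minimal capture record. This is an illustrative sketch, not a shipped API: the `RunTLX` name, the choice to band on the worse of the two sliders, and the exact band boundaries are assumptions layered on the published 0–30/30–60/60+ key.

```python
from dataclasses import dataclass


def tlx_band(score: float) -> str:
    """Map a 0-100 TLX slider score to the published interpretation key."""
    if score < 30:
        return "Flow"
    if score < 60:
        return "Watch"
    return "Intervention"


@dataclass
class RunTLX:
    run_id: str           # Analyzer run ID, so entries pivot by pack, persona, or reviewer
    mental_demand: float  # 0-100 slider
    frustration: float    # 0-100 slider

    @property
    def band(self) -> str:
        # Assumption: band on the worse slider, since one red slider is enough to act on.
        return tlx_band(max(self.mental_demand, self.frustration))


entry = RunTLX(run_id="run-42", mental_demand=72, frustration=35)
print(entry.band)  # Intervention
```

Keeping the record this small is the point: two numbers and a run ID are enough to pivot TLX by pack or reviewer later.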
From TLX to action
- Capture: Two sliders in the Analyzer immediately after “Finish attempt.”
- Compare: Stack TLX against Δ. High TLX with low Δ = burnout. High TLX with high Δ = skill gaps.
- Coach: If mental demand stays above 70 for three runs, bake a breathing room step into the Fair Trial checklist.
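The capture → compare → coach loop above can be encoded as two small checks. The function names, the TLX cutoff of 65, and the Δ scale (treated here as 0–1 with 0.2 as "high") are illustrative assumptions for the sketch:

```python
def triage(tlx: float, delta: float, high_tlx: float = 65, high_delta: float = 0.2) -> str:
    """Compare a run's TLX against its Overestimation Δ (assumed 0-1 scale)."""
    if tlx < high_tlx:
        return "healthy"
    # High TLX with low Δ: effort without overestimation -> burnout risk.
    # High TLX with high Δ: effort plus overestimation -> skill gap.
    return "burnout-risk" if delta < high_delta else "skill-gap"


def needs_breathing_room(mental_demand_history: list[float],
                         limit: float = 70, runs: int = 3) -> bool:
    """True if mental demand stayed above `limit` for the last `runs` runs."""
    recent = mental_demand_history[-runs:]
    return len(recent) == runs and all(m > limit for m in recent)


print(triage(78, 0.05))                    # burnout-risk
print(needs_breathing_room([72, 75, 80]))  # True
```

When `needs_breathing_room` fires, that is the trigger for adding the breathing-room step to the Fair Trial checklist rather than waiting for the retro.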
“We started gating launches on TLX deltas >15. Suddenly, teams self-corrected before the VP review.”
Keep the loop closed
Share /help/tlx with every new team, and make the Analyzer demo mandatory in onboarding so people feel the sliders before their first real run. Every TLX entry should link back to a decision: did we pause, pair, or revise the pack?
Apply this now
Choose your next step to put these concepts into practice
- Run Interactive Demo: experience the evaluation flow with sample tasks and see Δ + TLX in action.
- PM Quickstart Guide: a product manager's guide to measuring AI impact and building evidence.
Want to understand the science? Review our methodology
Share this POV
Paste the highlights into your next exec memo or stand-up. Link back to this pillar so others can follow the full reasoning.
Next Steps
Ready to measure your AI impact? Start with a quick demo to see your Overestimation Δ and cognitive load metrics.