How to Read Your Results
Summary → Review → Next Step: Make the learning loop explicit.
The three-tile learning loop
Every Analyzer run produces three outputs. Understanding what each one tells you—and what to do with it—turns raw data into actionable improvement.
1. Summary Tile
What you see at a glance
Score
Your performance on this run (0-100 scale). Higher is better, but context matters.
Overestimation Δ
Gap between your self-rating and reviewer score. Aim for |Δ| < 5.
micro-TLX
Mental demand + frustration captured immediately after the run. Target < 60.
Quick check
The Summary tile tells you whether this run was calibrated (low Δ) and sustainable (low TLX). If both are green, you're on track.
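The two Summary checks can be sketched as a small helper. This is a minimal illustration, not the Analyzer's actual data model; the class and field names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SummaryTile:
    """Hypothetical container for one run's Summary metrics (all 0-100)."""
    score: int        # reviewer score for the run
    self_rating: int  # your own rating of the same run
    micro_tlx: int    # mental demand + frustration, captured right after

    @property
    def delta(self) -> int:
        # Overestimation Δ: self-rating minus reviewer score
        return self.self_rating - self.score

    def is_on_track(self) -> bool:
        # Calibrated (|Δ| < 5) and sustainable (TLX < 60)
        return abs(self.delta) < 5 and self.micro_tlx < 60

run = SummaryTile(score=82, self_rating=85, micro_tlx=45)
print(run.delta, run.is_on_track())  # 3 True
```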
2. Review & Learn
What deeper analysis reveals
Prediction accuracy
How close your pre-review quality prediction was to the actual score.
Miss patterns
Where your evaluation broke down—fluent-but-wrong, format-over-substance, etc.
Rubric alignment
Whether your criteria matched reviewer expectations.
Calibration moment
Review & Learn is where calibration happens. Spend 5 minutes here after every run to identify what you missed and why.
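One way to make prediction accuracy concrete is to track the average gap between your pre-review predictions and the reviewer's scores across runs. The function and data below are illustrative, not part of the Analyzer.

```python
def prediction_error(predictions: list[int], scores: list[int]) -> float:
    """Mean absolute gap between pre-review predictions and actual scores.

    A shrinking value across runs means your calibration is improving.
    """
    gaps = [abs(p - s) for p, s in zip(predictions, scores)]
    return sum(gaps) / len(gaps)

# Three runs: predicted quality vs. reviewer score (made-up numbers)
print(prediction_error([90, 80, 75], [70, 72, 73]))  # 10.0
```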
3. Next Step
What to do based on your results
Δ < 5, TLX < 40
You're calibrated and comfortable. Try a harder task or a faster pace.
Δ 5-15, TLX < 60
You're drifting. Review the calibration guide and practice prediction.
Δ > 15 or TLX > 60
Stop and reset. Your confidence or workload is unsustainable.
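The three bands above can be sketched as a small decision function. The band names and the fallback case are assumptions for illustration; the thresholds come from the table.

```python
def next_step(delta: float, tlx: float) -> str:
    """Map Overestimation Δ and micro-TLX to a suggested next step."""
    d = abs(delta)
    if d > 15 or tlx > 60:
        return "reset"        # stop and reset: confidence or workload unsustainable
    if 5 <= d <= 15:
        return "recalibrate"  # drifting: review the calibration guide
    if d < 5 and tlx < 40:
        return "level-up"     # calibrated and comfortable: harder task or faster pace
    return "steady"           # in between: keep practicing at the current level

print(next_step(3, 35))   # level-up
print(next_step(8, 50))   # recalibrate
print(next_step(20, 30))  # reset
```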
Quickstarts & Resources
Based on your results, pick your next learning path.
The loop continues
After reviewing results, run again, capture a new set of tiles, and trend your progress over time.
Next Steps
Ready to measure your AI impact? Start with a quick demo to see your Overestimation Δ and cognitive load metrics.