
Quickstart · Innovation PM

Turn exec rituals into measurable Δ and TLX tiles.

Use this primer to run your first Analyzer pack as an innovation PM, and make your discovery briefs, canvas reviews, and stakeholder narratives defensible within a week.

What this measures

This quickstart focuses on your core rituals: shaping bets, socializing launches, and recapping pilot outcomes. You will run the Innovation PM pack twice—manual then AI-assisted—to collect Overestimation Δ, TLX pulses, and reviewer-ready notes. The goal is proof, not theater: you will leave with tiles you can paste directly into your next exec readout.
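
In data terms, each attempt reduces to one small record. Below is a minimal Python sketch of what you'll be logging; the field names are illustrative, not the Analyzer's actual schema:

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    """One attempt at a task: the manual baseline or the AI-assisted repeat."""
    task: str               # e.g. "Venture brief refresh"
    mode: str               # "manual" or "ai"
    expected_minutes: int   # your pre-run estimate
    actual_minutes: int     # measured wall-clock time
    tlx: float              # TLX pulse captured right after the attempt
    reviewer_minutes: int   # rework and review time logged afterward

# Round one (manual), then round two (AI-assisted) of the same task:
baseline = RunRecord("Venture brief refresh", "manual", 60, 75, 62.0, 10)
assisted = RunRecord("Venture brief refresh", "ai", 25, 40, 48.0, 18)
```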

Common pitfalls

  • Skipping a manual baseline. Without round-one data, the Δ is meaningless.
  • Letting prompts drift mid-run. Lock the scaffolds before you compare; a fingerprint sketch follows this list.
  • Forgetting reviewer minutes. Execs want to know whether “faster” also means rework.
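
One cheap way to enforce that scaffold lock: fingerprint each prompt before round one and check it again before round two. A minimal sketch, assuming the scaffold lives in a plain text file (the filename is hypothetical):

```python
import hashlib

def scaffold_fingerprint(prompt: str) -> str:
    """Stable fingerprint of a prompt scaffold; any edit changes it."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

# Before round one: record the fingerprint alongside your scenario notes.
locked = scaffold_fingerprint(open("brief_scaffold.txt").read())

# Just before the AI-assisted run: verify nothing drifted.
current = scaffold_fingerprint(open("brief_scaffold.txt").read())
assert current == locked, "Scaffold drifted mid-run; re-baseline before comparing"
```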

Three task examples

  • Task 1: Venture brief refresh

    Rewrite a one-pager with and without AI and log where AI hallucinates or omits guardrails.

  • Task 2: Experiment canvas controls

    Capture the Fair Trial levers (order, timebox, rubric) as written-down controls; a sample config follows these examples. This exposes whether AI lift survives audit.

  • Task 3: Exec narrative rehearsal

    Summarize the pilot for the COO, then compare reviewer edits vs. AI edits.
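
For Task 2, the Fair Trial levers are easiest to hold steady if you write them down as data before round one. A minimal sketch; every name in it is illustrative rather than an Analyzer API:

```python
# Fair Trial controls for one task. Write these once, before round one,
# and never touch them mid-run. All names here are illustrative.
FAIR_TRIAL = {
    "order": ["manual", "ai"],   # fixed run order; never swapped mid-experiment
    "timebox_minutes": 45,       # identical cap for both attempts
    "rubric": "exec-brief-v2",   # one rubric, pinned by name and version
    "reviewer": "pat",           # the same reviewer scores both rounds
}
```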

Before you start

  • Lock the rubric + reviewer. Consistency beats speed.
  • Write down the scenario and prompt scaffolds once. No mid-run tweaks.
  • Plan to capture TLX immediately after each attempt (takes <15s).
  • Export tiles into your exec deck the same day so evidence stays fresh.
[Figure: Task Frontier · Error cost × Tacitness. Tasks plotted by error cost and tacitness: brief update, spec review, exec narrative.]

[Figure: System-1 ↔ System-2 attention shift. TLX spikes when context switching repeatedly.]

7-Step Evaluation Process

Follow our proven methodology for accurate AI evaluations

Step 1 · Manual Baseline
Complete the task without AI assistance; log time and quality metrics.

Step 2 · AI-Assisted Run
Repeat the same task with AI tools, keeping the rubric consistent.

Step 3 · Calculate Delta (Δ)
Measure the gap between expected and actual AI performance.
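
However the pack reports it, the Δ arithmetic is worth seeing once. A minimal sketch, assuming Overestimation Δ is defined as predicted time saved minus observed time saved, so a positive Δ means you overestimated the AI (the function name and sign convention are ours, not the pack's):

```python
def overestimation_delta(expected_min: float, manual_min: float, ai_min: float) -> float:
    """Overestimation Δ in minutes: the time you expected AI to save
    minus the time it actually saved. Positive Δ = you overestimated."""
    expected_saving = manual_min - expected_min  # predicted before the AI run
    actual_saving = manual_min - ai_min          # what the stopwatch says
    return expected_saving - actual_saving

# Predicted the AI run would take 25 min against a 75-min baseline; it took 40.
print(overestimation_delta(expected_min=25, manual_min=75, ai_min=40))  # 15.0
```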

Step 4 · Assess TLX Workload
Evaluate cognitive load across six dimensions.
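
TLX here is the NASA Task Load Index: six dimensions (mental, physical, and temporal demand, performance, effort, frustration) rated immediately after an attempt. Assuming the pack uses the common unweighted variant (Raw TLX) on a 0-100 scale, the pulse score is just the mean of the six ratings:

```python
# The six NASA-TLX dimensions, each rated 0-100 right after an attempt.
DIMENSIONS = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings: dict[str, float]) -> float:
    """Unweighted (Raw) TLX: the mean of the six dimension ratings."""
    missing = set(DIMENSIONS) - ratings.keys()
    if missing:
        raise ValueError(f"Missing TLX ratings: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

pulse = {"mental": 70, "physical": 10, "temporal": 55,
         "performance": 30, "effort": 60, "frustration": 45}
print(raw_tlx(pulse))  # 45.0
```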

Step 5 · Review Minutes
Document quality issues, rework time, and reviewer notes.

Step 6 · Coach & Calibrate
Adjust expectations and refine your approach based on the data.

Step 7 · Publish Evidence Tiles
Share results with stakeholders using a standardized format.
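
The quickstart doesn't pin down the tile export itself, so treat this as a hypothetical paste-ready layout rather than the Analyzer's own format:

```python
# Hypothetical tile layout: the actual Analyzer export may differ.
def render_tile(task: str, delta_min: float, tlx_manual: float,
                tlx_ai: float, reviewer_min: int) -> str:
    """Render one evidence tile as paste-ready Markdown for an exec deck."""
    return (
        f"**{task}**\n"
        f"- Overestimation Δ: {delta_min:+.0f} min\n"
        f"- TLX: {tlx_manual:.0f} → {tlx_ai:.0f}\n"
        f"- Reviewer minutes: {reviewer_min}\n"
    )

print(render_tile("Venture brief refresh", 15, 62, 48, 18))
```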

📅 Your First Week Plan

Day 1: Complete manual baseline (Step 1)
Day 2: Review key resources and prepare AI tools
Day 3: Run AI-assisted evaluation (Step 2)
Days 4-5: Calculate metrics and review results (Steps 3-5)
Day 6: Share tiles with team (Step 7)
Day 7: Team retro and plan next experiment (Step 6)

Ready for the first pack?

Bookmark this quickstart. After each Analyzer run, drop the TLX snapshot and Overestimation Δ tiles into your stand-up doc so progress stays visible.
