
Bias Spotting: The 7-Check Pass

A copy-paste checklist for systematically identifying bias in AI-generated outputs before they reach stakeholders.

Why bias slips through

Bias is invisible when you agree with the output. The 7-check pass forces you to look for bias even when the content "feels right"—especially then.

The 7-Check Pass

Copy this checklist and run it on every AI output:

  • Check 1 — Source Bias: Does the output over-rely on a single source, viewpoint, or data set?
  • Check 2 — Framing Bias: Is the problem/solution framed to favor one outcome? Would a different framing change the conclusion?
  • Check 3 — Recency Bias: Does the output overweight recent events/data while ignoring longer-term patterns?
  • Check 4 — Confirmation Bias: Does the output confirm what you (or the requester) already believe? Where's the counter-evidence?
  • Check 5 — Omission Bias: What's missing? What stakeholders, risks, or alternatives are not mentioned?
  • Check 6 — Authority Bias: Does the output defer to authority (brand names, titles, institutions) without evidence?
  • Check 7 — Anchoring Bias: Is the first number, date, or claim anchoring the rest of the analysis?
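For teams that review outputs in volume, the seven checks above can be captured as a small data structure so each review records a pass/fail per check. A minimal Python sketch, assuming a reviewer supplies a flag per check; the check names and the `run_pass` helper are illustrative, not part of any library:

```python
# The 7 checks as (name, question) pairs; names are shorthand for the checklist above.
SEVEN_CHECKS = [
    ("source", "Does the output over-rely on a single source or viewpoint?"),
    ("framing", "Would a different framing change the conclusion?"),
    ("recency", "Are longer-term patterns ignored in favor of recent data?"),
    ("confirmation", "Where is the counter-evidence?"),
    ("omission", "Which stakeholders, risks, or alternatives are missing?"),
    ("authority", "Is there evidence beyond brand names and titles?"),
    ("anchoring", "Is the first number, date, or claim driving the analysis?"),
]

def run_pass(flags: dict[str, bool]) -> list[str]:
    """Return the names of checks the reviewer flagged as biased."""
    return [name for name, _ in SEVEN_CHECKS if flags.get(name, False)]

caught = run_pass({"source": True, "omission": True})
print(caught)  # ['source', 'omission']
```

Keeping the checks in a fixed list also enforces the "run all 7, every time" rule: the loop never skips a check, even when the output looks agreeable.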

Quick Reference Card

| Check | Question | Red Flag |
|-------|----------|----------|
| 1. Source | Where did this come from? | Single source, no attribution |
| 2. Framing | How else could this be framed? | Only one framing considered |
| 3. Recency | What's the time horizon? | Last 6 months only |
| 4. Confirmation | What would disprove this? | No counter-evidence |
| 5. Omission | Who/what is missing? | Key stakeholder absent |
| 6. Authority | Is it the evidence, or the name? | "According to [Big Name]..." |
| 7. Anchoring | What's the first claim? | First number drives conclusion |

Worked Example: Strategy Recommendation

AI Output:

"Based on recent market trends, we recommend expanding into the enterprise segment. Gartner reports 40% growth in enterprise AI adoption. Our competitors (Acme, BigCorp) are already moving upmarket. Speed is critical—first-mover advantage in enterprise is well-documented."

7-Check Analysis:

| Check | Finding | Action |
|-------|---------|--------|
| 1. Source | Only Gartner cited | Request 2+ sources |
| 2. Framing | Framed as "expand or lose" | Ask: what if we deepen SMB instead? |
| 3. Recency | "Recent trends" undefined | Request 3-year trend data |
| 4. Confirmation | Confirms growth narrative | Ask: what markets are contracting? |
| 5. Omission | No mention of enterprise sales cost, cycle length | Add these to analysis |
| 6. Authority | Gartner + competitor names = authority appeal | Request evidence, not names |
| 7. Anchoring | "40% growth" anchors everything | Verify; check base rate |

Result: Output needs significant revision before stakeholder review.

Bias Detection Log Template

Track your catches to improve pattern recognition:

Bias Detection Log — [Date]

| Output ID | Bias Type | Description | Severity (1-3) | Action Taken |
|-----------|-----------|-------------|----------------|--------------|
| OUT-001 | Omission | Missing risk section | 3 | Returned for revision |
| OUT-002 | Framing | Only growth scenario | 2 | Added contraction scenario |
| OUT-003 | None found | — | — | Approved |
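The log rows above map naturally onto a small record type, which also makes the Week 4 review ("what bias type do you still miss?") easy to automate. A sketch in Python; the field names mirror the table columns, and the `blind_spot` helper is an illustrative addition, not a prescribed tool:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class LogEntry:
    output_id: str
    bias_type: str    # e.g. "Omission", "Framing", or "None found"
    description: str
    severity: int     # 1 (minor) to 3 (blocking); 0 if none found
    action: str

def blind_spot(log: list[LogEntry]) -> str:
    """Return the most frequently caught bias type, for the weekly review."""
    counts = Counter(e.bias_type for e in log if e.bias_type != "None found")
    return counts.most_common(1)[0][0] if counts else "None"

log = [
    LogEntry("OUT-001", "Omission", "Missing risk section", 3, "Returned for revision"),
    LogEntry("OUT-002", "Framing", "Only growth scenario", 2, "Added contraction scenario"),
    LogEntry("OUT-003", "None found", "", 0, "Approved"),
]
print(blind_spot(log))
```

A plain spreadsheet works just as well; the point is that every review produces a row, so patterns in your catches become visible over time.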

Training Your Bias Detection

"I used to catch 2 biases per 10 outputs. After 3 weeks with the 7-check pass, I'm catching 5-6. The omission check alone was worth it."
Senior Analyst

Practice Protocol

  1. Week 1: Run 7-check on 5 outputs. Log everything.
  2. Week 2: Focus on checks 2 and 5 (most commonly missed).
  3. Week 3: Time yourself—target under 2 minutes per output.
  4. Week 4: Review your log. What bias type do you still miss?

Calibration Exercise

Score these statements (biased or not, which type):

  1. "AI adoption is growing 50% year-over-year according to McKinsey."

    • [ ] Biased: _______________
    • [ ] Unbiased
  2. "Three options were considered; Option A best balances cost and risk."

    • [ ] Biased: _______________
    • [ ] Unbiased
  3. "Market analysis shows strong demand in North America and Europe."

    • [ ] Biased: _______________
    • [ ] Unbiased

(Answers: 1. Authority/Recency. 2. Potentially Omission: only three options were generated, so ask what alternatives beyond A, B, and C were never considered. 3. Omission: what about APAC and LATAM?)

Apply this now

Practice prompt

Run the 7-check pass on 5 AI outputs this week and log your catches.

Try this now

Copy the checklist above and tape it next to your monitor.

Common pitfall

Skipping the checklist when output 'looks good'—bias hides in agreeable content.

Key takeaways

  • Run all 7 checks on every output—selective checking misses systematic biases
  • Checks 2 (framing) and 5 (omission) catch the most issues reviewers miss
  • Log your catches weekly to identify your personal blind spots

Next Steps

Ready to measure your AI impact? Start with a quick demo to see your Overestimation Δ and cognitive load metrics.
