Verify claims in backlog items, skill documentation, or plugin content against primary sources using web lookups. Spawns parallel verification agents that MUST use WebFetch/WebSearch/gh — training data recall is explicitly rejected as evidence. Produces VERIFIED/REFUTED/INCONCLUSIVE verdicts with citations. Triggers on "fact check", "verify claims", "check against primary sources", or when backlog items are marked UNVERIFIED.
Install with the Tessl CLI:

```shell
npx tessl i github:Jamie-BitFlight/claude_skills --skill fact-check99
```
Verify factual claims against primary sources using web lookups. Training data recall is NOT evidence.
Scope: backlog items marked UNVERIFIED. For root-cause investigation, use find-cause instead; for open-ended research, use research-curator instead.

<evidence_rules>
Adapted from the find-cause evidence chain protocol.
Valid evidence (one of these MUST support each verdict):

- `npx <tool> --help`, `gh api`, or similar CLI output
- Repository inspection via `gh` or clone
- `mcp__Ref` or `mcp__exa` results with URLs

NOT valid evidence (these MUST NOT support verdicts):

- Training data recall
</evidence_rules>
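A minimal sketch of how those rules could be enforced mechanically; the helper name and record shape are assumptions, not part of the skill:

```python
# Hypothetical gate for the evidence rules above: a record is admissible
# only if its source is a URL or a concrete CLI/gh command, never recall.
def is_admissible(evidence: dict) -> bool:
    source = evidence.get("source", "")
    looks_like_url = source.startswith(("http://", "https://"))
    looks_like_cli = source.startswith(("npx ", "gh ", "git "))
    return looks_like_url or looks_like_cli

records = [
    {"source": "https://example.com/changelog", "retrieved": "2025-06-01"},
    {"source": "training data recall"},
]
print([is_admissible(r) for r in records])  # [True, False]
```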
Parse the input to extract discrete, falsifiable claims:
```mermaid
flowchart TD
    Start([Parse input]) --> Q{Input type?}
    Q -->|Backlog item title| ReadBacklog[Run backlog list --format json, find matching item]
    Q -->|Plugin path| ScanFiles[Scan SKILL.md and references for factual claims]
    Q -->|--all-unverified| FindAll[Run backlog list --format json, filter UNVERIFIED items]
    ReadBacklog --> Extract[Extract falsifiable claims]
    ScanFiles --> Extract
    FindAll --> Extract
    Extract --> Classify[Classify each claim]
    Classify --> Spawn[Spawn verification agents]
```

For each claim, determine: the primary source to check against, the verification method (WebFetch, WebSearch, a CLI command, or the gh API), and the falsification criteria that would disprove it.
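As a sketch, classification might route each claim to one of the verification methods named in the agent template; the function name and keyword heuristics here are illustrative assumptions only:

```python
# Illustrative only: pick a verification method per claim. Real routing
# would be done by the orchestrating agent, not a keyword match.
def pick_method(claim: str) -> str:
    text = claim.lower()
    if "--" in claim or "cli" in text:
        return "CLI command"   # e.g. `npx <tool> --help`
    if "repo" in text or "commit" in text:
        return "gh API"
    if "http" in text or "docs" in text:
        return "WebFetch"
    return "WebSearch"

print(pick_method("tool exposes a --json flag"))  # CLI command
```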
Spawn @fact-checker agents in parallel waves of 5.
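The wave policy can be sketched as simple batching (the function name is an assumption; agents within a wave run in parallel, waves run sequentially):

```python
# Group claims into waves of at most 5, matching the flow below:
# 1-5 claims -> one wave, 6-10 -> two waves, 11+ -> sequential waves of 5.
def into_waves(claims, size=5):
    return [claims[i:i + size] for i in range(0, len(claims), size)]

batches = into_waves([f"claim-{n}" for n in range(12)])
print([len(b) for b in batches])  # [5, 5, 2]
```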
Each agent receives:

```
CLAIM: {exact claim text}
SOURCE_FILE: {file containing the claim, with line numbers}
PRIMARY_SOURCE: {URL or command to check against}
VERIFICATION_METHOD: {WebFetch | WebSearch | CLI command | gh API}
FALSIFICATION_CRITERIA: {what would disprove this}
```

```mermaid
flowchart TD
    Start([Claims extracted]) --> Count{How many claims?}
    Count -->|1-5| Wave1[Wave 1 — all in parallel]
    Count -->|6-10| Split2[Wave 1 — first 5, Wave 2 — remaining]
    Count -->|11+| SplitN[Waves of 5, sequential]
    Wave1 --> Collect[Collect verdicts]
    Split2 --> SeqWaves[Execute waves sequentially]
    SplitN --> SeqWaves
    SeqWaves --> Collect
    Collect --> Report[Generate report]
```

Each agent returns a structured verdict:
```
CLAIM: {the claim being checked}
VERDICT: VERIFIED | REFUTED | INCONCLUSIVE
EVIDENCE:
- Source: {URL or command}
- Retrieved: {date}
- Content: {relevant excerpt from primary source}
EXPLANATION: {how the evidence supports the verdict}
CITATION: {formatted citation for backlog/docs update}
```

Each agent MUST apply CoVe (Chain-of-Verification) before finalizing: draft a verdict, generate independent verification questions, answer them with fresh lookups, and only then commit to the verdict. This prevents the agent from confirming its own bias in a single lookup.
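Schematically, that verification loop looks like the following; `lookup` is a stand-in for a real WebFetch/WebSearch call, not an actual API:

```python
# Chain-of-Verification, schematically: draft -> verification questions ->
# independent answers -> revised verdict.
def cove(claim, lookup):
    draft = lookup(claim)                      # first-pass verdict
    questions = [f"What primary source states: {claim}?",
                 f"What would disprove: {claim}?"]
    answers = [lookup(q) for q in questions]   # fresh, independent lookups
    # Finalize only if the independent answers agree with the draft.
    return draft if all(a == draft for a in answers) else "INCONCLUSIVE"

print(cove("x", lambda q: "VERIFIED"))  # VERIFIED
```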
After all waves complete:
```markdown
# Fact Check Report

**Date**: {YYYY-MM-DD}
**Scope**: {backlog item title | plugin path | all-unverified}
**Claims checked**: {N}

## Summary

| Verdict | Count |
|---------|-------|
| VERIFIED | {N} |
| REFUTED | {N} |
| INCONCLUSIVE | {N} |

## Verdicts

### Claim 1: {claim text}

**Verdict**: {VERIFIED|REFUTED|INCONCLUSIVE}
**Source**: {primary source URL}
**Evidence**: {excerpt}
**Citation**: {formatted citation}

### Claim 2: ...

## Backlog Updates

{For each REFUTED or VERIFIED claim, the specific edit to make to the per-item file}
```

Then lint, commit, and push:

```shell
uv run prek run --files .claude/backlog/
git add .claude/backlog/ && git commit -m "docs(backlog): fact-check {N} claims ({date})"
git push -u origin HEAD
```

For INCONCLUSIVE claims, add a note to the backlog item describing what additional verification is needed.
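The summary table can be produced by a straightforward tally; this is a sketch, and the verdict list here is sample data:

```python
from collections import Counter

# Tally agent verdicts into the report's summary table.
verdicts = ["VERIFIED", "REFUTED", "VERIFIED", "INCONCLUSIVE"]
counts = Counter(verdicts)
rows = ["| Verdict | Count |", "|---------|-------|"]
for v in ("VERIFIED", "REFUTED", "INCONCLUSIVE"):
    rows.append(f"| {v} | {counts.get(v, 0)} |")
print("\n".join(rows))
```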
Version: `8ea4dbe`