
fact-check

Verify claims in backlog items, skill documentation, or plugin content against primary sources using web lookups. Spawns parallel verification agents that MUST use WebFetch/WebSearch/gh — training data recall is explicitly rejected as evidence. Produces VERIFIED/REFUTED/INCONCLUSIVE verdicts with citations. Triggers on "fact check", "verify claims", "check against primary sources", or when backlog items are marked UNVERIFIED.
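The "parallel verification agents" pattern the description names can be sketched roughly as follows. This is a minimal illustration, not the skill's actual mechanism: `check_claim`, `run_wave`, and the batch size are hypothetical stand-ins, and the real skill spawns sub-agents that must gather evidence via WebFetch/WebSearch/gh rather than call a local function.

```python
# Hedged sketch (hypothetical names): verifying claims in parallel "waves".
from concurrent.futures import ThreadPoolExecutor

def check_claim(claim: str) -> str:
    # Placeholder: a real verification agent would fetch primary sources
    # here (WebFetch/WebSearch/gh); training-data recall is not evidence.
    return f"INCONCLUSIVE: no source fetched for {claim!r}"

def run_wave(claims: list[str], batch_size: int = 5) -> list[str]:
    """Process claims in parallel batches so no wave exceeds batch_size."""
    results: list[str] = []
    for i in range(0, len(claims), batch_size):
        wave = claims[i:i + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            results.extend(pool.map(check_claim, wave))
    return results
```

Batching keeps the number of concurrent agents bounded, which matches the wave-execution idea described in the Implementation review below it on this page.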

Overall score: 87

Quality: 85%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Advisory
Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly communicates what the skill does (verify claims via web lookups producing verdicts with citations), how it works (parallel verification agents using specific tools), and when to use it (explicit trigger phrases and contextual triggers). It uses third-person voice throughout and provides distinctive, concrete details that would make it easy to select from a large pool of skills.

Specificity (3/3): Lists multiple specific, concrete actions: verify claims against primary sources, spawn parallel verification agents, use WebFetch/WebSearch/gh tools, produce VERIFIED/REFUTED/INCONCLUSIVE verdicts with citations. Very detailed about the mechanism and outputs.

Completeness (3/3): Clearly answers both 'what' (verify claims using web lookups, spawn parallel agents, produce verdicts with citations) and 'when' (explicit triggers: 'fact check', 'verify claims', 'check against primary sources', or when items are marked UNVERIFIED).

Trigger Term Quality (3/3): Includes natural trigger terms users would say: 'fact check', 'verify claims', 'check against primary sources', and the contextual trigger 'UNVERIFIED'. These are terms users would naturally use when needing this capability.

Distinctiveness / Conflict Risk (3/3): Highly distinctive niche: fact-checking/verification with explicit rejection of training-data recall, a specific output format (VERIFIED/REFUTED/INCONCLUSIVE), and a clear domain (backlog items, skill documentation, plugin content). Unlikely to conflict with other skills.

Total: 12 / 12 (Passed)

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with excellent workflow clarity and progressive disclosure. Its main weakness is in actionability — while the process is thoroughly described, the actual mechanism for spawning verification agents and performing web lookups lacks concrete, executable examples. The conciseness could be improved by trimming the mermaid diagrams and tightening some of the explanatory sections.

Suggestions

Add a concrete example showing an actual WebFetch or WebSearch tool call with real parameters, demonstrating how a verification agent would check a specific claim end-to-end.

Replace or supplement the mermaid flowcharts with concise numbered steps — the flowcharts consume significant tokens without adding clarity beyond what sequential text provides.

Show a complete worked example: a sample claim → the exact tool invocations used → the evidence retrieved → the final verdict, so Claude can pattern-match on real execution.
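As a rough illustration of the kind of worked example these suggestions ask for, the sketch below maps fetched evidence to a verdict with citations. All names here (`Evidence`, `Verdict`, `judge`, the field names) are hypothetical, not taken from the skill's actual templates, and a real verification agent would obtain the `url` and `quote` values via live WebFetch/WebSearch/gh calls rather than hard-coded arguments.

```python
# Hedged sketch (hypothetical names): turning fetched evidence into a
# VERIFIED/REFUTED/INCONCLUSIVE verdict with citations.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    url: str        # primary source that was actually fetched
    quote: str      # exact excerpt from that source
    supports: bool  # does the excerpt support the claim?

@dataclass
class Verdict:
    claim: str
    status: str                                  # VERIFIED | REFUTED | INCONCLUSIVE
    citations: list[str] = field(default_factory=list)

def judge(claim: str, evidence: list[Evidence]) -> Verdict:
    """No fetched evidence yields INCONCLUSIVE, never a recall-based guess."""
    if not evidence:
        return Verdict(claim, "INCONCLUSIVE")
    status = "VERIFIED" if all(e.supports for e in evidence) else "REFUTED"
    return Verdict(claim, status, [e.url for e in evidence])
```

The design choice worth noting is the empty-evidence branch: it encodes the skill's rule that training-data recall is rejected as evidence, so a claim with no fetched sources can never be marked VERIFIED.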

Conciseness (2/3): The skill is mostly efficient and avoids explaining concepts Claude already knows, but the mermaid diagrams add visual overhead without much value in a text-consumed context, and some sections (like the 'When NOT to Use' section and the wave execution flowchart) could be tightened. The evidence rules section is well-structured but slightly verbose, with an explicit 'NOT valid evidence' list that largely restates common sense for Claude.

Actionability (2/3): The skill provides structured templates (verdict format, agent prompt, report format) and specific commands for post-actions, but the core verification process relies on spawning '@fact-checker' agents without defining how to actually invoke them (no concrete tool calls, no executable code). The claim extraction and verification steps are described procedurally but lack copy-paste-ready commands for the actual web lookups.

Workflow Clarity (3/3): The multi-step process is clearly sequenced: claim extraction → classification → wave spawning → CoVe verification → verdict collection → report generation → post-actions. The Chain of Verification (CoVe) requirement serves as an explicit validation/feedback loop, and the wave execution pattern provides clear batching logic. Post-actions include linting and committing with specific commands.

Progressive Disclosure (3/3): The skill is well-organized, with clear sections, appropriate use of headers, and references to related skills at the bottom that are one level deep and clearly signaled. The content is appropriately scoped for a single SKILL.md file without being monolithic, and the references section provides clear navigation to related capabilities.

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed.

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 10 / 11 (Passed)
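The frontmatter warning above could likely be resolved along these lines, assuming the registry follows the common SKILL.md convention of a known-key allowlist with a `metadata` escape hatch. The keys shown are illustrative, not taken from this skill's actual frontmatter:

```yaml
---
name: fact-check
description: Verify claims in backlog items, skill documentation, or plugin content against primary sources using web lookups.
# Hypothetical unknown keys such as `author` or `tags` would trigger the
# warning at the top level; nesting them under `metadata` keeps the
# frontmatter within the validated key set.
metadata:
  author: Jamie-BitFlight
  tags: [verification, fact-checking]
---
```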

Repository: Jamie-BitFlight/claude_skills (Reviewed)
