Systematically deconstruct written content into verifiable claims, validate each using search/documentation, and facilitate informed discussion through structured interviewing.
Help writers produce high-integrity content by identifying and validating every factual claim, opinion, and assertion in an article.
**Outputs:** `.agent/docs/verification-report-{timestamp}.md`, `.agent/focus.md`

## Phase 1: Claim Extraction

Break content into granular, verifiable units for systematic analysis.
| Category | Description | Example |
|---|---|---|
| FACTUAL | Verifiable against objective evidence | "Python was released in 1991" |
| EMPIRICAL | Claims about measurable phenomena, scientific findings | "Studies show X increases Y by 30%" |
| OPINION | Author's interpretation or stance | "React is the best framework" |
| EXPERT_CLAIM | Requires domain expertise to verify | "This architecture prevents race conditions" |
| LOGICAL | Causal claims, conditional statements | "If X then Y because Z" |
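The categories and the per-claim record can be modeled as a small data structure. A minimal sketch in Python (the field names mirror the record template; all names here are illustrative, not part of the skill's interface):

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    FACTUAL = "factual"            # verifiable against objective evidence
    EMPIRICAL = "empirical"        # measurable phenomena, scientific findings
    OPINION = "opinion"            # author's interpretation or stance
    EXPERT_CLAIM = "expert_claim"  # requires domain expertise to verify
    LOGICAL = "logical"            # causal or conditional statements

@dataclass
class Claim:
    number: int
    original_text: str   # exact quote from the article
    section: str         # paragraph/heading reference
    category: Category
    atomic: str          # simplified, verifiable form

claim = Claim(1, "Python was released in 1991", "Intro, para 2",
              Category.FACTUAL, "Python's first release was in 1991")
```

Keeping the atomic statement separate from the original quote lets Phase 2 verify the simplified form while the report still cites the author's exact wording.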
For each claim, record:
### Claim #{N}
**Original text:** "[exact quote from article]"
**Section:** [paragraph/heading reference]
**Category:** [FACTUAL/EMPIRICAL/OPINION/EXPERT_CLAIM/LOGICAL]
**Atomic statement:** [simplified, verifiable form]

## Phase 2: Evidence Search & Verification

Search for evidence to support, contradict, or contextualize each extracted claim.
For each claim:
Search for evidence
Apply logical analysis
Assign verification status
| Status | Meaning |
|---|---|
| ✅ VERIFIED | Evidence strongly supports the claim |
| ⚠️ PARTIALLY_VERIFIED | Mixed evidence, context-dependent, or nuanced |
| ❌ CONTRADICTED | Evidence contradicts the claim |
| ❓ UNVERIFIABLE | Cannot find sufficient evidence either way |
| ⏳ OUTDATED | Information was once accurate but circumstances changed |
| 🚩 SUSPICIOUS | Extraordinary claim, low-credibility source, or logical inconsistency |
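The statuses map naturally onto an enum, and the executive-summary percentages used later in the report can be tallied from them. A hypothetical sketch (function and names are mine, not part of the skill):

```python
from collections import Counter
from enum import Enum

class Status(Enum):
    VERIFIED = "✅"
    PARTIALLY_VERIFIED = "⚠️"
    CONTRADICTED = "❌"
    UNVERIFIABLE = "❓"
    OUTDATED = "⏳"
    SUSPICIOUS = "🚩"

def summarize(statuses: list[Status]) -> dict[Status, float]:
    """Return each status's share of the total, as a percentage."""
    counts = Counter(statuses)
    total = len(statuses) or 1  # avoid division by zero on an empty report
    return {s: 100 * counts[s] / total for s in Status}

shares = summarize([Status.VERIFIED, Status.VERIFIED,
                    Status.CONTRADICTED, Status.OUTDATED])
```

Every status gets a row in the result, including zero-count ones, so the summary table in the report always has the same shape.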
For each claim, record:
- **Opinion vs Fact:**
- **Causal Claims:**
- **Outdated Information:**
- **Conflicting Evidence:**
## Phase 3: Report Generation

Produce a structured markdown report documenting all findings.
# Article Verification Report
*Article: [Title or description]*
*Analyzed: [Date]*
*Agent: agent-ops-article-verification*
---
## Executive Summary
| Metric | Count | Percentage |
|--------|-------|------------|
| Total claims extracted | {N} | 100% |
| ✅ Verified | {N} | {%} |
| ⚠️ Partially verified | {N} | {%} |
| ❌ Contradicted | {N} | {%} |
| ❓ Unverifiable | {N} | {%} |
| ⏳ Outdated | {N} | {%} |
| 🚩 Suspicious | {N} | {%} |
**Overall assessment:** [Brief credibility summary]
---
## Claim-by-Claim Analysis
### Claim #1: "[Exact text from article]"
**Category:** [FACTUAL/EMPIRICAL/OPINION/EXPERT_CLAIM/LOGICAL]
**Extracted from:** [Section/paragraph reference]
**Atomic statement:**
> [Simplified, verifiable form]
**Status:** [✅/⚠️/❌/❓/⏳/🚩] [STATUS_NAME]
**Findings:**
- Evidence supporting: [specific data points, citations]
- Evidence contradicting: [specific data points, citations]
- Source evaluation: [credibility assessment]
- Confidence level: [High/Medium/Low]
**References:**
1. [Source title](URL) — accessed [date]
2. [Source title](URL) — accessed [date]
**Author Notes:** [Space for clarifications added during interview]
---
### Claim #2: ...
[Repeat structure for each claim]
---
## Verification Statistics
- **Claim complexity distribution:** [simple/moderate/complex counts]
- **Primary evidence types used:** [news/academic/official records/other]
- **Major evidence gaps:** [list areas where verification was impossible]
- **Topics requiring expert review:** [if any]
---
## Interview Notes
[Added during Phase 4 — records of author clarifications and updates]
---
## Appendix: Methodology
This report was generated using the agent-ops-article-verification skill, which:
1. Extracts atomic claims from source text
2. Categorizes claims by verifiability type
3. Searches for supporting/contradicting evidence
4. Documents findings with source citations
5. Facilitates author interview for clarifications

Save to: `.agent/docs/verification-report-{YYYYMMDD-HHMMSS}.md`
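The timestamped save path can be built with the standard library. A minimal sketch, assuming the directory and filename pattern from this skill (the helper names are illustrative):

```python
from datetime import datetime
from pathlib import Path

def report_path(root: str = ".agent/docs") -> Path:
    """Build the verification-report path with a YYYYMMDD-HHMMSS timestamp."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return Path(root) / f"verification-report-{stamp}.md"

def save_report(markdown: str) -> Path:
    """Write the rendered report, creating .agent/docs if it does not exist."""
    path = report_path()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(markdown, encoding="utf-8")
    return path
```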
## Phase 4: Author Interview

Facilitate informed discussion with the author about findings, allowing clarification, additional evidence, and consensus on revisions.
Use the agent-ops-interview skill for structured, one-question-at-a-time dialogue.
"I've extracted and verified {N} claims from your content. Let's go through them systematically. For each claim, I'll show what I found, then ask about your sources or reasoning. You can clarify, provide additional context, or revise claims. Ready?"
For each claim with status ❌ CONTRADICTED, 🚩 SUSPICIOUS, or ⚠️ PARTIALLY_VERIFIED:
Present the finding:
**Claim #{N}:** "[claim text]"
**Status:** [status with explanation]
**Evidence found:** [brief summary]

Ask ONE clarifying question (examples):
Record response in the report's "Author Notes" section
Update status if warranted based on new information
Proceed to next claim only after current one is resolved
| Author says | Action |
|---|---|
| Provides new source | Verify source, update findings, potentially upgrade status |
| Explains context | Add context to Author Notes, reassess if relevant |
| Agrees with contradiction | Mark for revision, note in report |
| Defends claim | Note defense, keep original status with author's reasoning |
| "Skip" | Mark as "Author declined to address", move on |
| "Stop" | End interview, save progress, note incomplete |
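The response-handling table above amounts to a small dispatch. A sketch of that logic (the keyword heuristics and action strings are assumptions for illustration, not the skill's actual matching rules):

```python
def handle_response(author_says: str) -> str:
    """Map an author's reply to the interviewer's next action, per the table."""
    reply = author_says.strip().lower()
    if reply == "skip":
        return "mark: author declined to address; move on"
    if reply == "stop":
        return "end interview; save progress; note incomplete"
    if "source:" in reply:
        return "verify source; update findings; potentially upgrade status"
    if reply.startswith("agree"):
        return "mark for revision; note in report"
    # Default: treat the reply as added context or a defense of the claim.
    return "record in Author Notes; reassess status if warranted"

next_action = handle_response("Skip")
```

The explicit "skip"/"stop" branches matter most: they let the author bail out of any single claim, or the whole interview, without losing the progress already recorded.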
After all claims reviewed:
Present summary of changes needed:
## Interview Summary
Claims requiring revision: {N}
- Claim #3: [brief issue]
- Claim #7: [brief issue]
Claims clarified (no revision needed): {N}
Remaining unresolved: {N}

Ask for confirmation:
"Does this summary accurately capture our discussion? Any final clarifications?"
Update report with all interview notes
Save final report
Reports are stored under `.agent/docs/`.

Use available MCP tools for evidence gathering:
- `mcp_context7_query-docs` — for library/framework documentation
- `mcp_microsoftdocs_microsoft_docs_search` — for Microsoft/Azure content
- `fetch_webpage` — for general web content
- `semantic_search` — for workspace context

Invoke the agent-ops-interview skill for Phase 4 structured dialogue.
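Tool selection per claim can be expressed as a simple routing rule. A sketch using only the tool names listed above; the keyword heuristics are my assumptions, and a real implementation would route on richer signals than substring matches:

```python
def pick_tool(claim_text: str) -> str:
    """Choose an evidence-gathering tool name for a claim (heuristic sketch)."""
    text = claim_text.lower()
    if any(k in text for k in ("azure", "microsoft", ".net")):
        return "mcp_microsoftdocs_microsoft_docs_search"
    if any(k in text for k in ("library", "framework", "api")):
        return "mcp_context7_query-docs"
    if any(k in text for k in ("this repo", "workspace", "codebase")):
        return "semantic_search"
    return "fetch_webpage"  # general web content is the fallback
```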
Update .agent/focus.md after each phase:
## Just did
- Article verification Phase {N} for "{content title}"
- Claims extracted: {N}
- Status: {summary}
## Doing now
- Phase {N+1}: {description}
## Next
- {remaining phases}

## Example

User input:
"Verify this article: Python is the most popular programming language in 2024, with over 50% market share. It was created by Guido van Rossum in 1989 and is used by 90% of data scientists."
Phase 1 output:
### Claim #1
**Original:** "Python is the most popular programming language in 2024"
**Category:** EMPIRICAL
**Atomic:** Python ranks #1 in programming language popularity metrics for 2024
### Claim #2
**Original:** "with over 50% market share"
**Category:** EMPIRICAL
**Atomic:** Python has >50% market share among programming languages
### Claim #3
**Original:** "created by Guido van Rossum in 1989"
**Category:** FACTUAL
**Atomic:** Guido van Rossum created Python in 1989
### Claim #4
**Original:** "used by 90% of data scientists"
**Category:** EMPIRICAL
**Atomic:** 90% of data scientists use Python

Phase 2 would then verify each claim with evidence search.