Zero-context verification that every number, comparison, and scope claim in the paper matches raw result files. Uses a fresh cross-model reviewer with NO prior context to prevent confirmation bias. Use when user says "审查论文数据", "check paper claims", "verify numbers", "论文数字核对", or before submission to ensure paper-to-evidence fidelity.
Verify that every claim in the paper matches raw evidence for: $ARGUMENTS
The executor writes experiments AND writes the paper. It "knows" what the results should be. This creates confirmation bias: inflated decimals, best-seed numbers presented as typical, and scope language that outruns the evidence all read as "correct" to the model that produced them.
A fresh reviewer with zero prior context catches these because it has no expectations — it just compares paper text vs raw files.
| Skill | Question it answers |
|---|---|
| /experiment-audit | Is the experiment code honest? (fake GT, normalization fraud) |
| /result-to-claim | Does the data scientifically support this claim? |
| /paper-claim-audit | Does the paper report the data truthfully and precisely? |
Zero-context, fresh reviewer. The auditor receives ONLY:

- paper source files (.tex)
- raw result and config files (.json / .csv / .yaml)

It does NOT receive:

- executor-written summaries (EXPERIMENT_LOG, NARRATIVE_REPORT, AUTO_REVIEW, ...)
- prior audit outputs
- any conversation history from the executor

This is stricter than reviewer-independence — it's a zero-context evidence audit.
Locate paper and result files WITHOUT reading or interpreting them.
Paper files (claims) — paths shown relative to the shell's working directory so you can find them with `ls`; when writing them into `audited_input_hashes`, use paths relative to the paper dir (no `paper/` prefix) per the "Submission Artifact Emission" section below:

```
paper/main.tex # → hash key: main.tex
paper/sections/*.tex # → hash key: sections/*.tex
paper/tables/*.tex (if separate)  # → hash key: tables/*.tex
```

Result files (evidence):

```
results/*.json, results/*.jsonl, results/*.csv, results/*.tsv
outputs/*.json, outputs/*.csv
wandb-summary.json (if exists)
**/metrics.json, **/eval_results.json
**/config.yaml, **/args.json      # experiment configs
```

Exclude (no summaries, no interpretations):

```
EXPERIMENT_LOG.md, EXPERIMENT_TRACKER.md, AUTO_REVIEW*.md
NARRATIVE_REPORT.md, PAPER_PLAN.md, findings.md
Any .md file that is an executor-written summary
```
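A minimal discovery sketch, assuming the default `paper/` and `results/` layout above; the glob patterns and the `EXCLUDED_MD` set are illustrative, not a fixed contract:

```python
from pathlib import Path

# Executor-written summaries that must never reach the fresh reviewer.
EXCLUDED_MD = {"EXPERIMENT_LOG.md", "EXPERIMENT_TRACKER.md",
               "NARRATIVE_REPORT.md", "PAPER_PLAN.md", "findings.md"}

EVIDENCE_SUFFIXES = {".json", ".jsonl", ".csv", ".tsv", ".yaml"}

def discover_inputs(root: Path) -> tuple[list[Path], list[Path]]:
    """Collect paper sources and raw result files without opening any of them."""
    paper = sorted(root.glob("paper/**/*.tex"))
    evidence = set()
    for pattern in ("results/*", "outputs/*", "**/wandb-summary.json",
                    "**/metrics.json", "**/eval_results.json",
                    "**/config.yaml", "**/args.json"):
        for p in root.glob(pattern):
            if p.suffix in EVIDENCE_SUFFIXES:
                evidence.add(p)
    # Belt and braces: no .md summary ever counts as evidence.
    evidence = {p for p in evidence
                if p.suffix != ".md"
                and p.name not in EXCLUDED_MD
                and not p.name.startswith("AUTO_REVIEW")}
    return paper, sorted(evidence)
```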
CRITICAL: Use mcp__codex__codex (a new thread), NEVER mcp__codex__codex-reply. Every run must be a fresh context. Invoke as:

```yaml
mcp__codex__codex:
  model: gpt-5.4
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
```

The prompt, verbatim:
You are a paper-to-evidence auditor. You have ZERO prior context about
this research. You will receive only paper source files and raw result
files. Your job is to verify that every number in the paper exactly
matches the raw evidence.
Paper files to read:
[list .tex file paths]
Result files to read:
[list .json/.csv/.yaml file paths]
## Audit Protocol
### A. Extract Every Quantitative Claim
For each number, percentage, comparison, or scope statement in the paper:
- Location (section, table, caption, or inline text)
- Exact claim text
- The number or comparison being made
### B. Trace Each Claim to Evidence
For each extracted claim, find the supporting raw data:
- Which result file contains this number?
- What is the EXACT value in that file?
- Match status: exact_match / rounding_ok / mismatch
### C. Check These Specific Failure Modes
1. **Number inflation**: Paper says 85.3%, raw file says 84.7%
Rule: only standard rounding to displayed precision is allowed
2. **Best-seed cherry-pick**: Paper says "achieves 90.2%" but
that's the best of 5 seeds; mean is 87.1%
Rule: check if paper specifies "average" / "best" / "median"
3. **Config mismatch**: Paper compares Method A vs Baseline B,
but they used different hyperparameters / datasets / splits
Rule: verify config files show same settings for compared methods
4. **Aggregation mismatch**: Paper says "average over 5 seeds"
but result files show only 3 runs
Rule: count actual runs vs claimed count
5. **Delta error**: Paper says "improves by 15%" but
actual delta is (85.3 - 73.1) / 73.1 = 16.7%
Rule: verify arithmetic of all relative improvements
6. **Caption-table mismatch**: Figure caption describes
something different from what the figure/table actually shows
Rule: cross-check every caption against its content
7. **Scope overclaim**: Paper says "consistently outperforms"
but only tested on 2 datasets
Rule: check if language matches actual evaluation scope
## Output Format (per claim)
For each claim, report:
- claim_id: sequential number
- location: section/table/figure
- paper_text: exact quote from paper
- paper_value: the number claimed
- evidence_file: which raw file
- evidence_value: the actual number
- status: exact_match | rounding_ok | ambiguous_mapping |
missing_evidence | config_mismatch | aggregation_mismatch |
number_mismatch | scope_overclaim | unsupported_claim
- details: explanation if not exact_match
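Example record (illustrative only; all values hypothetical):

```json
{
  "claim_id": 7,
  "location": "Table 2, row 3",
  "paper_text": "our method reaches 85.3% accuracy",
  "paper_value": "85.3%",
  "evidence_file": "results/eval_results.json",
  "evidence_value": "85.28%",
  "status": "rounding_ok",
  "details": "85.28 rounds to 85.3 at the displayed precision"
}
```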
Overall verdict: PASS | WARN | FAIL
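Before parsing the reviewer's response, the parent can spot-check the purely arithmetic failure modes (1 and 5) on its own. A minimal sketch, separate from the reviewer contract:

```python
def rounding_ok(paper: float, raw: float, decimals: int = 1) -> bool:
    """Failure mode 1: only standard rounding to displayed precision is allowed."""
    return round(raw, decimals) == round(paper, decimals)

def relative_delta_pct(new: float, baseline: float) -> float:
    """Failure mode 5: recompute every claimed relative improvement."""
    return (new - baseline) / baseline * 100.0

assert rounding_ok(85.3, 85.28)                          # legitimate rounding
assert not rounding_ok(85.3, 84.7)                       # inflation, not rounding
assert round(relative_delta_pct(85.3, 73.1), 1) == 16.7  # the claimed "15%" is wrong
```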
Parse the reviewer's response and write `PAPER_CLAIM_AUDIT.md`:

```markdown
# Paper Claim Audit Report
**Date**: [today]
**Auditor**: GPT-5.4 xhigh (fresh zero-context thread)
**Paper**: [paper title from tex]
## Overall Verdict: [PASS | WARN | FAIL]
## Claims Verified: [N total]
- exact_match: [count]
- rounding_ok: [count]
- ambiguous_mapping: [count]
- missing_evidence: [count]
- mismatch: [count]
## Issues Found
### [FAIL/WARN] Claim #N: [description]
- **Location**: Section X / Table Y / Figure Z
- **Paper says**: "..."
- **Evidence shows**: ...
- **Status**: [status]
- **Fix**: [specific correction needed]
## All Claims (detailed)
| # | Location | Paper Value | Evidence Value | Status |
|---|----------|-------------|---------------|--------|
| 1 | Table 2 | 85.3% | 85.28% | rounding_ok |
| 2 | Abstract | "15% improvement" | 12.8% | number_mismatch |
| ... |
```

Also write `PAPER_CLAIM_AUDIT.json` for machine consumption.
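A sketch of the tally that feeds both reports, assuming the per-claim records use the status values defined in the reviewer prompt; the `MATERIAL` set and the thresholds are assumptions meant to mirror the decision table later in this doc:

```python
from collections import Counter

# Assumed set of statuses that count as material; tune per the decision table.
MATERIAL = {"number_mismatch", "config_mismatch", "aggregation_mismatch",
            "scope_overclaim", "unsupported_claim", "missing_evidence"}

def summarize(claims: list[dict]) -> tuple[Counter, str]:
    """Count statuses and derive the overall verdict."""
    counts = Counter(c["status"] for c in claims)
    if any(counts[s] for s in MATERIAL):
        return counts, "FAIL"
    if counts["rounding_ok"] or counts["ambiguous_mapping"]:
        return counts, "WARN"
    return counts, "PASS"
```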
```
📋 Paper Claim Audit Complete
   Claims verified: 24
     exact_match: 18
     rounding_ok: 3
     ambiguous: 1
     ⚠️ mismatch: 2
   Overall: ❌ FAIL
   See PAPER_CLAIM_AUDIT.md for details.
```

Integration points:

- /paper-write — first check before improvement loop
- /auto-paper-improvement-loop — recheck if improvement loop changed numbers

/auto-paper-improvement-loop (if it exists) then consumes the artifact:

```
if PAPER_CLAIM_AUDIT.json exists:
    read mismatched claims
    fix them as priority items in the improvement round
```
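A minimal consumption sketch for that loop; key names follow the JSON schema in the Submission Artifact Emission section below:

```python
import json
from pathlib import Path

audit = Path("paper/PAPER_CLAIM_AUDIT.json")
if audit.exists():
    report = json.loads(audit.read_text())
    if report["verdict"] in {"WARN", "FAIL"}:
        # Each mismatch record carries the per-claim fields from the reviewer output.
        for issue in report["details"]["mismatches"]:
            print(f"[{issue['status']}] {issue['location']}: {issue['details']}")
```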
PASS → continue normallyWARN → print warning, continue, flag draft as "check numbers before submission"FAIL → print alert, continue, but do NOT mark as submission-readycodex-reply. Never carry context.After each mcp__codex__codex or mcp__codex__codex-reply reviewer call, save the trace following shared-references/review-tracing.md. Use tools/save_trace.sh or write files directly to .aris/traces/<skill>/<date>_run<NN>/. Respect the --- trace: parameter (default: full).
## Submission Artifact Emission

This skill always writes paper/PAPER_CLAIM_AUDIT.json, regardless of
caller or detector outcome. A detector-negative run (paper has no numeric
claims) emits verdict NOT_APPLICABLE; a paper-with-numeric-claims-but-no-
raw-results run emits BLOCKED. Silent skip is forbidden — paper-writing
Phase 6 and tools/verify_paper_audits.sh both rely on this artifact
existing at a predictable path.
The artifact conforms to the schema in shared-references/assurance-contract.md:

```
{
"audit_skill": "paper-claim-audit",
"verdict": "PASS | WARN | FAIL | NOT_APPLICABLE | BLOCKED | ERROR",
"reason_code": "all_numbers_match | rounding_drift | missing_raw_results | ...",
"summary": "One-line human-readable verdict summary.",
"audited_input_hashes": {
"main.tex": "sha256:...",
"sections/5.evidence.tex": "sha256:...",
"/abs/path/to/results/run_2026_04_19.json": "sha256:..."
},
"trace_path": ".aris/traces/paper-claim-audit/<date>_run<NN>/",
"thread_id": "<codex mcp thread id>",
"reviewer_model": "gpt-5.4",
"reviewer_reasoning": "xhigh",
"generated_at": "<UTC ISO-8601>",
"details": {
"total_claims": <int>,
"mismatches": [ ... per-claim issue records ... ],
"result_files": [ ... raw files consulted ... ]
}
}
```

### audited_input_hashes scope

Hash the declared input set passed into this audit invocation — i.e. the
exact .tex files and raw result / config files this run read — not a
repo-wide union and not the reviewer's self-reported subset. If a caller
passed only main.tex + a single result file, hash those two files and no
others. The external verifier rehashes these entries; any mismatch flags
STALE.
Path convention (must match what tools/verify_paper_audits.sh
expects): keys are paths relative to the paper directory (the arg
passed to the verifier) for in-paper files — so main.tex, not
paper/main.tex — and absolute paths for out-of-paper files such as
external results/ dirs. The verifier resolves relative entries via
os.path.join(paper_dir, key); prefixing with paper/ produces
paper/paper/main.tex and false-fails as STALE.
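A hashing sketch under this convention; `paper_dir`, `tex_files`, and `result_files` are whatever this invocation was actually given, nothing repo-wide:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return "sha256:" + hashlib.sha256(path.read_bytes()).hexdigest()

def audited_input_hashes(paper_dir: Path, tex_files: list[Path],
                         result_files: list[Path]) -> dict[str, str]:
    hashes = {}
    for tex in tex_files:
        # In-paper files: key relative to paper_dir (main.tex, never paper/main.tex).
        hashes[str(tex.relative_to(paper_dir))] = sha256_of(tex)
    for res in result_files:
        # Out-of-paper files (external results/ dirs): absolute-path keys.
        hashes[str(res.resolve())] = sha256_of(res)
    return hashes
```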
| Input state | Verdict | reason_code example |
|---|---|---|
| No numeric claims detected in paper | NOT_APPLICABLE | no_numeric_claims |
| Numeric claims detected, no raw result files found | BLOCKED | no_raw_evidence |
| All claims reconcile to raw data | PASS | all_numbers_match |
| Minor rounding drift only, no material mismatch | WARN | rounding_drift |
| Any material mismatch (wrong number, config mismatch) | FAIL | claim_mismatch |
| Reviewer invocation failed (network / malformed) | ERROR | reviewer_error |
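The same table as code; the detector booleans and counts are assumptions about what the earlier steps produce (ERROR is raised by the invocation wrapper, not decided here):

```python
def decide(has_numeric_claims: bool, has_raw_results: bool,
           material_mismatches: int, rounding_drift_only: bool) -> tuple[str, str]:
    """Map input state to (verdict, reason_code) per the decision table."""
    if not has_numeric_claims:
        return "NOT_APPLICABLE", "no_numeric_claims"
    if not has_raw_results:
        return "BLOCKED", "no_raw_evidence"
    if material_mismatches > 0:
        return "FAIL", "claim_mismatch"
    if rounding_drift_only:
        return "WARN", "rounding_drift"
    return "PASS", "all_numbers_match"
```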
Every invocation uses a fresh mcp__codex__codex thread. Never
codex-reply. Do not accept prior audit outputs (PROOF_AUDIT, CITATION_AUDIT,
EXPERIMENT_LOG, AUTO_REVIEW summaries) as input to this audit — the fresh
thread preserves reviewer independence per
shared-references/reviewer-independence.md.
paper/PAPER_CLAIM_AUDIT.md is written alongside the JSON for readers.
The JSON is authoritative for tools/verify_paper_audits.sh; the Markdown
is for humans. The parent skill (paper-writing Phase 6) plus the verifier
decide whether the verdict blocks finalization — this skill itself never
blocks; it only emits.