citation-audit

Zero-context verification that every bibliographic entry in the paper is real, correctly attributed, and used in a context the cited paper actually supports. Uses a fresh cross-model reviewer with web/DBLP/arXiv lookup to catch hallucinated authors, wrong years, fabricated venues, version mismatches, and wrong-context citations (the cite is present but the cited paper does not establish the claim). Use when the user says "审查引用" ("review citations"), "check citations", "citation audit", "verify references", "引用核对" ("citation check"), or before submission to ensure bibliography integrity.
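The core of such a check can be sketched as follows. This is an illustrative sketch, not the skill's actual code: the `audit_entry` helper is hypothetical, and the hit shape (`hit["info"]` with `title`/`venue`/`year`) is an assumption about the JSON returned by DBLP's public search endpoint (`https://dblp.org/search/publ/api?q=...&format=json`), tested here against a fabricated sample rather than a live lookup.

```python
def audit_entry(entry, dblp_hits):
    """Compare one .bib entry against parsed DBLP search hits.

    Returns a list of discrepancy strings; an empty list means the
    entry checks out. A missing title is treated as a possible
    hallucinated reference.
    """
    wanted = entry["title"].rstrip(".").lower()
    match = next(
        (h for h in dblp_hits
         if h["info"]["title"].rstrip(".").lower() == wanted),
        None,
    )
    if match is None:
        return ["title not found on DBLP (possibly hallucinated reference)"]

    info = match["info"]
    problems = []
    if str(entry["year"]) != str(info.get("year", "")):
        problems.append(
            f"year mismatch: bib says {entry['year']}, DBLP says {info.get('year')}"
        )
    if entry.get("venue") and entry["venue"].lower() not in info.get("venue", "").lower():
        problems.append(
            f"venue mismatch: bib says {entry['venue']!r}, DBLP says {info.get('venue')!r}"
        )
    return problems


# Fabricated sample hit, shaped like an assumed DBLP search result:
hits = [{"info": {"title": "Attention Is All You Need.",
                  "venue": "NIPS", "year": "2017"}}]

print(audit_entry({"title": "Attention Is All You Need",
                   "year": 2018, "venue": "ICML"}, hits))
```

Run on the sample above, the bad entry is flagged for both a year mismatch and a venue mismatch, while a fully correct entry yields an empty list.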

Overall score: 83

Quality: 81% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory (suggest reviewing before use)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a specific niche (citation/reference verification for academic papers), lists concrete actions it performs, and provides explicit trigger terms in both English and Chinese. It uses proper third-person voice throughout and includes both keyword-based and contextual triggers ('before submission'). The description is detailed without being padded with fluff.

Specificity (3/3): Lists multiple specific, concrete actions: catching hallucinated authors, wrong years, fabricated venues, version mismatches, and wrong-context citations, and verifying that bibliographic entries are real and correctly attributed. Very detailed about what it does.

Completeness (3/3): Clearly answers both 'what' (zero-context verification of bibliographic entries, catching hallucinated authors, wrong years, etc.) and 'when' (explicit 'Use when...' clause with specific trigger phrases and the contextual trigger of pre-submission review).

Trigger Term Quality (3/3): Includes excellent natural trigger terms in both English and Chinese: '审查引用', 'check citations', 'citation audit', 'verify references', '引用核对', plus the contextual trigger 'before submission'. These are terms users would naturally say.

Distinctiveness / Conflict Risk (3/3): A highly distinctive niche focused specifically on bibliographic/citation verification in academic papers. The specific focus on hallucinated references, cross-model review with DBLP/arXiv lookup, and bilingual trigger terms makes it very unlikely to conflict with other skills.

Total: 12/12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability and workflow clarity, providing concrete executable steps, complete JSON schemas, and explicit validation checkpoints with error recovery paths. However, it is severely undermined by extreme verbosity — the uncited entry detection alone spans ~150 lines across multiple redundant sections, JSON schemas are repeated, and constraints are restated 3-4 times. Much of this content should be factored into referenced files rather than inlined in the SKILL.md body.

Suggestions

- Factor the uncited entry detection into a separate UNCITED.md reference file, keeping only a 2-3 line summary and link in the main SKILL.md; the current 5+ sections restating the same opt-in constraints are highly redundant.

- Move the full JSON schema and verdict decision table into a referenced SCHEMA.md or the existing shared-references/assurance-contract.md, keeping only a brief example in the body.

- Consolidate repeated constraint statements (e.g., 'uncited entries do not change verdict' appears at least 4 times in different sections) into a single authoritative location.

- Remove the 'Why opt-in' and 'When opt-in is appropriate' prose sections; Claude does not need persuasive justification for design decisions, only the behavioral rules.

Conciseness (1/3): The skill is extremely verbose at ~400+ lines. It over-explains opt-in uncited detection with multiple redundant sections (rationale, fallback, effect, when appropriate), repeats the same JSON schema fragments multiple times, and includes extensive prose that Claude does not need (e.g., explaining why opt-in is opt-in, lengthy comparison tables, known limitations that are common sense for an LLM). The uncited entry detection alone has ~5 separate sections restating the same constraints.

Actionability (3/3): The skill provides fully concrete, executable guidance: specific MCP invocation syntax with prompt templates, exact shell commands for recompilation, a complete JSON schema for output artifacts, explicit file paths, and copy-paste-ready markdown report templates. Every step has specific, actionable instructions rather than vague descriptions.

Workflow Clarity (3/3): The 7-step workflow is clearly sequenced with explicit validation checkpoints (Step 7 recompile-and-verify, Step 6 interactive approval for destructive changes). There are feedback loops (fix and re-validate), clear gating (REPLACE/REMOVE require human approval), and a well-defined verdict decision table that maps states to outcomes. The workflow handles error cases (bib unreadable, reviewer failure) with explicit fallback paths.

Progressive Disclosure (2/3): The skill references external files (shared-references/review-tracing.md, shared-references/assurance-contract.md, shared-references/reviewer-independence.md, shared-references/citation-discipline.md) and sibling skills appropriately, but the body itself is monolithic: the uncited entry detection protocol, submission artifact emission, and verdict decision table could all be in separate reference files. The inline content is far too long for a SKILL.md overview, with schema definitions repeated and opt-in behavior explained at excessive length.

Total: 9/12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure validation: 9 / 11 checks passed

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9/11 (Passed)

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)
