Use figure reference checker for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
Quality — 28% (Does it follow best practices?)
Impact — Pending (No eval scenarios have been run)
Passed — No known issues
Optimize this skill with Tessl:
npx tessl skill review --optimize "./scientific-skills/Academic Writing/figure-reference-checker/SKILL.md"

Quality
Discovery — 22%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails to explain what the skill actually does: it names the skill but doesn't describe its concrete capabilities. The 'Use when' clause exists but provides abstract criteria ('structured execution, explicit assumptions') rather than practical triggers users would naturally express. The description reads more like a meta-description of how the skill operates than a statement of what it accomplishes.
Suggestions
Replace abstract language with concrete actions: specify what the checker does (e.g., 'Validates figure references in academic documents, identifies missing or duplicate figure numbers, checks that all figures are referenced in text')
Rewrite the 'Use when' clause with natural trigger terms: 'Use when checking figure references, validating figure numbers, reviewing LaTeX/Word documents for missing figures, or preparing manuscripts for submission'
Add file type context if applicable (e.g., '.tex files', 'Word documents', 'markdown') to improve trigger term coverage
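Taken together, these suggestions might yield frontmatter along the following lines. This is a hypothetical rewrite, not the skill's actual metadata; the supported file types (.tex, .docx, markdown) are assumptions the maintainer would need to confirm.

```yaml
name: figure-reference-checker
description: >
  Validates figure references in academic manuscripts (.tex, .docx, markdown):
  finds figures that are never referenced in the text, references to missing
  figures, and duplicate figure numbers. Use when checking figure references,
  validating figure numbering, reviewing documents for missing figures, or
  preparing a manuscript for submission.
```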
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'structured execution, explicit assumptions, and clear output boundaries' without describing any concrete actions. It doesn't explain what 'figure reference checker' actually does (e.g., validate references, find missing figures, check numbering). | 1 / 3 |
| Completeness | The 'what' is extremely weak — it doesn't explain what the skill actually does beyond the name. The 'when' clause exists but is vague ('academic writing workflows that need structured execution') rather than providing explicit, actionable triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'figure reference checker' and 'academic writing' that users might mention, but lacks common variations users would naturally say, such as 'figure numbers', 'citation check', 'missing figures', 'LaTeX figures', or file extensions. | 2 / 3 |
| Distinctiveness / Conflict Risk | The term 'figure reference checker' provides some specificity, but 'academic writing workflows' is broad and could overlap with other academic writing tools. The abstract qualifiers don't help distinguish it from other skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily over-engineered with excessive boilerplate that obscures the core functionality. The actual task (checking figure references in manuscripts) is buried under layers of meta-documentation about security, lifecycle, evaluation criteria, and response templates. The skill would benefit from dramatic simplification to focus on what Claude actually needs: how to run the script and interpret results.
Suggestions
Remove boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that don't help Claude execute the task - these add ~100 lines without actionable value
Show the actual figure reference checking logic or expected output format so Claude understands what the script produces and how to interpret results
Consolidate the redundant workflow descriptions (Example Usage, Implementation Details, Workflow sections all describe similar steps) into a single clear sequence
Fix the broken self-references ('See ## Features above' when Features is actually below) and reorganize sections in logical order
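The second suggestion — showing the actual checking logic and output format — could be addressed with a short sketch like the one below. This is a hypothetical illustration, not the contents of the skill's `scripts/main.py`; it assumes figures are captioned as `Figure N:` on their own line and cited inline as `Figure N` without a colon.

```python
import re

def check_figure_references(text: str) -> dict:
    # Figures defined by caption lines such as "Figure 1: Accuracy over epochs"
    defined = set(re.findall(r"^Figure (\d+):", text, flags=re.MULTILINE))
    # In-text mentions such as "see Figure 3" (the lookahead excludes captions)
    mentioned = set(re.findall(r"Figure (\d+)(?!:)", text))
    return {
        "missing": sorted(mentioned - defined, key=int),       # cited but never defined
        "unreferenced": sorted(defined - mentioned, key=int),  # defined but never cited
    }

manuscript = (
    "As shown in Figure 1, accuracy improves.\n"
    "Figure 1: Accuracy over epochs\n"
    "Figure 2: Ablation results\n"
    "See Figure 3 for details.\n"
)
print(check_figure_references(manuscript))
# → {'missing': ['3'], 'unreferenced': ['2']}
```

Documenting an expected output like this dictionary would let the agent verify the script ran correctly instead of merely invoking it blind.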
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with excessive boilerplate, redundant sections (e.g., 'See ## Features above' references that point to content below), and unnecessary meta-documentation like lifecycle status, security checklists, and evaluation criteria that don't help Claude execute the task. | 1 / 3 |
| Actionability | Provides some concrete commands (`python scripts/main.py --manuscript paper.docx`) and a parameters table, but the actual figure reference checking logic is never shown — we only see how to invoke a script, without understanding what it does or how to verify results. | 2 / 3 |
| Workflow Clarity | Contains a numbered workflow section with steps, but the validation checkpoints are vague ('Validate that the request matches the documented scope') rather than concrete. Missing explicit validation of the figure reference checking output itself. | 2 / 3 |
| Progressive Disclosure | References external files (references/audit-reference.md, scripts/main.py) appropriately, but the main document is bloated with sections that should be removed or consolidated. Self-referential 'See ## X above' notes are confusing and poorly organized. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
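A `frontmatter_unknown_keys` warning is typically resolved by deleting the nonstandard key or nesting it under `metadata`. The sketch below is illustrative only — the report does not name the offending key, so `lifecycle_status` is a made-up placeholder.

```yaml
# Before: a nonstandard top-level key triggers frontmatter_unknown_keys
# lifecycle_status: beta

# After: custom keys are nested under metadata
metadata:
  lifecycle_status: beta
```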