
figure-reference-checker

Use figure reference checker for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.


Quality: 17% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/figure-reference-checker/SKILL.md"

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description fails to communicate what the skill concretely does, relying on an opaque name and abstract process language instead of listing specific actions. It lacks natural trigger terms users would employ and does not clearly answer either 'what does this do' or 'when should Claude use it' in a meaningful way. The only slight positive is that 'figure reference checker' hints at a specific niche.

Suggestions

Replace abstract language with concrete actions, e.g., 'Checks that all figures referenced in text (e.g., Figure 1, Fig. 2) have corresponding figure captions, and flags missing or duplicate figure references.'

Add a 'Use when...' clause with natural trigger terms like 'figure references, cross-references, figure numbering, manuscript review, LaTeX figures, academic paper.'

Remove process-oriented fluff ('structured execution, explicit assumptions, clear output boundaries') that describes how the skill works internally rather than what it does for the user.
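Putting these three suggestions together, a revised SKILL.md description might look like the following frontmatter sketch. The wording is illustrative, assembled from the examples above; it is not the skill's actual metadata.

```yaml
---
name: figure-reference-checker
description: >
  Checks that every figure referenced in the text (e.g., "Figure 1",
  "Fig. 2") has a corresponding caption, and flags missing, orphaned,
  or duplicate figure references. Use when reviewing manuscripts,
  LaTeX papers, or academic drafts for figure numbering and
  cross-reference consistency.
---
```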

Dimension / Reasoning / Score

Specificity (1 / 3)
The description does not list any concrete actions. 'Figure reference checker' names a tool, but 'structured execution, explicit assumptions, and clear output boundaries' are abstract process descriptors, not specific capabilities like 'validates figure numbering' or 'detects missing cross-references'.

Completeness (1 / 3)
The 'what' is extremely vague: it never explains what the skill actually does beyond the name 'figure reference checker.' The 'when' clause ('academic writing workflows that need structured execution...') is present but so abstract it provides no actionable trigger guidance.

Trigger Term Quality (1 / 3)
The only natural keywords are 'figure reference checker' and 'academic writing'; terms like 'structured execution,' 'explicit assumptions,' and 'clear output boundaries' are not phrases users would naturally say. Missing natural terms like 'figure numbers,' 'cross-references,' 'LaTeX,' and 'manuscript.'

Distinctiveness / Conflict Risk (2 / 3)
'Figure reference checker' is a somewhat specific niche that distinguishes it from general writing or coding skills, but the vague qualifiers about 'structured execution' and 'explicit assumptions' could cause confusion about when to select this over other academic writing tools.

Total: 5 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate (risk assessment, security checklists, lifecycle status, evaluation criteria) that provides no task-specific value and wastes token budget. The core functionality—checking figure reference consistency—is barely explained with no example inputs, outputs, or concrete detection logic. Circular internal references ('See ## X above') create confusion rather than clarity.

Suggestions

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) and circular self-references to reduce the skill to under 50 lines focused on the actual task.

Add a concrete example showing sample input (a manuscript snippet with figure references) and expected output (detected orphaned references, inconsistencies) so Claude knows exactly what to produce.

Consolidate the redundant workflow descriptions (Example Usage run plan, Implementation Details, Workflow section) into a single clear numbered sequence with explicit validation checkpoints.

Remove the 'See ## X above for related details' cross-references that point to sections appearing later in the document—either reorder sections logically or eliminate the references entirely.
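To make the second suggestion concrete, the detection logic the review finds missing could be sketched roughly as follows. The function name, regex patterns, and sample manuscript are illustrative assumptions for a plain-text input; they are not the skill's actual implementation.

```python
import re

def check_figure_references(text: str) -> dict:
    """Cross-check in-text figure references against figure captions."""
    # Caption lines such as "Figure 1: ..." or "Figure 2. ..."
    caption_re = re.compile(r"(?m)^Figure\s+(\d+)[.:]")
    captions = {int(n) for n in caption_re.findall(text)}
    # Drop caption lines so their numbers do not count as in-text references.
    body = re.sub(r"(?m)^Figure\s+\d+[.:].*$", "", text)
    # In-text references such as "Figure 1", "Fig. 3", "Figs. 2".
    references = {int(n) for n in re.findall(r"\bFig(?:ure|s)?\.?\s*(\d+)", body)}
    return {
        "orphaned_references": sorted(references - captions),    # cited, no caption
        "unreferenced_captions": sorted(captions - references),  # caption never cited
    }

sample = """As shown in Figure 1, accuracy improves; see Fig. 3 for the ablation.

Figure 1: Accuracy over training epochs.
Figure 2: Ablation results.
"""
print(check_figure_references(sample))
# → {'orphaned_references': [3], 'unreferenced_captions': [2]}
```

A sample input/output pair like this, embedded in the skill itself, would give Claude an unambiguous target for what "detected orphaned references" should look like.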

Dimension / Reasoning / Score

Conciseness (1 / 3)
Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Features above', 'See ## Prerequisites above', 'See ## Usage above'). Contains extensive boilerplate (security checklists, lifecycle status, evaluation criteria, risk assessment) that adds no actionable value. The core task, checking figure references in manuscripts, could be conveyed in under 30 lines.

Actionability (2 / 3)
Provides some concrete commands (python scripts/main.py --manuscript paper.docx, the py_compile check) and a parameter table, but the actual logic of what the script does, and how to interpret its output, is never shown. There is no example output, no sample input/output pair, and the workflow steps are generic process descriptions rather than executable guidance.

Workflow Clarity (2 / 3)
A numbered workflow exists with steps for validation, execution, and fallback, and error handling is documented. However, there are no explicit validation checkpoints with concrete commands between steps (e.g., how to verify the output is correct), and the workflow steps are abstract ('Validate that the request matches the documented scope') rather than specific to figure reference checking.

Progressive Disclosure (1 / 3)
The document contains circular self-references ('See ## Features above for related details' placed before the Features section actually appears). The content is a monolithic wall of boilerplate sections. The single external reference (references/audit-reference.md) is fine, but the internal organization is poor, with redundant sections (Quick Check vs. Audit-Ready Commands and multiple overlapping workflow descriptions).

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys
Unknown frontmatter key(s) found; consider removing or moving to metadata
Warning

Total: 10 / 11 (Passed)
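The frontmatter_unknown_keys warning is typically cleared by nesting unrecognized top-level keys under a metadata mapping, as the warning text suggests. A hedged sketch, where the specific keys (version, risk-level) are hypothetical placeholders rather than the skill's real frontmatter:

```yaml
---
name: figure-reference-checker
description: Checks figure references against captions in academic manuscripts.
# Hypothetical extra keys: anything the skill spec does not recognize at the
# top level moves under "metadata" instead of sitting beside name/description.
metadata:
  version: 1.0.0
  risk-level: low
---
```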

Repository: aipoch/medical-research-skills (Reviewed)
