
generate-thesaurus

Generate controlled vocabulary thesaurus for content domains. Creates comprehensive thesauri with preferred terms, broader/narrower/related terms.


Quality: 37% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/generate-thesaurus/SKILL.md

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche—controlled vocabulary thesaurus generation—with some domain-specific terminology. However, it lacks an explicit 'Use when...' clause, which is critical for skill selection, and could benefit from more natural trigger terms and additional concrete actions beyond basic generation.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user needs a controlled vocabulary, taxonomy, or thesaurus for organizing content, metadata, or information architecture.'

Include additional natural trigger terms users might say, such as 'taxonomy', 'SKOS', 'synonym ring', 'term hierarchy', 'metadata vocabulary', or 'information architecture'.

List more specific concrete actions, e.g., 'defines term hierarchies, maps synonyms and related concepts, exports in standard formats like SKOS or CSV'.
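Putting the first two suggestions together, a revised SKILL.md frontmatter description might look like the sketch below. The wording is illustrative, not taken from the skill itself:

```yaml
---
name: generate-thesaurus
description: >
  Generate a controlled vocabulary thesaurus for a content domain, defining
  preferred terms, broader/narrower/related term hierarchies, and synonym
  mappings. Use when the user needs a controlled vocabulary, taxonomy, or
  thesaurus for organizing content, metadata, or information architecture,
  or mentions SKOS, term hierarchies, or synonym rings.
---
```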

Dimension | Reasoning | Score

Specificity

Names the domain (controlled vocabulary thesaurus) and some actions (creates thesauri with preferred terms, broader/narrower/related terms), but doesn't list multiple distinct concrete actions beyond generation—e.g., no mention of exporting formats, validation, or integration.

2 / 3

Completeness

Describes what the skill does but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2; the 'what' itself is also thin enough to warrant a score of 1.

1 / 3

Trigger Term Quality

Includes relevant terms like 'controlled vocabulary', 'thesaurus', 'preferred terms', 'broader/narrower/related terms', but misses common user variations such as 'taxonomy', 'SKOS', 'term hierarchy', 'synonym ring', or 'metadata vocabulary'.

2 / 3

Distinctiveness / Conflict Risk

Controlled vocabulary thesaurus creation is a very specific niche; terms like 'thesaurus', 'broader/narrower/related terms', and 'preferred terms' are highly distinctive and unlikely to conflict with other skills.

3 / 3

Total: 8 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a reasonable conceptual framework for thesaurus generation but lacks actionable, executable guidance—there are no scripts, commands, or concrete implementation details that Claude could follow. The workflow is logically sequenced but missing validation steps and feedback loops. The content explains concepts Claude likely already understands while omitting the specific technical implementation it would actually need.

Suggestions

Add concrete, executable code or specific tool commands for each workflow step (e.g., a Python script to scan files, extract terms via NLP/regex, and output structured thesaurus data).

Include validation checkpoints such as verifying extracted term counts, reviewing ambiguous terms, and validating output format correctness before finalizing.

Remove explanations of basic thesaurus concepts (BT/NT/RT definitions) that Claude already knows, and replace with implementation-specific details like file parsing strategies or term frequency thresholds.

Add a concrete end-to-end example showing input directory structure, intermediate extracted terms, and final output in at least one format.
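The first suggestion can be sketched as a small Python script: scan a documentation directory, pull out candidate terms with a regex, and apply a frequency threshold. The function name, regex, and threshold here are illustrative assumptions, a crude stand-in for real NLP rather than the skill's actual implementation:

```python
import re
from collections import Counter
from pathlib import Path

def extract_candidate_terms(doc_dir: str, min_count: int = 3) -> dict[str, int]:
    """Scan Markdown files under doc_dir and count capitalized multi-word
    phrases as candidate thesaurus terms."""
    pattern = re.compile(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b")
    counts: Counter = Counter()
    for path in Path(doc_dir).rglob("*.md"):
        counts.update(pattern.findall(path.read_text(encoding="utf-8")))
    # Frequency threshold: one-off phrases are unlikely to be domain terms.
    return {term: n for term, n in counts.items() if n >= min_count}
```

The structured dict this returns is the kind of intermediate artifact the review asks the skill to show before the final thesaurus output.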

Dimension | Reasoning | Score

Conciseness

The skill includes some unnecessary explanation (e.g., explaining what preferred/broader/narrower/related terms are with examples is somewhat helpful but Claude already understands thesaurus construction concepts). The 'Quick Reference' section adds minimal value. Could be tightened.

2 / 3

Actionability

The skill describes a process abstractly but provides no executable code, commands, or concrete implementation. 'Analyze the content' and 'scanning documentation files' are vague directions with no specific tools, scripts, or algorithms. The example output shows format but not how to produce it.

1 / 3

Workflow Clarity

Steps are listed in a logical sequence (analyze → extract → generate), but there are no validation checkpoints, no feedback loops for verifying term accuracy or completeness, and no error handling. For a multi-step content analysis task, the absence of verification steps is a notable gap.

2 / 3

Progressive Disclosure

The content is reasonably structured with clear sections, but everything is inline in a single file. There are no references to external files for detailed guidance (e.g., ISO 25964 standards details, format-specific templates, or advanced configuration). The content length is moderate but could benefit from splitting format-specific details.

2 / 3

Total: 7 / 12 (Passed)
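The missing validation checkpoint flagged under Workflow Clarity could take a shape like the sketch below: before finalizing, confirm every entry is well-formed and every broader/narrower/related reference resolves to a known preferred term. The function and data layout are hypothetical, not part of the skill as reviewed:

```python
def validate_thesaurus(entries: dict[str, dict[str, list[str]]]) -> list[str]:
    """Checkpoint before finalizing: report empty terms and BT/NT/RT
    references that do not resolve to a term in the vocabulary."""
    problems = []
    for term, rels in entries.items():
        if not term.strip():
            problems.append("empty preferred term")
        for rel_type in ("BT", "NT", "RT"):
            for target in rels.get(rel_type, []):
                if target not in entries:
                    problems.append(
                        f"{term}: {rel_type} target '{target}' not in vocabulary"
                    )
    return problems
```

An empty result list means the thesaurus is internally consistent; anything else is surfaced for review rather than silently written out.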

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Repository: dandye/ai-runbooks (Reviewed)

