
design-metadata-schema

Design comprehensive metadata frameworks. Develops structured metadata templates and tagging systems.


Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/design-metadata-schema/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (metadata frameworks and tagging systems) but is too brief and lacks explicit trigger guidance for when Claude should select this skill. It would benefit from listing more specific, concrete actions and adding a 'Use when...' clause with natural user trigger terms.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about metadata schemas, tagging taxonomies, content classification systems, or organizing data with structured labels.'

Expand the list of concrete actions, e.g., 'Defines metadata schemas, creates controlled vocabularies, designs tagging taxonomies, maps content classification hierarchies, and establishes naming conventions.'

Include natural keyword variations users might say, such as 'taxonomy', 'categorization', 'content tagging', 'data labeling', or 'classification system'.
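
Taken together, the suggestions above might produce SKILL.md frontmatter along these lines. This is a hypothetical sketch, not the skill's actual content; the field names follow common skill-frontmatter conventions and the wording is illustrative:

```yaml
---
name: design-metadata-schema
description: >
  Design comprehensive metadata frameworks. Defines metadata schemas,
  creates controlled vocabularies, designs tagging taxonomies, maps
  content classification hierarchies, and establishes naming conventions.
  Use when the user asks about metadata schemas, tagging taxonomies,
  content classification, categorization, data labeling, or organizing
  data with structured labels.
---
```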

Specificity (2 / 3): Names the domain (metadata frameworks) and some actions (develops structured metadata templates, tagging systems), but lacks comprehensive detail about specific concrete actions like defining taxonomies, creating controlled vocabularies, or mapping metadata schemas.

Completeness (1 / 3): Describes what the skill does (designs metadata frameworks, develops templates and tagging systems) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also only moderately detailed, warranting a 1.

Trigger Term Quality (2 / 3): Includes some relevant keywords like 'metadata', 'templates', and 'tagging systems', but misses common user variations such as 'taxonomy', 'tags', 'categorization', 'schema', 'content classification', or 'data labeling'.

Distinctiveness / Conflict Risk (2 / 3): The focus on metadata frameworks and tagging systems provides some specificity, but 'metadata' and 'tagging' could overlap with content management, data governance, or taxonomy skills. Without clearer scoping, there is moderate conflict risk.

Total: 7 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a high-level process description than actionable guidance. It lacks any concrete examples: no sample JSON Schema output, no code snippets, no template to copy and adapt. The workflow is logically sequenced but is missing the validation/verification steps and feedback loops that would be critical for generating correct schema definitions.

Suggestions

Add a concrete, executable example: include a complete sample JSON Schema output showing Dublin Core-aligned fields with validation rules, so Claude has a copy-paste-ready template.

Add a validation/verification step in the workflow (e.g., 'Validate the generated schema against the JSON Schema meta-schema' with a specific command or code snippet).

Remove obvious explanations (e.g., listing 'Title, Date, Author' as common attributes) and replace with a concrete field mapping table or reference file.

Include at least one complete input-to-output example showing a specific content domain and the resulting schema.
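
As a sketch of what the first two suggestions might look like in practice, here is a hypothetical Dublin Core-aligned JSON Schema built and checked in Python. The field set and validation rules are illustrative assumptions, not taken from the skill itself, and the `check_record` helper is a minimal stdlib-only stand-in: a real workflow would validate against the official JSON Schema meta-schema, e.g. with the `jsonschema` package's `Draft202012Validator.check_schema`.

```python
# Hypothetical Dublin Core-aligned schema (illustrative field set).
SCHEMA = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "title": "Document metadata",
    "type": "object",
    "required": ["title", "creator", "date"],
    "properties": {
        "title":   {"type": "string", "minLength": 1},   # dc:title
        "creator": {"type": "string"},                   # dc:creator
        "date":    {"type": "string",                    # dc:date, ISO 8601
                    "pattern": r"^\d{4}-\d{2}-\d{2}$"},
        "subject": {"type": "array",                     # dc:subject tags
                    "items": {"type": "string"}},
    },
    "additionalProperties": False,
}

def check_record(record: dict, schema: dict) -> list[str]:
    """Minimal structural check of a record against the schema above.

    A stand-in for full JSON Schema validation: it covers only the
    'required', 'properties', and 'type' keywords, not patterns or
    length constraints.
    """
    errors = []
    for field in schema.get("required", []):
        if field not in record:
            errors.append(f"missing required field: {field}")
    type_map = {"string": str, "array": list, "object": dict}
    for field, value in record.items():
        spec = schema["properties"].get(field)
        if spec is None:
            errors.append(f"unexpected field: {field}")
        elif not isinstance(value, type_map[spec["type"]]):
            errors.append(f"wrong type for {field}")
    return errors

record = {"title": "Q3 report", "creator": "A. Author",
          "date": "2024-09-30", "subject": ["finance", "quarterly"]}
print(check_record(record, SCHEMA))          # valid record: empty list
print(check_record({"title": "x"}, SCHEMA))  # flags missing creator and date
```

Embedding one such input-to-output pair (a content domain plus the resulting schema and a validation step) would give Claude a copy-paste-ready template rather than an abstract process description.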

Conciseness (2 / 3): The skill is reasonably structured but includes some unnecessary explanation (e.g., listing obvious attributes like 'Title, Date, Author' that Claude already knows). The 'Quick Reference' section adds little value. Could be tightened.

Actionability (1 / 3): The skill is entirely abstract and descriptive: no concrete code, no executable examples, no actual schema template or snippet. It describes what to do ('Define fields', 'Generate the schema') without showing how. There's no copy-paste-ready output, no example JSON Schema, no concrete commands.

Workflow Clarity (2 / 3): Steps are listed in a logical sequence, but there are no validation checkpoints or feedback loops. There's no step to verify the generated schema is valid, no error recovery guidance, and no explicit verification that the output conforms to the chosen standard.

Progressive Disclosure (2 / 3): The content is organized into clear sections, but everything is inline, with no references to external files for detailed topics like Dublin Core mapping tables, example schemas, or validation rule libraries. The existing content is relatively short, so this is partially acceptable, but the lack of any example outputs or linked references limits discoverability.

Total: 7 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: dandye/ai-runbooks (Reviewed)

