Systematic cataloging of information assets. Creates comprehensive inventories of all content with metadata and characteristics.
Overall score: 33 (17%)

Impact: Pending. No eval scenarios have been run.
Validation: Passed. No known issues.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/inventory-content/SKILL.md

Quality

Does it follow best practices?
Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and abstract to be useful for skill selection. It lacks concrete actions, natural trigger terms, explicit 'when to use' guidance, and any specificity that would distinguish it from other organizational or data management skills. The buzzword-heavy language ('information assets', 'comprehensive inventories') obscures rather than clarifies the skill's purpose.
Suggestions
Specify what types of content or assets are being cataloged (e.g., files, databases, API endpoints, media) and what concrete outputs are produced (e.g., 'Creates spreadsheet inventories listing file names, sizes, types, and locations').
Add an explicit 'Use when...' clause with natural trigger terms users would actually say, such as 'Use when the user asks to list all files, create a content inventory, audit existing assets, or catalog resources in a project.'
Replace abstract jargon like 'information assets' and 'metadata and characteristics' with specific, concrete terms that clearly distinguish this skill from general data management or documentation skills.
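As a hedged illustration of these suggestions, a sharpened description might look like the frontmatter sketch below. The concrete file types, trigger phrases, and wording are invented here, not taken from the skill itself:

```yaml
# Illustrative rewrite only; field values are assumptions, not the skill's actual content.
name: inventory-content
description: >
  Scans a project directory and produces a JSON or CSV inventory of files,
  listing names, sizes, types, and locations. Use when the user asks to list
  all files, create a content inventory, audit existing assets, or catalog
  the resources in a project.
```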
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'information assets', 'comprehensive inventories', and 'metadata and characteristics' without specifying concrete actions or what types of content/assets are involved. | 1 / 3 |
| Completeness | The description vaguely addresses 'what' (cataloging information assets, creating inventories) but provides no 'when' clause or explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | The terms used ('information assets', 'cataloging', 'inventories', 'metadata') are abstract and jargon-heavy. Users are unlikely to naturally say 'catalog my information assets'; they would more likely say 'list my files', 'create an index', or 'inventory my documents'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic: 'information assets' and 'content with metadata' could apply to virtually any data management, documentation, or organizational skill, creating high conflict risk with many other skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a product specification than an actionable instruction set for Claude. While the workflow is logically structured and the inputs/outputs are defined, the complete absence of executable code, concrete examples, or specific tool/library recommendations makes it very difficult for Claude to act on. The skill needs concrete implementation details—actual code for directory scanning, metadata extraction, and report generation.
Suggestions
Add executable code examples for each workflow step (e.g., Python using os.walk for discovery, frontmatter parsing for metadata extraction, and json.dumps/csv.writer for report generation)
Include a concrete example of expected output for at least one format (e.g., a sample JSON inventory report with 2-3 entries)
Add validation checkpoints such as verifying read permissions before scanning, handling unreadable files gracefully, and validating the output report structure before writing
Remove the Quick Reference section and introductory framing text—redirect those tokens toward actual implementation guidance
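A minimal sketch of what such executable guidance could look like, combining the suggested `os.walk` discovery, graceful handling of unreadable files, and a JSON report. Function and field names here (`inventory_content`, `size_bytes`, `extension`) are invented for illustration, not part of the skill's actual specification:

```python
import json
import os


def inventory_content(root):
    """Walk `root`, collect lightweight per-file metadata, return a JSON-ready report."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                stat = os.stat(path)  # may raise on broken symlinks or permission errors
            except OSError:
                continue  # skip unreadable files instead of aborting the whole scan
            entries.append({
                "path": os.path.relpath(path, root),
                "size_bytes": stat.st_size,
                "extension": os.path.splitext(name)[1].lstrip("."),
            })
    return {"root": root, "count": len(entries), "items": entries}


if __name__ == "__main__":
    # Emit the inventory as the suggested sample JSON output format.
    print(json.dumps(inventory_content("."), indent=2))
```

A real implementation would extend the per-file dictionary with format-specific metadata (e.g. parsed frontmatter for Markdown files) and validate the report structure before writing it out.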
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably structured but includes some unnecessary framing language ('to support content governance and strategic planning') and explanatory text that doesn't add actionable value. The inputs section with defaults is useful but the Quick Reference section at the end is redundant. | 2 / 3 |
| Actionability | The skill describes what to do at a high level but provides zero concrete code, commands, or executable examples. There are no scripts, no API calls, no code snippets showing how to actually scan directories, extract metadata, or generate reports. It reads like a requirements document rather than an instruction set. | 1 / 3 |
| Workflow Clarity | Steps are clearly sequenced (discovery → extraction → analysis → report), but there are no validation checkpoints, no error handling guidance, and no feedback loops for when files can't be read or metadata extraction fails. For a batch operation scanning potentially many files, missing validation caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is organized into logical sections with clear headers, but everything is inline in a single file with no references to external resources for detailed guidance on specific formats, metadata extraction techniques, or example outputs. The content could benefit from splitting detailed format-specific extraction guidance into separate references. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
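One way to resolve the `frontmatter_unknown_keys` warning is to nest the unrecognized key under a `metadata` block. The report does not name the offending key, so `owner` below is purely hypothetical:

```yaml
# Before: an unknown top-level key triggers the warning.
# owner: platform-team

# After: unrecognized keys live under `metadata` (the key name `owner` is an assumption).
metadata:
  owner: platform-team
```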