Systematic cataloging of information assets. Creates comprehensive inventories of all content with metadata and characteristics.
Install with Tessl CLI
npx tessl i github:dandye/ai-runbooks --skill inventory-content39
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery: 9%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too abstract and uses corporate jargon rather than natural language users would employ. It lacks explicit trigger guidance and concrete actions, making it difficult for Claude to know when to select this skill over others. The vague terminology ('information assets', 'content') creates high conflict risk with other skills.
Suggestions
Add a 'Use when...' clause with specific trigger scenarios, e.g., 'Use when the user asks to inventory files, create a content catalog, or list assets in a directory'
Replace abstract terms with natural keywords users would say: 'files', 'documents', 'folders', 'organize', 'list', 'index' instead of 'information assets' and 'systematic cataloging'
List specific concrete actions the skill performs, e.g., 'Scans directories, extracts file metadata (size, type, date), generates inventory spreadsheets, tags content by category'
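Combining the suggestions above, an improved frontmatter description might look like the following sketch (the skill name is taken from the install command; the exact frontmatter schema and wording are assumptions, not the skill's actual content):

```yaml
name: inventory-content
description: >
  Scans a directory tree and builds a content inventory: file paths, types,
  sizes, and modification dates, exported as a spreadsheet or JSON index.
  Use when the user asks to inventory files, create a content catalog,
  list or index documents in a folder, or organize assets by metadata.
```

Note how this version replaces abstract jargon with natural keywords ('files', 'catalog', 'index') and adds an explicit 'Use when...' clause with concrete triggers.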
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (cataloging information assets) and mentions some actions (creates inventories, metadata, characteristics), but lacks concrete specific actions like 'tag files', 'generate reports', or 'track versions'. | 2 / 3 |
| Completeness | Describes what it does (cataloging/inventories) but completely lacks any 'Use when...' clause or explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Uses abstract jargon like 'information assets', 'systematic cataloging', and 'comprehensive inventories', terms users are unlikely to naturally say. Missing natural keywords like 'organize files', 'list documents', 'catalog', 'index'. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Information assets' and 'content' are extremely generic terms that could overlap with document management, file organization, data processing, or numerous other skills. | 1 / 3 |
| Total | | 5 / 12 Passed |
Implementation: 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level framework for content inventory but lacks the concrete, executable guidance Claude needs to actually implement it. The workflow structure is clear but missing validation steps and error handling. The biggest weakness is the complete absence of code examples or specific commands for file scanning, metadata extraction, or output generation.
Suggestions
Add executable code examples for each step (e.g., Python using os.walk for discovery, frontmatter parsing for metadata extraction)
Include a concrete example of the output format (sample JSON/CSV structure with actual field names and values)
Add validation checkpoints such as 'verify read permissions before scanning' and 'handle unreadable files gracefully'
Provide specific libraries or tools to use (e.g., python-frontmatter, pathlib) rather than abstract descriptions
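As a sketch of what such executable guidance could look like, the snippet below walks a directory with the standard library, skips unreadable files rather than aborting, and writes a CSV inventory. The field names and output schema are illustrative assumptions; the skill does not specify them:

```python
import csv
import os
from pathlib import Path


def inventory(path, out_csv):
    """Walk `path` and write a CSV inventory of every readable file.

    Column names ("path", "extension", "size_bytes", "modified") are
    assumed here for illustration, not defined by the skill itself.
    """
    rows = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            p = Path(root) / name
            try:
                stat = p.stat()
            except OSError:
                # Validation checkpoint: skip unreadable files gracefully.
                continue
            rows.append({
                "path": str(p),
                "extension": p.suffix or "(none)",
                "size_bytes": stat.st_size,
                "modified": int(stat.st_mtime),
            })
    with open(out_csv, "w", newline="") as fh:
        writer = csv.DictWriter(
            fh, fieldnames=["path", "extension", "size_bytes", "modified"]
        )
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

Embedding a concrete example like this in the skill would address both the actionability gap (no executable code) and the missing error-handling guidance in one place.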
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary elaboration, such as the 'Quick Reference' section that restates the purpose, and some verbose descriptions that could be tightened (e.g., 'Create a systematic catalog of information assets within a specified path'). | 2 / 3 |
| Actionability | The skill describes what to do at a high level but provides no concrete code, commands, or executable examples. Steps like 'Recursively scan the PATH' and 'extract metadata' are abstract descriptions without implementation guidance. | 1 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced, but there are no validation checkpoints or error-handling guidance. No feedback loops for handling unreadable files, permission errors, or malformed metadata. | 2 / 3 |
| Progressive Disclosure | Content is organized into logical sections with clear headers, but everything is inline in one file. For a skill of this complexity, detailed format-specific extraction logic or example outputs could be split into referenced files. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
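The warning above can typically be resolved by nesting the offending key under a metadata block. The key name below is hypothetical, since the validator output does not name it:

```yaml
# Before: an unknown top-level key (hypothetical example) triggers the warning
# maintainer: example-user

# After: nest it under metadata, which accepts arbitrary keys
metadata:
  maintainer: example-user
```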
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.