Identify content gaps and organizational opportunities. Analyzes missing content areas, redundancies, and consolidation opportunities.
Install with Tessl CLI
npx tessl i github:dandye/ai-runbooks --skill analyze-content-gaps

Overall — 46%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description provides a basic understanding of the skill's purpose but lacks the explicit trigger guidance essential for Claude to select it appropriately from a large skill set. The capabilities mentioned are somewhat vague ('organizational opportunities') and the absence of natural user keywords and a 'Use when...' clause significantly weakens its effectiveness for skill selection.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios like 'Use when the user asks about missing documentation, content audits, finding duplicates, or reorganizing content structure'.
Include more natural user keywords such as 'duplicate content', 'what's missing', 'content audit', 'reorganize', 'overlap', or 'documentation gaps'.
Make capabilities more concrete by specifying outputs, e.g., 'Generates gap analysis reports, identifies duplicate sections, recommends content consolidation plans'.
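A SKILL.md description that folds in all three suggestions might look like the sketch below (the wording is illustrative, not the skill's actual frontmatter):

```yaml
name: analyze-content-gaps
description: >
  Identify content gaps, duplicate sections, and consolidation opportunities
  in a documentation set. Generates gap analysis reports, flags overlapping
  pages, and recommends consolidation plans. Use when the user asks about
  missing documentation, content audits, duplicate content, overlap between
  pages, "what's missing", or reorganizing content structure.
```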
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (content analysis) and some actions ('Analyzes missing content areas, redundancies, and consolidation opportunities'), but lacks concrete specific actions like 'generates gap reports' or 'creates consolidation plans'. | 2 / 3 |
| Completeness | Describes what it does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'what' is also weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'content gaps', 'redundancies', 'consolidation', but misses common user phrases like 'duplicate content', 'what's missing', 'overlap', 'reorganize', or 'audit content'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to content analysis but could overlap with documentation skills, content management skills, or general organizational tools. 'Content gaps' and 'organizational opportunities' are moderately distinct but not clearly niche. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level framework for content gap analysis but lacks the concrete, actionable guidance needed for effective execution. The workflow is conceptually sound but reads more like a process description than executable instructions. The absence of specific tools, example outputs, or validation steps significantly limits its practical utility.
Suggestions
Add a concrete example of the GAP_ANALYSIS_REPORT output format with sample content showing what each section should look like
Provide specific methods or commands for analyzing search logs and support tickets (e.g., grep patterns, data extraction approaches)
Include validation checkpoints such as 'Verify at least N topics identified before proceeding' or criteria for determining analysis completeness
Replace abstract instructions like 'Compare against competitor documentation' with specific actionable steps (e.g., 'List 3-5 competitor doc sites, extract their table of contents, create comparison matrix')
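As a concrete starting point for the search-log suggestion, the sketch below counts the most frequent zero-result queries, which are strong candidates for missing content. The tab-separated log schema is an assumption for illustration; the skill does not specify one:

```python
from collections import Counter

# Hypothetical search-log format: "<timestamp>\t<query>\t<result_count>"
# (assumed schema -- adapt the parsing to your actual log format).
sample_log = """\
2024-01-03T10:00:00\thow to rotate api keys\t0
2024-01-03T10:05:00\tinstall guide\t12
2024-01-03T10:07:00\thow to rotate api keys\t0
2024-01-03T10:09:00\tsso setup\t0
"""

def zero_result_queries(log_text, top_n=10):
    """Return the most frequent queries that returned no results."""
    counts = Counter()
    for line in log_text.splitlines():
        _, query, results = line.split("\t")
        if int(results) == 0:
            counts[query] += 1
    return counts.most_common(top_n)

print(zero_result_queries(sample_log))
# [('how to rotate api keys', 2), ('sso setup', 1)]
```

The same counting approach applies to support-ticket subjects: repeated zero-result queries or recurring ticket topics both point at gaps the existing content does not cover.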
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary structure like the 'Quick Reference' section that repeats information. The workflow steps could be more condensed without losing clarity. | 2 / 3 |
| Actionability | The skill provides abstract descriptions rather than concrete, executable guidance. There are no code examples, specific commands, or copy-paste ready templates for the gap analysis report. Phrases like 'Analyze search logs' and 'Compare against competitor documentation' lack specific methods or tools. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence, but there are no validation checkpoints or feedback loops. No guidance on how to verify the analysis is complete or accurate before proceeding to recommendations. | 2 / 3 |
| Progressive Disclosure | Content is organized into clear sections, but everything is inline in a single file. For a skill of this complexity, the detailed output format or example reports could be referenced in separate files. The structure is adequate but not optimal. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
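To clear the one warning, unrecognized top-level frontmatter keys can be removed or nested under a metadata block. A sketch of the pattern, using an invented `author` key for illustration:

```yaml
# Before: `author` is not a recognized top-level key (hypothetical example)
name: analyze-content-gaps
author: dandye

# After: the unrecognized key is moved under metadata
name: analyze-content-gaps
metadata:
  author: dandye
```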
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.