Expert communication craftsperson for morphir-dotnet. Master of Hugo/Docsy, Mermaid/PlantUML diagrams, and technical writing. Use when user asks to create documentation, update docs, write tutorials, create diagrams, fix Hugo issues, customize Docsy, validate examples, check links, enforce style guide, or solve communication challenges. Triggers include "document", "docs", "README", "tutorial", "example", "API docs", "style guide", "link check", "hugo", "docsy", "diagram", "mermaid", "plantuml", "visual", "navigation".
Score: 73%
Does it follow best practices?

Impact: — (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.claude/skills/technical-writer/SKILL.md`

Quality
Discovery: 92%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with strong trigger terms and clear 'what/when' guidance. The main weakness is the use of fluff language ('Expert communication craftsperson', 'Master of') which adds no selection value, and some generic trigger terms that could conflict with other documentation skills. The project-specific scoping to 'morphir-dotnet' helps with distinctiveness but could be emphasized more strongly.
Suggestions
Remove fluff phrases like 'Expert communication craftsperson' and 'Master of' — these add no discriminative value and waste space. Replace with more concrete capability statements.
Strengthen distinctiveness by leading with the project scope, e.g., 'Creates and maintains documentation for the morphir-dotnet project using Hugo/Docsy...' to make the project-specific nature more prominent for skill selection.
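Applied together, the two suggestions above might produce a description like the following sketch. The YAML frontmatter shape is assumed from common Claude skill conventions, and the wording is illustrative rather than the reviewed skill's actual content:

```yaml
# Hypothetical revised frontmatter; field names and wording are illustrative.
name: technical-writer
description: >-
  Creates and maintains documentation for the morphir-dotnet project using
  Hugo/Docsy, Mermaid/PlantUML diagrams, and technical writing conventions.
  Use when the user asks to create or update docs, write tutorials, create
  diagrams, fix Hugo issues, customize Docsy, validate examples, check
  links, or enforce the style guide. Triggers include "document", "docs",
  "README", "tutorial", "API docs", "hugo", "docsy", "diagram", "mermaid",
  "plantuml", "link check", "style guide".
```

Leading with "Creates and maintains documentation for the morphir-dotnet project" puts the project scope first for skill selection, while dropping the fluff phrases frees space for concrete capabilities.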
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: create documentation, update docs, write tutorials, create diagrams, fix Hugo issues, customize Docsy, validate examples, check links, enforce style guide. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (documentation creation, Hugo/Docsy work, diagrams, technical writing for morphir-dotnet) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios, plus a separate 'Triggers include' list. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say, explicitly listing keywords like 'document', 'docs', 'README', 'tutorial', 'API docs', 'hugo', 'docsy', 'diagram', 'mermaid', 'plantuml', 'link check', 'style guide'. These are terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it specifies 'morphir-dotnet' as the project scope and mentions Hugo/Docsy specifically, terms like 'document', 'docs', 'tutorial', 'README' are very generic and could easily conflict with other documentation-related skills. The project-specific scoping helps, but the broad documentation triggers create overlap risk. | 2 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 55%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable with excellent workflow clarity—decision trees, playbooks, and concrete commands are well-structured. However, it is severely bloated, inlining hundreds of lines of reference material, patterns, FAQ, and explanatory content that should be split into separate files. The monolithic structure undermines token efficiency and progressive disclosure, making it a poor fit for context-window-constrained usage.
Suggestions
Split the Pattern Catalog, FAQ, Decision Trees, and Playbooks into separate referenced files (e.g., patterns.md, faq.md, decision-trees.md, playbooks.md) and keep SKILL.md as a concise overview with links.
Remove explanations of concepts Claude already knows: what Mermaid diagram types exist, what XML doc comments are, basic Hugo/markdown syntax. Instead, provide only project-specific conventions and constraints.
Cut the motivational closing, 'Continuous Improvement' section, 'Integration with Other Skills' section, and the 'Primary Responsibilities' / 'Core Competencies' headers—these are descriptive rather than instructive.
Provide the referenced bundle files (scripts like link-validator.fsx, example-freshness.fsx) or remove references to them to avoid broken expectations.
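The split described in the first suggestion could look like the following shell sketch. The directory layout and file names (`references/patterns.md`, etc.) are illustrative assumptions, not a structure mandated by Tessl or Claude:

```shell
#!/bin/sh
# Hypothetical restructuring of a monolithic SKILL.md into a bundle
# with referenced files. All names below are illustrative.
set -e

mkdir -p technical-writer/references

# Keep SKILL.md as a concise overview that links out instead of inlining.
cat > technical-writer/SKILL.md <<'EOF'
# technical-writer

Documentation skill for morphir-dotnet (Hugo/Docsy, Mermaid/PlantUML).

- Pattern catalog: references/patterns.md
- FAQ: references/faq.md
- Decision trees: references/decision-trees.md
- Playbooks: references/playbooks.md
EOF

# Stub out the split reference files to receive the extracted content.
for f in patterns faq decision-trees playbooks; do
  printf '# %s\n' "$f" > "technical-writer/references/$f.md"
done
```

With this shape, the agent loads the short SKILL.md by default and only pulls in a reference file when the task calls for it, which is the progressive-disclosure behavior the review is asking for.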
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Explains concepts Claude already knows (what Mermaid diagram types are, what Hugo is, basic markdown syntax, what XML doc comments look like). The 'Brand Identity' section, motivational closing quote, integration descriptions, and extensive FAQ sections add significant bloat. Much of this is reference material that Claude can derive from context. | 1 / 3 |
| Actionability | Provides fully executable bash commands, complete code examples (Mermaid, XML docs, Hugo config, markdown), concrete patterns with copy-paste ready templates, and specific file paths. The playbooks contain step-by-step instructions with actual commands to run. | 3 / 3 |
| Workflow Clarity | Playbooks are well-sequenced with numbered steps, validation checkpoints (e.g., 'Validate: Test all code examples, run link checker, preview in Hugo server'), checklists, and decision trees for troubleshooting. The Hugo troubleshooting playbook includes explicit error capture, fix, and rebuild-verify steps. | 3 / 3 |
| Progressive Disclosure | Monolithic wall of text with everything inlined into a single massive file. References automation scripts in `.claude/skills/technical-writer/scripts/`, but no bundle files are provided. The pattern catalog, FAQ, decision trees, playbooks, and reference material should be split into separate files with clear navigation from a concise overview. No bundle structure supports this content. | 1 / 3 |
| Total | | 8 / 12 Passed |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1000 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |