Comprehensive document creation, editing, and improvement skill for all Markdown-based documentation (technical specs, requirements, ADR, RFC, README, coding rules, articles). Handles complete end-to-end workflows from initial draft to publication-ready documents.
Quality: 42% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Validation: Passed (No known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./.claude/skills/doc-engineer/SKILL.md`

Quality
Discovery
50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (Markdown documentation) and enumerates relevant document types, which is helpful for skill selection. However, it lacks an explicit 'Use when...' clause, relies on somewhat vague action verbs ('creation, editing, improvement'), and could conflict with other writing or documentation skills due to its broad scope. Adding concrete actions and explicit trigger guidance would significantly improve its effectiveness.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create, edit, or improve Markdown documents such as READMEs, RFCs, ADRs, technical specs, or coding guidelines.' (A sketch of a revised description follows this list.)
- Replace vague actions like 'handles complete end-to-end workflows' with specific concrete actions such as 'generates document structure, writes section content, adds formatting, inserts code blocks, and produces publication-ready output.'
- Include common user-facing trigger terms like '.md files', 'docs', 'write documentation', 'draft a README', or 'markdown formatting' to improve keyword coverage.
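To make these suggestions concrete, here is a minimal sketch of how the revised SKILL.md frontmatter could read, assuming the usual `name`/`description` fields; the exact wording is illustrative, not the maintainer's text:

```markdown
---
name: doc-engineer
# Illustrative rewrite only; adjust the wording to the skill's actual scope.
description: >
  Creates, edits, and improves Markdown documentation: generates document
  structure, writes section content, formats code blocks and tables, and
  produces publication-ready output. Use when the user asks to write, draft,
  or improve .md files, docs, READMEs, RFCs, ADRs, technical specs, or coding
  guidelines, or mentions markdown formatting.
---
```

Folding the trigger terms directly into the description keeps keyword coverage high for discovery without lengthening the body of the skill.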
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Markdown-based documentation) and lists document types (technical specs, requirements, ADR, RFC, README, coding rules, articles), but the actions are vague — 'creation, editing, and improvement' and 'end-to-end workflows' are broad rather than concrete specific actions like 'generate table of contents' or 'format code blocks'. | 2 / 3 |
| Completeness | The 'what' is addressed (document creation, editing, improvement for Markdown docs), but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. The 'when' is only implied by the document types listed. | 2 / 3 |
| Trigger Term Quality | Includes several useful trigger terms like 'Markdown', 'technical specs', 'README', 'ADR', 'RFC', 'coding rules', and 'articles', but misses common user phrasings like '.md files', 'docs', 'documentation', 'write a doc', 'draft a spec', or 'markdown formatting'. The acronyms ADR and RFC may not match how all users phrase requests. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on Markdown-based documentation with specific document types (ADR, RFC, technical specs) provides some distinctiveness, but 'document creation, editing, and improvement' is broad enough to overlap with general writing skills, code documentation skills, or other document-handling skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is significantly over-engineered and verbose for what it delivers. It spends most of its token budget describing tooling interfaces and output formats rather than providing actionable document engineering guidance. The core value proposition—teaching Claude how to create and improve documents—is buried under layers of CLI documentation for scripts whose existence is uncertain.
Suggestions
- Reduce content by 60-70%: Remove explanations of document types Claude already knows, eliminate the detailed JSON output examples, and cut the improvement priority matrix. Focus on what's unique and non-obvious.
- Make the skill actionable for Claude's actual capabilities: Instead of referencing external Python scripts that may not exist, provide inline guidance on how Claude should evaluate and improve documents (e.g., specific quality heuristics, section templates as actual markdown).
- Add explicit error recovery loops: For each workflow path, specify what to do when validation fails, including retry limits and fallback strategies.
- Split into overview + reference files: Move the detailed CLI documentation, quality metrics definitions, and example outputs into separate referenced files, keeping SKILL.md as a concise overview with quick-start guidance (see the sketch after this list).
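As a rough illustration of the split and the recovery loop, the trimmed SKILL.md could keep only the workflow skeleton and link out to reference material. The file names and the three-pass limit below are assumptions for the sketch, not paths or values taken from the existing skill:

```markdown
## IMPROVE workflow (excerpt)

<!-- Illustrative sketch; file names and the three-pass limit are assumptions. -->
1. Analyze the document and list issues in priority order.
2. Apply the highest-priority fixes directly in the Markdown.
3. Re-run the quality checks (heuristics live in [quality-rules/](quality-rules/)).
   - If the score is still below 80, fix the next-highest-priority issue and re-check.
   - Stop after three passes and report the remaining issues instead of looping.

Section templates: [templates/](templates/). Metric definitions and example
outputs: [reference/quality-metrics.md](reference/quality-metrics.md).
```

An explicit stop condition like this addresses the workflow-clarity gap noted in the table below without adding much length to the overview file.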
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300+ lines. Extensively explains concepts Claude already knows (what ADRs are, what READMEs are, what quality metrics mean). The improvement priority matrix, quality metrics explanations, and detailed JSON output examples are largely unnecessary padding. The document repeats similar information across sections (e.g., the workflow decision tree duplicates what's in the Core Capabilities section). | 1 / 3 |
| Actionability | Provides concrete CLI commands and example workflows, but references scripts (template_generator.py, doc_analyzer.py, doc_improver.py, doc_validator.py) whose existence has not been verified. The examples mix bash and python comments in a way that's more illustrative than executable. The actual document engineering guidance (how to write good sections, what content to include) is abstract rather than concrete. | 2 / 3 |
| Workflow Clarity | The workflow decision tree provides clear sequencing for CREATE/EDIT/IMPROVE paths with validation steps mentioned. However, the validation steps lack explicit error recovery loops: for example, what happens when the quality score doesn't reach 80 after improvements? The 'Step 4: Validate Changes' sections mention validation but don't specify what to do on failure beyond the IMPROVE path's brief 'Check: No new issues introduced'. | 2 / 3 |
| Progressive Disclosure | References external files (quality-rules/, templates/, scripts/) and mentions a file structure, but the SKILL.md itself is a monolithic wall containing extensive inline content that could be split into separate reference files. The quality rules section, detailed JSON output examples, and dependency information would be better as linked references than inline content. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.