# doc-engineer60

Comprehensive document creation, editing, and improvement skill for all Markdown-based documentation (technical specs, requirements, ADR, RFC, README, coding rules, articles). Handles complete end-to-end workflows from initial draft to publication-ready documents.
## Install with Tessl CLI

```shell
npx tessl i github:sc30gsw/claude-code-customes --skill doc-engineer60
```
## Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
## Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies its domain (Markdown documentation) and lists relevant document types, providing moderate specificity. However, it lacks concrete action verbs and natural trigger terms users would say, and it is critically missing any 'Use when...' guidance that would help Claude know when to select this skill over others.
### Suggestions

- Add an explicit 'Use when...' clause with trigger scenarios like 'Use when the user asks to write, draft, or improve documentation, READMEs, technical specs, or any .md files'
- Replace vague phrases like 'handles complete end-to-end workflows' with specific actions such as 'generates outlines, structures sections, formats code blocks, creates tables, adds cross-references'
- Include natural language variations users would say: 'docs', 'write a doc', 'document this code', 'md file', '.md', 'markdown'
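Taken together, the suggestions above might produce a description like the following sketch. The wording and field names are illustrative, not the skill's actual frontmatter:

```yaml
# Hypothetical revised SKILL.md frontmatter (illustrative only)
name: doc-engineer60
description: >
  Create, edit, and improve Markdown documentation: generate outlines,
  structure sections, format code blocks, create tables, and add
  cross-references for technical specs, requirements, ADRs, RFCs, READMEs,
  coding rules, and articles. Use when the user asks to write, draft, or
  improve docs, documentation, a README, a technical spec, or any
  .md/markdown file.
```

A description in this shape names concrete actions, carries a 'Use when...' clause, and includes the natural trigger terms ('docs', '.md', 'markdown') that the review flags as missing.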
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Markdown documentation) and lists document types (technical specs, requirements, ADR, RFC, README, coding rules, articles), but actions are vague ('creation, editing, and improvement', 'handles complete end-to-end workflows') rather than concrete specific actions like 'generate outlines, format tables, add code blocks'. | 2 / 3 |
| Completeness | Describes what it does (document creation/editing for Markdown docs) but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance should cap completeness at 2, and this has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords users might say (README, RFC, ADR, technical specs, Markdown) but missing common variations like 'docs', 'documentation', 'write a doc', 'md file', '.md', or action-oriented terms like 'draft', 'write', 'document this'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to Markdown documentation, but 'document creation, editing, and improvement' is broad enough to overlap with general writing skills, code documentation tools, or other content creation skills. The document type list helps but isn't sufficient for clear distinction. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
## Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides comprehensive document-engineering guidance with excellent workflow clarity and validation checkpoints. However, it is moderately verbose and references scripts that appear to be hypothetical rather than actual executable tools. The structure is good, but the content could benefit from better progressive disclosure to separate reference material from the core skill.
### Suggestions

- Move detailed CLI examples and JSON output schemas to a separate REFERENCE.md file, keeping only quick-start examples in SKILL.md
- Clarify whether the referenced Python scripts actually exist or are conceptual; if conceptual, provide the actual implementation or remove the specific command references
- Consolidate the repeated document type listings (appears in Overview, Templates Available, and examples) into a single reference table
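The third suggestion could take a shape like the sketch below: one table that each section links to instead of repeating the list. The template paths are hypothetical, assumed only for illustration:

```markdown
<!-- Hypothetical consolidated reference table; template paths are assumed -->
| Document type  | Typical use                      | Template               |
|----------------|----------------------------------|------------------------|
| Technical spec | System and feature design        | templates/spec.md      |
| Requirements   | Feature and acceptance criteria  | templates/reqs.md      |
| ADR            | Architecture decision records    | templates/adr.md       |
| RFC            | Proposals for discussion         | templates/rfc.md       |
| README         | Project overview and onboarding  | templates/readme.md    |
| Coding rules   | Style and convention guides      | templates/rules.md     |
| Article        | Long-form technical writing      | templates/article.md   |
```

Keeping this table in one place (for example, REFERENCE.md) and linking to it would address both the duplication and the progressive-disclosure findings at once.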
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately verbose with some unnecessary explanation (e.g., listing all document types multiple times, extensive ASCII diagrams). The workflow decision tree and examples are useful but could be more compact; some sections repeat information that could be consolidated. | 2 / 3 |
| Actionability | Provides concrete CLI commands and JSON output examples, but references scripts (template_generator.py, doc_analyzer.py, etc.) that may not exist or aren't provided. The commands look executable but are pseudocode-like since the actual scripts aren't included or verified to exist. | 2 / 3 |
| Workflow Clarity | Excellent workflow structure with clear decision tree, numbered phases, and explicit validation steps (Step 4 in each workflow branch validates changes). The improvement workflow includes feedback loops (re-run analyzer to confirm quality increase, check no new issues introduced). | 3 / 3 |
| Progressive Disclosure | References external files (templates/, scripts/, quality-rules/) appropriately, but the main SKILL.md is quite long with inline content that could be split. The file structure summary helps navigation, but detailed examples and the full workflow tree could be in separate reference files. | 2 / 3 |
| **Total** | | **9 / 12 — Passed** |
## Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

### Validation for skill structure — 11 / 11 Passed

No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.