Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision docs, or similar structured content. This workflow helps users efficiently transfer context, refine content through iteration, and verify the doc works for readers. Trigger when user mentions writing docs, creating proposals, drafting specs, or similar documentation tasks.
Install with the Tessl CLI:

`npx tessl i github:boisenoise/skills-collections --skill doc-coauthoring87`
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`

Evaluation — 91%
↑ 1.59x agent success when using this skill
Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with excellent trigger term coverage and completeness, explicitly stating both what the skill does and when to use it. The main weakness is the somewhat abstract description of capabilities (workflow-based language rather than concrete actions) and moderate overlap risk with other writing/documentation skills.
Suggestions
Replace abstract workflow language ('transfer context', 'refine content through iteration') with concrete actions like 'create outlines', 'structure sections', 'add examples', 'format for readability'
Add more distinctive elements to reduce conflict risk, such as specifying the collaborative/iterative nature more concretely or listing specific document structures supported
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (documentation) and mentions some actions ('transfer context', 'refine content through iteration', 'verify the doc works'), but these are somewhat abstract rather than concrete specific actions like 'create outlines', 'format sections', or 'add code examples'. | 2 / 3 |
| Completeness | Clearly answers both what ('structured workflow for co-authoring documentation', 'transfer context, refine content, verify doc works') AND when ('Use when user wants to write documentation, proposals, technical specs' plus explicit 'Trigger when' clause with specific scenarios). | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'documentation', 'proposals', 'technical specs', 'decision docs', 'writing docs', 'creating proposals', 'drafting specs'. These are terms users would naturally use when requesting help with documentation tasks. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it specifies documentation types (proposals, specs, decision docs), the term 'structured content' is broad and could overlap with other writing or content creation skills. The workflow-based approach provides some distinction but 'documentation' is a common domain. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill for document co-authoring with clear workflow stages and validation checkpoints. The main weakness is verbosity: the skill could be more concise by removing redundant explanations and potentially splitting detailed procedures into separate reference files. The Reader Testing stage with sub-agent validation is a strong quality assurance mechanism.
Suggestions
Reduce redundancy by consolidating repeated instructions (e.g., 'inform them they can answer in shorthand' appears multiple times)
Consider extracting detailed sub-procedures (artifact management, integration handling, sub-agent testing) into separate reference files to improve progressive disclosure
Tighten phrasing throughout - many instructions could be shortened without losing clarity (e.g., 'Ask if they want to try this workflow or prefer to work freeform' → 'Offer workflow or freeform approach')
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some redundant explanations and could be tightened. Phrases like 'Inform them they can answer in shorthand' and repeated explanations of the same concepts across stages add unnecessary tokens. | 2 / 3 |
| Actionability | The skill provides highly concrete, step-by-step guidance with specific actions at each stage. Instructions like 'Generate 5-10 numbered questions' and 'Use str_replace to make edits' are directly executable. | 3 / 3 |
| Workflow Clarity | The three-stage workflow is clearly sequenced with explicit exit conditions, transition points, and validation steps (Reader Testing stage). Each stage has numbered steps and clear checkpoints for when to proceed. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and stages, but it is a monolithic document that could benefit from splitting detailed sub-procedures (like artifact management or integration handling) into separate reference files. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.