
doc-coauthoring

Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision docs, or similar structured content. This workflow helps users efficiently transfer context, refine content through iteration, and verify the doc works for readers. Trigger when user mentions writing docs, creating proposals, drafting specs, or similar documentation tasks.

Score: 81

Quality: 61% (Does it follow best practices?)

Impact: 90% (1.60x, average score across 7 eval scenarios)

Security by Snyk: Advisory (suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/doc-coauthoring/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description that excels in completeness and trigger term coverage, with explicit 'Use when' and 'Trigger when' clauses and natural language terms users would actually say. Its main weakness is that the specific capabilities described are somewhat abstract (e.g., 'transfer context', 'refine content through iteration') rather than concrete actions, and the broad scope could cause overlap with other writing-related skills.

Suggestions

Replace abstract phrases like 'transfer context' and 'refine content through iteration' with more concrete actions such as 'create outlines, draft sections, restructure content, add examples'.

Narrow the scope or add distinguishing details about what makes this a 'structured workflow' versus general writing assistance to reduce potential conflicts with other writing skills.
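Taken together, these suggestions point toward a tighter frontmatter description. A sketch of what that might look like, assuming the usual SKILL.md YAML frontmatter with name and description fields (the wording here is illustrative, not the repository's actual file):

```yaml
---
name: doc-coauthoring
description: >
  Structured three-stage workflow for co-authoring documentation:
  gather context with targeted questions, then create outlines, draft
  sections, restructure content, and add examples through iteration,
  then test the doc against reader questions. Use when the user wants
  to write documentation, proposals, technical specs, or decision docs.
---
```

Naming the concrete actions (outlines, drafting, restructuring, examples) addresses the specificity critique, while "structured three-stage workflow" helps distinguish it from general writing skills.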

Dimension scores:

Specificity (2 / 3): The description names the domain (documentation co-authoring) and mentions some actions like 'transfer context', 'refine content through iteration', and 'verify the doc works for readers', but these are somewhat abstract rather than concrete specific actions like 'create outlines, draft sections, add diagrams'.

Completeness (3 / 3): Clearly answers both 'what' (structured workflow for co-authoring documentation with context transfer, iterative refinement, and reader verification) and 'when' (explicit 'Use when' and 'Trigger when' clauses with specific trigger scenarios).

Trigger Term Quality (3 / 3): Good coverage of natural terms users would say: 'write documentation', 'proposals', 'technical specs', 'decision docs', 'writing docs', 'creating proposals', 'drafting specs', 'documentation tasks'. These are terms users would naturally use when requesting this kind of help.

Distinctiveness / Conflict Risk (2 / 3): While it specifies a structured workflow for documentation co-authoring, the broad scope covering 'documentation, proposals, technical specs, decision docs, or similar structured content' could overlap with general writing skills or individual document-type skills. The 'structured workflow' aspect provides some distinction but the category is still quite broad.

Total: 10 / 12 (Passed)

Implementation: 39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-thought-out collaborative documentation workflow with clear stages, transitions, and feedback loops. However, it suffers from significant verbosity—much of the content is meta-commentary about what to say to the user rather than lean instructions, and the entire workflow is crammed into a single monolithic file. The actionability is moderate: while the process is well-defined, concrete tool invocations and output examples are largely absent.

Suggestions

Reduce verbosity by 50%+: eliminate meta-instructions like 'Announce intention to...' and 'Inform them that...'. Instead, use direct imperatives or example dialogue snippets.

Split into multiple files: create separate files for each stage (e.g., STAGE1_CONTEXT.md, STAGE2_REFINEMENT.md, STAGE3_TESTING.md) and use SKILL.md as a concise overview with navigation links.

Add concrete tool invocation examples: show actual create_file and str_replace syntax with realistic document content rather than abstract descriptions like 'use the appropriate integration'.

Remove explanations Claude can infer: sections like 'Tips for Effective Guidance' contain generic advice (be direct, don't rush, give user agency) that Claude already knows how to do.
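The file-splitting and tool-invocation suggestions could be sketched as a slimmed-down SKILL.md that links out to stage files and shows a concrete edit call inline. The stage file names and the str_replace parameters below are hypothetical illustrations, not the repository's actual layout:

```markdown
# doc-coauthoring

Three-stage co-authoring workflow. Read each stage file before starting it:

1. Context transfer: STAGE1_CONTEXT.md
2. Iterative refinement: STAGE2_REFINEMENT.md
3. Reader testing: STAGE3_TESTING.md

When drafting a section, edit the file directly rather than describing
the edit, e.g.:

str_replace(path="docs/proposal.md",
            old_str="## Background\n\nTBD",
            new_str="## Background\n\n<drafted content>")
```

This keeps the entry point lean while deferring stage detail to the linked files, which is the progressive-disclosure pattern the review asks for.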

Dimension scores:

Conciseness (1 / 3): The skill is extremely verbose at ~300+ lines, with extensive procedural narration that Claude could infer. Phrases like 'Inform them they can answer in shorthand' and 'Announce work will begin on the [SECTION NAME] section' are meta-instructions about what to say rather than lean directives. Much of the content reads like a tutorial explaining the workflow to a human rather than efficient instructions for Claude.

Actionability (2 / 3): The skill provides a clear structured process with specific steps, numbered question counts (5-10, 5-20), and concrete examples of user shorthand ('1: yes, 2: see #channel'). However, it lacks executable code examples, specific tool invocation syntax, and concrete output templates. Instructions like 'use the appropriate integration' and 'invoke a sub-agent' are vague about actual implementation.

Workflow Clarity (3 / 3): The three-stage workflow is clearly sequenced with explicit transition conditions, exit conditions for each stage, and feedback loops (iterative refinement, reader testing that loops back to refinement). The quality checking step after 3 consecutive iterations and the near-completion review at 80% are thoughtful validation checkpoints. The workflow handles branching well (artifacts vs files, sub-agents vs manual testing).

Progressive Disclosure (1 / 3): The entire workflow is a monolithic wall of text in a single file with no references to supporting files. Given the length and complexity (300+ lines covering three major stages with multiple sub-steps each), this content would benefit significantly from being split into separate files for each stage, with the main SKILL.md serving as an overview with navigation links.

Total: 7 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: douglasvought/wiggle-skills (reviewed)
