Systematic process for building comprehensive Claude Code skills using parallel research agents. Triggers on "research for skill", "build skill from docs", "create comprehensive skill", or when needing to gather extensive documentation from official sources before skill creation.
Systematic, scalable approach for building comprehensive Claude Code skills using parallel research agents. Use this when a skill requires extensive documentation gathering from official sources.
Stage 1: Initialize → Categorization agent creates TODO checklist
↓
Gate 1: Verify categories are distinct and complete
↓
Stage 2: Research → Parallel agents populate references/{category}/
↓
Gate 2: Anti-hallucination checkpoint (verify all claims cited)
↓
Stage 3: Integrate → Update SKILL.md, validate structure
↓
Gate 3: Final validation (links work, quality standards met)

Activate skill-creator for structure guidance:
Skill(skill: "plugin-creator:skill-creator")
Read CLAUDE.md for verification requirements.
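The stage-and-gate flow in the diagram above can be pictured as a simple loop. This is a hedged sketch only: the stage and gate functions here are placeholders, not the real agent or validation calls.

```python
# Sketch of the stage/gate pipeline: each gate must pass before the next
# stage runs, and a failed gate stops the flow for repair.

def run_pipeline(stages):
    """stages: ordered list of (stage_fn, gate_fn) pairs."""
    for stage, gate in stages:
        stage()
        if not gate():
            return False  # stop at the first failed gate and fix it
    return True

log = []
pipeline = [
    (lambda: log.append("initialize"), lambda: True),  # Gate 1: categories distinct
    (lambda: log.append("research"),   lambda: True),  # Gate 2: citations verified
    (lambda: log.append("integrate"),  lambda: True),  # Gate 3: links and quality
]
```

A gate that returns False halts the run before the later stages execute, which is exactly the checkpoint behavior the diagram calls for.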
Objective: Create base skill directory and identify documentation categories.
Initialize skill directory:
plugins/plugin-creator/skills/skill-creator/scripts/init_skill.py <skill-name> --path <output-directory>
Launch categorization agent - see Agent Prompts.
Output: {skill-name}.TODO.md with categorized checklist
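As an illustration, the generated checklist might look like this; the category names and items are hypothetical, not prescribed by the workflow:

```markdown
# {skill-name} Research TODO

## Category: Core Concepts
- [ ] Gather official overview docs
- [ ] Record key terminology with citations

## Category: API Reference
- [ ] Capture function/endpoint signatures verbatim
- [ ] Note error handling and limits
```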
Before proceeding, verify Gate 1: categories are distinct and complete.
If categories overlap: merge them or redefine their boundaries before Stage 2.
Objective: Launch concurrent research agents to build reference documentation.
Read {skill-name}.TODO.md for the category checklist.
Launch one research agent per category with run_in_background: true.
Each agent writes its findings to ./references/{category}/.
See Research Agent Prompt for the template.
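The fan-out/fan-in pattern described above, one background worker per category writing into references/{category}/, can be sketched in plain Python. This is an analogy only; the real workflow uses Task calls, and the category names here are illustrative.

```python
import concurrent.futures
import pathlib
import tempfile

def research_category(root, category):
    """Stand-in for one research agent: writes its category's findings."""
    out_dir = root / "references" / category
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "index.md").write_text(f"# {category}\n")
    return category

root = pathlib.Path(tempfile.mkdtemp())
categories = ["core-concepts", "api-reference", "configuration"]

# All workers start together, mirroring "launch all agents in a single message".
with concurrent.futures.ThreadPoolExecutor() as pool:
    done = list(pool.map(lambda c: research_category(root, c), categories))
```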
Launch all agents in a single message with multiple Task calls:
Agent(subagent_type: "general-purpose", description: "Research Category A", run_in_background: true, ...)
Agent(subagent_type: "general-purpose", description: "Research Category B", run_in_background: true, ...)

Gate 2 is MANDATORY before Stage 3. For each category, verify:
Citation Format Required:
According to the official documentation (https://example.com/docs, accessed 2026-02-01), ...

If a citation is missing: the research agent must add a source or mark the claim as "NOT_VERIFIED: [claim]".
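Gate 2 can be partly automated. This is a minimal sketch, assuming claims live one per line and follow the citation format above; the helper name and sample text are hypothetical.

```python
import re

# A citation is a URL plus an access date, per the required format.
CITATION = re.compile(r"\(https?://\S+,\s*accessed\s+\d{4}-\d{2}-\d{2}\)")

def check_claims(text):
    """Return lines carrying neither a citation nor a NOT_VERIFIED marker."""
    flagged = []
    for line in text.splitlines():
        if not line.strip():
            continue
        if CITATION.search(line) or "NOT_VERIFIED:" in line:
            continue
        flagged.append(line)
    return flagged

doc = (
    "According to the official documentation "
    "(https://example.com/docs, accessed 2026-02-01), retries default to 3.\n"
    "NOT_VERIFIED: the timeout may be configurable.\n"
    "Responses are always JSON.\n"
)
```

Anything the checker flags either needs a source added or an explicit NOT_VERIFIED marker before Stage 3 can begin.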
Objective: Update SKILL.md with category links and finalize.
Update ./SKILL.md with links to each category's index.md.
Run validation:
plugins/plugin-creator/skills/skill-creator/scripts/package_skill.py <skill-path>
Verify:
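One Gate 3 check, that links resolve and use ./ relative paths, can be sketched with a simple audit. A minimal illustration; the regex only covers inline markdown links, and the sample document is hypothetical.

```python
import re

LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)")  # inline markdown links only

def audit_links(markdown):
    """Return link targets that are not ./-relative."""
    return [t for t in LINK.findall(markdown) if not t.startswith("./")]

doc = (
    "[Core Concepts](./references/core-concepts/index.md)\n"
    "[API](references/api/index.md)\n"
    "[Docs](https://example.com/docs)\n"
)
```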
Fallback strategy when MCP tools are not available:
GitHub CLI (gh): for repository metadata, issues, releases.
If an agent fails or times out:
If official docs are incomplete:
| Tool | Fidelity | Use When |
|---|---|---|
| WebFetch | Low | Scoping only. NEVER for implementation details |
| mcp__exa__* | Medium | Code snippets, documentation extraction |
| mcp__Ref__* | High | Authoritative, verbatim documentation |
See MCP Tool Usage Guide for details.
| Principle | Rule |
|---|---|
| Progressive Disclosure | SKILL.md ≤5k words; details in references/ |
| Parallel Execution | Launch all category agents in single message |
| Citation Required | Every claim needs source + access date |
| No Training Data | Only document what sources confirm |
| Relative Paths | All links use ./ prefix |
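The Progressive Disclosure budget is easy to enforce mechanically. A sketch, assuming the 5,000-word ceiling from the table; the helper and file paths are illustrative.

```python
import pathlib
import tempfile

WORD_BUDGET = 5000  # Progressive Disclosure: SKILL.md stays at or under this

def within_budget(path, budget=WORD_BUDGET):
    """Return (ok, word_count) for the file at path."""
    count = len(path.read_text().split())
    return count <= budget, count

skill_md = pathlib.Path(tempfile.mkdtemp()) / "SKILL.md"
skill_md.write_text("word " * 120)  # small stand-in file
```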
Before finalizing:
All links use ./ relative paths.
Each category has an index.md with working links.

When Stage 2 (category research) involves 3+ independent categories where findings from one category inform or challenge another, consider agent teams instead of sequential subagents.
A category research workflow is a candidate for agent teams when ALL of these are true:
A category research workflow is NOT a candidate for agent teams when:
See Agent Teams Documentation for complete criteria, architecture, and usage patterns.
SOURCE: Lines 27-39 of agent-teams.md (accessed 2026-02-06)