skill-research-process

Systematic process for building comprehensive Claude Code skills using parallel research agents. Triggers on "research for skill", "build skill from docs", "create comprehensive skill", or when needing to gather extensive documentation from official sources before skill creation.

Score: 89

Quality: 87% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)


Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is strong in completeness and distinctiveness, with explicit trigger terms and a clear 'when to use' clause. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., spawning research agents, synthesizing findings, generating SKILL.md files). The trigger terms are well-chosen and natural.

Suggestions

Add more specific concrete actions to the 'what' portion, e.g., 'Spawns parallel research agents to gather documentation, synthesizes findings, and generates structured SKILL.md files'
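As an illustrative sketch only (the field names follow the common SKILL.md frontmatter convention, and the exact wording is hypothetical, not the skill author's), the revised description might look like this:

```yaml
---
name: skill-research-process
description: >
  Spawns parallel research agents to gather documentation from official
  sources, synthesizes their findings, and generates structured SKILL.md
  files. Triggers on "research for skill", "build skill from docs",
  "create comprehensive skill", or when extensive documentation must be
  gathered before skill creation.
---
```

Leading with concrete verbs (spawns, synthesizes, generates) addresses the specificity gap while keeping the existing trigger phrases intact.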

Dimension scores:

- Specificity (2/3): Names the domain ('building comprehensive Claude Code skills') and mentions 'parallel research agents' and 'gather extensive documentation from official sources', but doesn't list multiple concrete actions beyond the general process description. It's more process-oriented than action-specific.
- Completeness (3/3): Clearly answers both 'what' (systematic process for building comprehensive Claude Code skills using parallel research agents) and 'when' (explicit triggers listed with a 'Triggers on...' clause and a conditional 'when needing to gather extensive documentation').
- Trigger Term Quality (3/3): Includes explicit trigger phrases like 'research for skill', 'build skill from docs', and 'create comprehensive skill', which are natural terms a user would say. Also mentions 'gather extensive documentation from official sources' as a contextual trigger.
- Distinctiveness / Conflict Risk (3/3): A very specific niche combining skill creation, parallel research agents, and documentation gathering. The trigger terms are distinctive and unlikely to conflict with general coding or documentation skills.

Total: 11 / 12 (Passed)

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with excellent workflow clarity featuring three explicit quality gates with checklists, strong progressive disclosure through external references, and highly actionable guidance with concrete commands and templates. The main weakness is moderate verbosity—the success checklist duplicates quality gate content, and the Agent Team Alternative section could be condensed to a brief reference pointer rather than inline criteria.

Suggestions

Consolidate the Success Checklist with the three Quality Gates to eliminate redundancy—either remove the checklist or make it a brief summary pointing back to the gates.

Condense the Agent Team Alternative section to 2-3 sentences with a reference link, since the detailed criteria are better suited for the referenced agent-teams.md file.

Dimension scores:

- Conciseness (2/3): The skill is reasonably efficient but includes some redundancy: the success checklist largely repeats the quality gates, and the Agent Team Alternative section adds significant length for what could be a brief reference link. Some tables and principles restate what's already covered in the workflow.
- Actionability (3/3): Provides concrete, executable commands (init_skill.py, package_skill.py), specific agent launch syntax, citation format templates, and clear MCP tool selection guidance. The steps are specific enough to follow directly.
- Workflow Clarity (3/3): Excellent multi-stage workflow with three explicit quality gates, each containing verification checklists. Error recovery paths are clearly defined with specific fallback strategies. The ASCII diagram at the top provides a clear overview of the entire process with gate checkpoints.
- Progressive Disclosure (3/3): SKILL.md serves as a clear overview with well-signaled one-level-deep references to agent-prompts.md, mcp-tools.md, and gaps-analysis.md. Detailed agent prompt templates and MCP tool details are appropriately externalized while the main file stays navigable.

Total: 11 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

- frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them under metadata.

Total: 10 / 11 (Passed)
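The validation report does not name the offending key, so as a sketch of the kind of fix this warning points at (using a hypothetical unrecognized key, `author`, purely for illustration), unknown top-level keys can be nested under `metadata`:

```yaml
# Before: unrecognized top-level key triggers frontmatter_unknown_keys
---
name: skill-research-process
description: ...
author: Jamie-BitFlight   # hypothetical unknown key
---

# After: the extra key is moved under metadata
---
name: skill-research-process
description: ...
metadata:
  author: Jamie-BitFlight
---
```

Keys the validator doesn't recognize either get removed or relocated this way; the recognized fields stay at the top level.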

Repository: Jamie-BitFlight/claude_skills (Reviewed)
