Research-before-coding workflow. Search for existing tools, libraries, and patterns before writing custom code. Invokes the researcher agent.
Overall score: 59%
Does it follow best practices?
Impact: Pending (no eval scenarios have been run)
Advisory: suggest reviewing before use
Quality
Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description communicates the core concept of a research-first coding workflow but lacks explicit trigger guidance ('Use when...') and comprehensive natural language trigger terms. It's moderately specific but could be much stronger with concrete examples of when to invoke this skill and more user-facing language variations.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to build something new, wants to find existing libraries or packages, or before writing custom implementations.'
- Include more natural trigger terms users would say, such as 'find a library', 'existing package', 'npm/pip/crate for', 'look up solutions', 'avoid reinventing the wheel'.
- List more specific concrete actions beyond searching, e.g., 'Evaluates library options, compares alternatives, checks maintenance status and popularity before recommending an approach.'
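A description revised along these lines might read as follows. This is a hypothetical sketch: the skill name and exact phrasing are illustrative, not taken from the skill under review.

```yaml
# Hypothetical frontmatter sketch; name and wording are illustrative only.
name: research-before-coding
description: >-
  Research-first coding workflow: searches for existing tools, libraries,
  and patterns, evaluates alternatives, and checks maintenance status and
  popularity before recommending custom code. Use when the user asks to
  build something new, wants to find a library or existing package
  (npm/pip/crate), or says things like "look up solutions" or "don't
  reinvent the wheel".
```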
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (research-before-coding) and some actions ('search for existing tools, libraries, and patterns'), but doesn't list multiple concrete specific actions beyond searching. 'Invokes the researcher agent' is an implementation detail rather than a user-facing capability. | 2 / 3 |
| Completeness | The 'what' is partially addressed (search for existing tools/libraries/patterns before coding), but there is no explicit 'Use when...' clause. The 'when' is only implied by the workflow name rather than explicitly stated with trigger conditions. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'tools', 'libraries', 'patterns', and 'research', but misses common user phrases like 'find a library for', 'is there an existing package', 'look up', 'check if there's already a solution', or 'don't reinvent the wheel'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The research-before-coding concept is somewhat distinctive, and mentioning the 'researcher agent' helps differentiate it. However, 'search for existing tools, libraries, and patterns' could overlap with general coding assistance or documentation lookup skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
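The suggestion to 'evaluate library options, compare alternatives, check maintenance status and popularity' could be made concrete with a small scoring rubric. A minimal sketch, assuming star count, last-release date, and open-issue count as the signals; all names and numbers below are made up for illustration:

```python
from datetime import date

# Hypothetical rubric: weigh popularity, release freshness, and issue load.
def score_candidate(stars: int, last_release: date, open_issues: int,
                    today: date = date(2025, 1, 1)) -> float:
    """Higher is better; a fixed `today` keeps the example deterministic."""
    months_stale = ((today.year - last_release.year) * 12
                    + (today.month - last_release.month))
    popularity = min(stars / 1000, 10)     # cap so stars don't dominate
    freshness = max(0, 12 - months_stale)  # favors releases within a year
    issue_penalty = min(open_issues / 100, 5)
    return popularity + freshness - issue_penalty

# Illustrative data, not real packages.
candidates = {
    "lib-a": score_candidate(4200, date(2024, 11, 1), 150),
    "lib-b": score_candidate(900, date(2022, 3, 1), 40),
}
best = max(candidates, key=candidates.get)
print(best)  # the better-maintained, more popular candidate
```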
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a solid conceptual framework for research-before-coding workflows with good structure and useful examples. However, it suffers from including information Claude already knows (common tool names), lacks validation/verification steps in the workflow, and the core execution mechanism (Task subagent) is not clearly executable. The content would benefit from being more concise and adding explicit verification checkpoints.
Suggestions
- Remove or drastically reduce the 'Search Shortcuts by Category' section; Claude already knows these tools. Instead, focus on the search strategy and evaluation criteria.
- Add a validation checkpoint after step 4 (DECIDE) or step 5 (IMPLEMENT), such as 'Verify: run tests to confirm the package integrates correctly before proceeding' to create a feedback loop.
- Clarify the Task() subagent syntax: is this a real API? Link to documentation or provide the actual mechanism for launching the researcher agent.
- Move 'Integration Points' and 'Search Shortcuts' to separate reference files and link to them from the main skill to improve progressive disclosure.
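The missing feedback loop could be expressed as ordinary, executable code rather than the skill's Task() pseudo-API. A minimal sketch, assuming a command-line test runner; the placeholder command is not something the skill defines:

```python
import subprocess
import sys

# Hedged sketch of a post-install verification checkpoint.
def verify_candidate(cmd: list) -> bool:
    """Run the project's test suite; a failure sends us back to DECIDE."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0

# Stand-in for a real runner such as ["pytest", "tests/"].
if verify_candidate([sys.executable, "-c", "pass"]):
    print("candidate verified; proceed with IMPLEMENT")
else:
    print("candidate failed integration; return to DECIDE")
```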
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately efficient but includes some unnecessary content. The ASCII art workflow diagram is visually nice but duplicates information found in the decision matrix and 'How to Use' sections. The 'Search Shortcuts by Category' section lists well-known tools Claude already knows about, adding token cost without much value. | 2 / 3 |
| Actionability | The skill provides a structured workflow and concrete examples, but the core mechanism (launching a 'researcher agent' via a Task() call) uses a pseudo-API that isn't standard or executable. The quick mode is essentially 'mentally run through' a checklist, which is vague. The examples are illustrative but not copy-paste executable. | 2 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced and the decision matrix provides good guidance on outcomes. However, there are no validation checkpoints: no step to verify the chosen package actually works, no feedback loop for when a candidate fails integration testing, and no explicit 'verify before committing' step after installation. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and sections, but it's quite long for a single file with no references to external documents. The 'Search Shortcuts by Category' and 'Integration Points' sections could be split into separate reference files. The integration points with other agents are mentioned but not linked to their respective skill files. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 90% (10 / 11 passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |