Technical research methodology with YAGNI/KISS/DRY principles. Phases: scope definition, information gathering, analysis, synthesis, recommendation. Capabilities: technology evaluation, architecture analysis, best practices research, trade-off assessment, solution design. Actions: research, analyze, evaluate, compare, recommend technical solutions. Keywords: research, technology evaluation, best practices, architecture analysis, trade-offs, scalability, security, maintainability, YAGNI, KISS, DRY, technical analysis, solution design, competitive analysis, feasibility study. Use when: researching technologies, evaluating architectures, analyzing best practices, comparing solutions, assessing technical trade-offs, planning scalable/secure systems.
Overall score: 72

- Quality: 63% ("Does it follow best practices?")
- Impact: 80% (2.05x average score across 3 eval scenarios)
- Advisory: Suggest reviewing before use

To optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./skills/data/0-research/SKILL.md
```

## Quality

### Discovery: 92%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description that clearly communicates capabilities, phases, and trigger conditions. It excels at listing concrete actions and providing comprehensive trigger terms. The main weakness is its broad scope, which could cause overlap with more specialized skills covering architecture, security, or solution design individually.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: technology evaluation, architecture analysis, best practices research, trade-off assessment, solution design, and explicit phases (scope definition, information gathering, analysis, synthesis, recommendation). | 3 / 3 |
| Completeness | Clearly answers both 'what' (technical research methodology with specific phases and capabilities) and 'when' (explicit 'Use when' clause covering researching technologies, evaluating architectures, analyzing best practices, comparing solutions, assessing trade-offs, planning systems). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural keywords users would say: 'research', 'technology evaluation', 'best practices', 'architecture analysis', 'trade-offs', 'scalability', 'security', 'feasibility study', 'competitive analysis'. These are terms users would naturally use when requesting technical research. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it targets technical research specifically, the broad scope covering architecture analysis, solution design, best practices, and security could overlap with architecture-specific skills, security analysis skills, or general coding/design skills. Terms like 'solution design' and 'architecture analysis' are broad and could trigger conflicts. | 2 / 3 |
| Total | | 11 / 12 Passed |
### Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a structured research methodology with some useful concrete details (the gemini command, file naming conventions, research call limits) but is significantly overlong, explaining many concepts Claude already understands. The large inline report template inflates the token cost and should be split into a referenced file. The workflow phases are reasonable but lack validation checkpoints and feedback loops.
### Suggestions

- Cut the Quality Standards and Special Considerations sections entirely; they restate general research principles Claude already knows. Fold any truly unique constraints (such as checking CVEs for security topics) into the relevant phase.
- Extract the report template into a separate REPORT_TEMPLATE.md file and reference it with a single link, reducing the main skill by roughly 50 lines.
- Add explicit validation checkpoints between phases, e.g., 'Before Phase 3: verify you have ≥2 independent sources per key claim; if not, perform additional searches within the 5-call limit.'
- Remove motivational or philosophical statements such as 'You are not just collecting information, but providing strategic technical intelligence'; they consume tokens without changing behavior.
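The checkpoint in the third suggestion is concrete enough to express as logic. A minimal Python sketch, assuming a hypothetical claim-to-sources mapping; the two-source threshold and the 5-call limit come from the suggestion itself, everything else is illustrative:

```python
def ready_for_synthesis(sources_per_claim, calls_used, call_limit=5):
    """Gate before Phase 3: each key claim needs >=2 independent sources.

    Returns (proceed, under_supported_claims).
    """
    # Claims backed by fewer than two distinct sources are under-supported.
    weak = [claim for claim, sources in sources_per_claim.items()
            if len(set(sources)) < 2]
    if not weak:
        return True, []            # all claims supported: proceed to synthesis
    if calls_used < call_limit:
        return False, weak         # budget remains: search again for these claims
    return True, weak              # budget exhausted: flag the gaps and proceed
```

A skill author could call this between the gathering and synthesis phases and loop back to additional searches only while the call budget allows it.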
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is highly verbose, explaining general research methodology concepts Claude already knows (how to analyze sources, what cross-referencing means, what accuracy/currency/completeness mean). The report template alone is ~50 lines of boilerplate. Phrases like 'You are not just collecting information, but providing strategic technical intelligence' are motivational padding. The quality standards section restates obvious research principles. | 1 / 3 |
| Actionability | There are some concrete, actionable elements: the gemini command syntax, the file path convention for reports, the 5-research-call limit, and the report template structure. However, much of the content is abstract guidance ('Identify common patterns', 'Evaluate pros and cons') rather than executable instructions. No concrete code examples are provided for the actual research workflow. | 2 / 3 |
| Workflow Clarity | The four phases provide a clear sequence, and there are some useful specifics (check for the gemini command, fall back to WebSearch, save to a specific path). However, validation checkpoints are largely missing: there is no explicit step to verify research quality before moving to synthesis, no feedback loop for insufficient findings, and the transition between phases is implicit rather than gated by conditions. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for the lengthy report template or detailed methodology. The report template (~50 lines) could easily be a separate REPORT_TEMPLATE.md file. There is one reference to a 'docs-seeker' skill but otherwise no navigation structure or content splitting. | 2 / 3 |
| Total | | 7 / 12 Passed |
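The gemini-then-WebSearch fallback noted under Workflow Clarity can also be stated explicitly rather than left implicit. A minimal Python sketch, where the `-p` prompt flag and the `web_search` callable are assumptions for illustration, not details confirmed by the skill:

```python
import shutil
import subprocess

def run_research_query(query, web_search, have_gemini=None):
    """Use the gemini CLI when available; otherwise fall back to WebSearch."""
    if have_gemini is None:
        # Detect the CLI on PATH instead of assuming it is installed.
        have_gemini = shutil.which("gemini") is not None
    if have_gemini:
        # Invocation flags are an assumption; adjust to the installed CLI.
        result = subprocess.run(["gemini", "-p", query],
                                capture_output=True, text=True)
        return result.stdout
    return web_search(query)  # fallback path when gemini is absent
```

Making the branch explicit like this is exactly the kind of gated transition the review says the skill's phases currently lack.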
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed: validation for skill structure reported no warnings or errors.