Research technical solutions by searching the web, examining GitHub repos, and gathering evidence. Use when exploring implementation options or evaluating technologies.
Overall score: 79
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score: `npx tessl skill review --optimize ./path/to/skill`
## Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has good structure with an explicit 'Use when...' clause that clearly separates capabilities from triggers. However, the actions described are somewhat general ('searching', 'examining', 'gathering') rather than concrete operations, and the trigger terms could be expanded to include more natural user phrases like 'compare libraries' or 'which framework should I use'.
**Suggestions**

- Add more specific, concrete actions, e.g. 'compare library benchmarks, analyze API documentation, review GitHub issues and stars, evaluate package maintenance status'.
- Expand trigger terms to include natural user phrases: 'compare libraries', 'which framework', 'tech stack decisions', 'find packages', 'dependency evaluation'.
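Folding the suggestions above into the skill's description might look like the following sketch. This is illustrative only: the field name follows common skill-frontmatter conventions, and the wording is an assumption, not the skill's actual metadata.

```yaml
# Illustrative only — not the skill's real frontmatter
description: >
  Research technical solutions: compare library benchmarks, analyze API
  documentation, review GitHub issues and stars, and evaluate package
  maintenance status. Use when comparing libraries, choosing a framework,
  making tech stack decisions, finding packages, or evaluating dependencies.
```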
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (technical research) and some actions ('searching the web, examining GitHub repos, gathering evidence'), but the actions are somewhat general rather than multiple concrete operations like 'compare library benchmarks, analyze API documentation, review issue trackers'. | 2 / 3 |
| Completeness | Clearly answers both what ('Research technical solutions by searching the web, examining GitHub repos, and gathering evidence') and when ('Use when exploring implementation options or evaluating technologies') with an explicit 'Use when...' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords ('GitHub repos', 'implementation options', 'technologies') but misses common natural variations users might say, like 'compare libraries', 'find packages', 'tech stack', 'dependencies', 'which framework', or 'best practices'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to technical research but could overlap with general web-search or code-review skills. The 'GitHub repos' mention helps distinguish it, but 'searching the web' and 'gathering evidence' are generic enough to potentially conflict with other research-oriented skills. | 2 / 3 |
| **Total** | | **9 / 12 — Passed** |
## Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid research skill with clear, actionable workflows and good structure. The 5-step process with evidence requirements provides useful guardrails. Main weakness is some verbosity in explanations and the output template taking significant space that could be externalized.
**Suggestions**

- Trim explanatory phrases like 'GitHub raw content is often blocked'; let Claude discover this contextually.
- Consider moving the detailed output-format template to a separate RESEARCH_TEMPLATE.md file and referencing it.
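The template-externalization suggestion could be sketched as follows. Every name here (the `research-skill` directory, `RESEARCH_TEMPLATE.md`, `SKILL.md`, and the template's fields) is an assumption for illustration, not the skill's real layout.

```shell
#!/bin/sh
# Hypothetical sketch: move the inline output template into its own file.
# Directory and file names are assumed, not taken from the actual skill.
set -e
mkdir -p research-skill

# Externalized template the skill would reference instead of inlining
cat > research-skill/RESEARCH_TEMPLATE.md <<'EOF'
## Findings
- Option considered:
- Evidence (links, stars, last commit date):
- Recommendation:
EOF

# SKILL.md now points at the template rather than embedding it
printf 'Output format: follow RESEARCH_TEMPLATE.md in this directory.\n' \
  > research-skill/SKILL.md
```

This keeps SKILL.md short while the agent can still open the template on demand, which is the progressive-disclosure point the review makes.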
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary explanation, like 'GitHub raw content is often blocked', which Claude would discover naturally. The example prompts and some explanatory text could be tightened. | 2 / 3 |
| Actionability | Provides concrete, executable bash commands for cloning repos and creating directories. The output-format template is copy-paste ready, and the example usage shows a complete workflow with specific tools. | 3 / 3 |
| Workflow Clarity | Clear 5-step numbered sequence with explicit checkpoints (evidence requirements before recommending). The process flows logically from search to examination to storage to output, with guidance on handling blocked content. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections, but everything is inline in a single file. For a skill of this length (~80 lines), some content like the output-format template could be referenced externally, though the current structure is navigable. | 2 / 3 |
| **Total** | | **10 / 12 — Passed** |
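The clone-and-store step credited in the Actionability and Workflow Clarity rows could look like this hedged sketch. The repo URL and directory names are placeholders, and the fallback branch mirrors the skill's own guidance on handling blocked content.

```shell
#!/bin/sh
# Hypothetical evidence-gathering step: shallow-clone a candidate repo,
# fall back to a note if the clone is blocked or the repo is unreachable.
# URL and paths are illustrative placeholders.
set -e
mkdir -p research/evidence

REPO_URL="https://github.com/example/example-repo"   # placeholder URL
DEST="research/evidence/example-repo"

if git clone --depth 1 "$REPO_URL" "$DEST" 2>/dev/null; then
  echo "cloned: $REPO_URL"
else
  # Record why evidence is missing so the final report can say so
  echo "clone failed for $REPO_URL; use web search instead" > "$DEST.note"
fi
```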
## Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed.
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.