
research

Technical research methodology with YAGNI/KISS/DRY principles. Phases: scope definition, information gathering, analysis, synthesis, recommendation. Capabilities: technology evaluation, architecture analysis, best practices research, trade-off assessment, solution design. Actions: research, analyze, evaluate, compare, recommend technical solutions. Keywords: research, technology evaluation, best practices, architecture analysis, trade-offs, scalability, security, maintainability, YAGNI, KISS, DRY, technical analysis, solution design, competitive analysis, feasibility study. Use when: researching technologies, evaluating architectures, analyzing best practices, comparing solutions, assessing technical trade-offs, planning scalable/secure systems.

Score: 70
Quality: 59% (Does it follow best practices?)
Impact: 80%, 2.05x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/data/0-research/SKILL.md

Quality

Discovery

92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description that clearly communicates capabilities, phases, and trigger conditions. Its main strength is the explicit 'Use when' clause and comprehensive keyword list. Its weakness is the breadth of scope—covering everything from competitive analysis to security to architecture—which increases conflict risk with more specialized skills.

Suggestions

Narrow the scope or add boundary conditions to reduce overlap (e.g., 'Use this for research and evaluation phases, not for implementation or hands-on coding')

Differentiate more clearly from architecture design or security-specific skills by emphasizing the research/evaluation nature over the solution design aspect

Dimension / Reasoning / Score

Specificity

The description lists multiple specific concrete actions: technology evaluation, architecture analysis, best practices research, trade-off assessment, solution design, and explicitly names phases (scope definition, information gathering, analysis, synthesis, recommendation).

3 / 3

Completeness

Clearly answers both 'what' (technical research methodology with specific phases and capabilities) and 'when' (explicit 'Use when:' clause covering researching technologies, evaluating architectures, analyzing best practices, comparing solutions, assessing trade-offs, planning systems).

3 / 3

Trigger Term Quality

Excellent coverage of natural keywords users would say: 'research', 'technology evaluation', 'best practices', 'architecture analysis', 'trade-offs', 'scalability', 'security', 'feasibility study', 'competitive analysis'. These are terms users would naturally use when requesting technical research.

3 / 3

Distinctiveness / Conflict Risk

While the research methodology focus is somewhat distinct, terms like 'architecture analysis', 'solution design', 'best practices', and 'security' are broad enough to overlap with coding skills, architecture design skills, or security-focused skills. The scope is wide enough that it could conflict with more specialized skills in any of these domains.

2 / 3

Total: 11 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill contains some genuinely useful operational details (gemini command usage, file naming conventions, research call limits, docs-seeker integration) but buries them in extensive generic research methodology that Claude already understands. The report template consumes significant token budget and would be better as a separate reference file. The content would benefit greatly from aggressive trimming to focus only on project-specific conventions and tool usage.

Suggestions

Extract the report template into a separate file (e.g., REPORT_TEMPLATE.md) and reference it from the main skill to dramatically reduce token usage.

Remove generic research methodology advice (Phases 1, 3, Quality Standards, Special Considerations) that describes standard analytical thinking Claude already possesses—keep only project-specific constraints and tool instructions.

Add explicit validation checkpoints, e.g., 'After gathering sources, verify you have at least 3 independent sources before proceeding to synthesis' and 'Review report against the template checklist before saving.'

Consolidate the actionable tool-specific instructions (gemini command, WebSearch fallback, docs-seeker, file path conventions, 5-call limit) into a concise quick-reference section at the top of the skill.
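The consolidation suggested above could be sketched as a small shell snippet. The `gemini -m gemini-3-preview-p` invocation, the WebSearch fallback, and the 5-call cap are taken from this review; the helper name `research_call` and the counter logic are illustrative assumptions, not part of the skill itself.

```shell
#!/bin/sh
# Sketch of the suggested quick-reference logic: prefer the gemini CLI,
# fall back to WebSearch, and enforce the 5-research-call limit.
# research_call and MAX_CALLS are hypothetical names for illustration.

MAX_CALLS=5
calls=0

research_call() {
  if [ "$calls" -ge "$MAX_CALLS" ]; then
    echo "research call limit ($MAX_CALLS) reached; proceed to synthesis"
    return 1
  fi
  calls=$((calls + 1))
  if command -v gemini >/dev/null 2>&1; then
    # Echo rather than execute, so the sketch runs without the CLI installed.
    echo "call $calls: gemini -m gemini-3-preview-p \"$1\""
  else
    echo "call $calls: WebSearch fallback for \"$1\""
  fi
}

research_call "initial query"
```

Putting the cap and the fallback in one place like this is what a top-of-skill quick-reference section would express in prose.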

Dimension / Reasoning / Score

Conciseness

The skill is highly verbose, explaining general research methodology concepts Claude already knows (how to analyze sources, cross-reference information, assess maturity). Phrases like 'You will analyze gathered information by identifying common patterns' and the extensive quality standards section describe obvious research practices. The report template alone is ~50 lines of boilerplate structure.

1 / 3

Actionability

There are some concrete, actionable elements: the `gemini -m gemini-3-preview-p` command, the file path convention `./plans/<plan-name>/reports/YYMMDD-<your-research-topic>.md`, the 5-research-call limit, and the reference to `docs-seeker` skill. However, much of the content is abstract guidance ('identify common patterns', 'evaluate pros and cons') rather than executable instructions, and the report template is a skeleton without concrete examples.

2 / 3
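The dated file-path convention quoted above can be generated mechanically. This sketch assumes only POSIX `date`; the plan name `auth-redesign` and topic `session-storage` are hypothetical examples.

```shell
#!/bin/sh
# Build a report path following the convention quoted in the review:
#   ./plans/<plan-name>/reports/YYMMDD-<your-research-topic>.md
# Plan and topic names below are placeholder examples.

plan="auth-redesign"
topic="session-storage"
stamp=$(date +%y%m%d)   # YYMMDD date stamp
report="./plans/$plan/reports/$stamp-$topic.md"
echo "$report"
```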

Workflow Clarity

The four phases provide a clear sequence, and there are some useful specifics like checking for the gemini command availability and falling back to WebSearch. However, validation checkpoints are largely absent—there's no explicit step to verify research quality before moving to synthesis, no feedback loop for insufficient findings, and no checkpoint before report generation to ensure completeness.

2 / 3
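The missing checkpoint described above could be as simple as a source-count gate before synthesis. The three-source threshold comes from the earlier suggestion; the source list and variable names here are illustrative.

```shell
#!/bin/sh
# Checkpoint sketch: require at least 3 independent sources before synthesis,
# per the suggestion earlier in this review. The source list is a placeholder.

MIN_SOURCES=3
sources="vendor-docs conference-talk benchmark-report"
count=$(echo "$sources" | wc -w | tr -d ' ')

if [ "$count" -ge "$MIN_SOURCES" ]; then
  echo "checkpoint passed: $count sources; proceed to synthesis"
else
  echo "checkpoint failed: only $count sources; gather more before synthesis"
fi
```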

Progressive Disclosure

The entire skill is a monolithic wall of text with no references to external files for the detailed report template, quality standards, or special considerations. The massive markdown report template should be in a separate reference file. The only external reference is to `docs-seeker` skill, which is appropriate but insufficient given the content volume.

1 / 3

Total: 6 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: majiayu000/claude-skill-registry (Reviewed)

