tech-stack-evaluator

Technology stack evaluation and comparison with TCO analysis, security assessment, and ecosystem health scoring. Use when comparing frameworks, evaluating technology stacks, calculating total cost of ownership, assessing migration paths, or analyzing ecosystem viability.

Overall score: 80 (1.26x)

Quality: 57% (Does it follow best practices?)

Impact: 95%, 1.26x (average score across 6 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./engineering-team/tech-stack-evaluator/SKILL.md

Quality

Discovery: 92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates specific capabilities and provides explicit trigger guidance via a 'Use when...' clause. The combination of TCO analysis, security assessment, and ecosystem health scoring creates a reasonably distinct identity, though some overlap risk exists with general architecture or planning skills. The trigger terms are natural and cover the key scenarios well.

Dimension scores:

- Specificity (3/3): Lists multiple specific concrete actions: technology stack evaluation, comparison, TCO analysis, security assessment, and ecosystem health scoring. These are distinct, well-defined capabilities.

- Completeness (3/3): Clearly answers both 'what' (technology stack evaluation with TCO analysis, security assessment, ecosystem health scoring) and 'when' (explicit 'Use when...' clause listing five trigger scenarios).

- Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'comparing frameworks', 'technology stacks', 'total cost of ownership', 'migration paths', 'ecosystem viability'. These cover a good range of natural user queries in this domain.

- Distinctiveness / Conflict Risk (2/3): While the combination of TCO analysis, security assessment, and ecosystem health scoring is fairly distinctive, terms like 'comparing frameworks' and 'evaluating technology stacks' could overlap with general software architecture or development planning skills. The niche is reasonably clear but not fully unique.

Total: 11 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a product brochure or README than an actionable instruction set. It describes what the evaluator can do and lists available scripts, but never provides concrete workflows, executable examples with expected outputs, or specific scoring methodologies. The core analytical logic is entirely deferred to reference files, leaving the main skill body without enough substance to guide Claude through an actual technology evaluation.
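A scoring methodology doesn't need to be long to be actionable. As a point of reference, here is a minimal sketch of the kind of formula the skill body could state inline, assuming a plain weighted average over per-dimension scores; the dimension names and weights are hypothetical, not the skill's actual method:

```python
# Hypothetical weighted-average scoring, for illustration only.
# Dimension names and weights are assumptions, not taken from the skill.
def weighted_stack_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores in [0, 1] into one weighted total."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight

# Example: a stack strong on ecosystem health but weaker on TCO.
print(weighted_stack_score(
    {"tco": 0.55, "security": 0.80, "ecosystem_health": 0.90},
    {"tco": 0.4, "security": 0.3, "ecosystem_health": 0.3},
))  # about 0.73
```

Even a formula this small gives an agent something checkable; as written, the skill defers all of this to references/metrics.md.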

Suggestions

- Add a concrete end-to-end workflow showing the sequence of steps for performing a technology comparison (e.g., 1. Define criteria → 2. Run stack_comparator.py with specific args → 3. Validate scores → 4. Generate report), with explicit validation checkpoints. A sketch of what this could look like follows this list.

- Include at least one complete input/output example showing a real comparison with actual scoring output, rather than deferring all examples to references/examples.md.

- Replace the script --help commands with actual executable usage examples showing real arguments and expected output formats (e.g., a JSON schema of the comparison result).

- Remove or significantly condense the capabilities table, the 'When to Use / When NOT to Use' sections, and the analysis-type token counts, which don't provide actionable guidance.
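To make the first and third suggestions concrete, the sketch below shows the shape such an embedded example could take. stack_comparator.py is named in the suggestions above; the CLI flags, JSON keys, and validation bounds are assumptions for illustration, not the script's actual interface:

```python
# Sketch of an end-to-end comparison workflow with a validation checkpoint.
# All flags, JSON keys, and weights are hypothetical; check the real script.
import json
import subprocess

# 1. Define criteria (illustrative weights, not the skill's defaults).
criteria = {"tco": 0.4, "security": 0.3, "ecosystem_health": 0.3}

# 2. Run the comparator (assumed CLI; the real arguments may differ).
result = subprocess.run(
    ["python", "scripts/stack_comparator.py",
     "--stacks", "django,rails",
     "--criteria", json.dumps(criteria)],
    capture_output=True, text=True, check=True,
)
comparison = json.loads(result.stdout)

# 3. Validation checkpoint: every weighted total must land in [0, 1].
for stack, scores in comparison["stacks"].items():
    assert 0.0 <= scores["weighted_total"] <= 1.0, f"bad score for {stack}"

# 4. Generate the report from the validated comparison.
print(json.dumps(comparison, indent=2))
```

The JSON keys read back in step 3 also double as a minimal schema for the comparison result, which is exactly the expected-output format the review asks the skill to document.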

Dimension scores:

- Conciseness (2/3): The skill includes some unnecessary content, like the capabilities table (which restates what the sections already cover), the 'When to Use / When NOT to Use' sections, and the analysis-types section with token counts that don't add actionable value. The table of contents is also redundant for a file this size. However, it's not egregiously verbose.

- Actionability (1/3): The skill lacks concrete, executable guidance. The 'Quick Start' examples are just natural-language prompts, not actionable steps. The scripts section only shows --help commands without demonstrating actual usage with expected outputs. There are no executable code examples, no output formats shown, and no concrete scoring algorithms or formulas; those are deferred to references.

- Workflow Clarity (1/3): There is no clear multi-step workflow for performing an evaluation. The skill lists capabilities and scripts but never sequences them into a coherent process (e.g., 'first gather inputs, then run comparison, then validate scores, then generate report'). There are no validation checkpoints or feedback loops for any of the analysis types.

- Progressive Disclosure (2/3): The skill does reference external files (references/metrics.md, references/examples.md, references/workflows.md), which is good progressive disclosure. However, too much critical content appears to be deferred: the main skill body lacks enough substance to be useful on its own, and the references table is minimal, without clear signaling of what each file contains.

Total: 6 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure validation: 11 / 11 checks passed. No warnings or errors.

Repository: alirezarezvani/claude-skills (Reviewed)
