tech-stack-evaluator

Comprehensive technology stack evaluation and comparison tool with TCO analysis, security assessment, and intelligent recommendations for engineering teams

Overall score: 32%

Install with Tessl CLI:

```
npx tessl i github:alirezarezvani/claude-skills --skill tech-stack-evaluator
```

Activation: 33%

The description identifies a clear domain (technology evaluation) and mentions specific analysis types (TCO, security), but relies on vague qualifiers ('comprehensive', 'intelligent') rather than concrete actions. The critical weakness is the complete absence of trigger guidance telling Claude when to select this skill, which would make it difficult to choose appropriately from a large skill library.

Suggestions

- Add an explicit 'Use when...' clause with trigger scenarios like 'comparing technologies', 'choosing between frameworks', 'evaluating vendor options', or 'making build vs buy decisions' (see the frontmatter sketch after this list)
- Replace vague qualifiers ('comprehensive', 'intelligent') with specific capabilities like 'generates comparison matrices', 'calculates 3-year cost projections', or 'identifies security compliance gaps'
- Include natural user phrases as trigger terms: 'which database should I use', 'compare React vs Vue', 'technology decision', 'stack recommendation'
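
As one way to apply all three suggestions, a revised SKILL.md frontmatter might read as follows. This is a hypothetical rewrite, not the maintainer's wording; only the name and description fields shown are taken from the review.

```yaml
---
name: tech-stack-evaluator
description: >
  Evaluates and compares technology stacks for engineering teams: generates
  comparison matrices, calculates 3-year TCO projections, and identifies
  security compliance gaps. Use when comparing technologies or frameworks,
  evaluating vendor options, making build-vs-buy decisions, or answering
  questions like "which database should I use" or "compare React vs Vue".
---
```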

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (technology stack evaluation) and lists some actions (TCO analysis, security assessment, recommendations), but uses somewhat abstract terms like 'comprehensive' and 'intelligent' that are more marketing language than concrete capabilities. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'technology stack', 'TCO analysis', 'security assessment', but missing common natural variations users might say like 'compare frameworks', 'which tool should I use', 'tech comparison', or 'stack decision'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of TCO analysis and security assessment provides some distinctiveness, but 'technology stack evaluation' and 'recommendations for engineering teams' are broad enough to potentially overlap with architecture, DevOps, or general technical advisory skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |

Implementation: 7%

This skill content describes an ambitious technology evaluation framework but fails to provide actionable implementation. It reads as a feature specification or product requirements document rather than executable instructions. The extensive descriptions of metrics, capabilities, and best practices explain concepts Claude already understands while omitting the actual code, algorithms, and step-by-step workflows needed to perform evaluations.

Suggestions

- Replace the 'Scripts' section with actual executable Python code showing how to perform comparisons, calculate TCO, and generate reports (see the Python sketch after this list)
- Add a concrete workflow section with numbered steps: 1) Parse input, 2) Gather data, 3) Calculate scores, 4) Generate report - with validation at each step
- Move detailed metrics definitions, best practices, and limitations to separate reference files (METRICS.md, BEST_PRACTICES.md) and keep SKILL.md as a concise overview with quick-start examples (see the layout sketch below)
- Remove explanatory content about what TCO means, what compliance standards are, and other concepts Claude already knows - focus only on project-specific implementation details
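
To illustrate the first two suggestions, here is a minimal sketch of what an executable tco_calculator.py could contain, organized around the four workflow steps. All option names, cost fields, rates, and figures below are hypothetical placeholders, not contents of the skill under review.

```python
"""Minimal sketch: parse input, gather data, calculate scores, generate report."""

from dataclasses import dataclass


@dataclass
class StackOption:
    name: str
    license_cost_per_year: float  # licensing/subscription spend
    infra_cost_per_year: float    # hosting and infrastructure spend
    eng_hours_per_year: float     # estimated maintenance effort
    hourly_rate: float = 120.0    # blended engineering rate (placeholder)

    def tco(self, years: int = 3) -> float:
        """Total cost of ownership over the projection window."""
        annual = (
            self.license_cost_per_year
            + self.infra_cost_per_year
            + self.eng_hours_per_year * self.hourly_rate
        )
        return annual * years


def compare(options: list[StackOption], years: int = 3) -> list[tuple[str, float]]:
    """Step 3: calculate scores (here, rank by lowest projected TCO)."""
    return sorted(((o.name, o.tco(years)) for o in options), key=lambda t: t[1])


def report(ranked: list[tuple[str, float]], years: int) -> str:
    """Step 4: generate a plain-text comparison report."""
    lines = [f"{years}-year TCO comparison"]
    lines += [f"{i}. {name}: ${cost:,.0f}" for i, (name, cost) in enumerate(ranked, 1)]
    return "\n".join(lines)


if __name__ == "__main__":
    # Steps 1-2 (parse input, gather data) stubbed with inline example data.
    options = [
        StackOption("managed-postgres", 24_000, 18_000, 80),
        StackOption("self-hosted-postgres", 0, 30_000, 400),
    ]
    ranked = compare(options, years=3)
    assert ranked, "validation checkpoint: no options were scored"
    print(report(ranked, years=3))
```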
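
And a possible file layout for the progressive-disclosure split. The script names come from the review; the other file names are illustrative:

```
tech-stack-evaluator/
├── SKILL.md             # concise overview and quick-start examples
├── METRICS.md           # detailed metric definitions
├── BEST_PRACTICES.md    # evaluation best practices and limitations
└── scripts/
    ├── stack_comparator.py
    └── tco_calculator.py
```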

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~400+ lines with extensive explanations of concepts Claude already knows (what TCO is, what compliance standards mean, basic best practices). The document reads like product documentation rather than actionable instructions. | 1 / 3 |
| Actionability | Despite listing many script files (stack_comparator.py, tco_calculator.py, etc.), there is no actual executable code, no concrete algorithms, no real implementation. The 'Scripts' section just names files without showing how to use them or what they contain. | 1 / 3 |
| Workflow Clarity | No clear workflow for how to actually perform an evaluation. The document describes capabilities and metrics but never provides a step-by-step process for conducting an analysis. No validation checkpoints or feedback loops for the evaluation process. | 1 / 3 |
| Progressive Disclosure | The document has clear section headers and some organizational structure, but it's a monolithic wall of text that should be split into separate reference files. The metrics definitions, best practices, and limitations could all be separate documents referenced from a concise overview. | 2 / 3 |
| Total | | 5 / 12 (Passed) |

Validation: 81%

Validation for skill structure (13 / 16 passed)

| Criteria | Description | Result |
| --- | --- | --- |
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 13 / 16 (Passed) |
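
The three warnings map directly to frontmatter fields. A minimal sketch of a fix, assuming a SPDX license identifier and a version key inside metadata (both assumptions; the validator messages only name the fields):

```yaml
---
name: tech-stack-evaluator
description: >
  Compares technology stacks with TCO and security analysis.
  Use when choosing between frameworks, databases, or vendors.
license: MIT        # assumption: any valid license value clears the warning
metadata:           # must be a dictionary per the metadata_version check
  version: 1.0.0    # assumption: a version key is what the check looks for
---
```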
