
llm-selector

A skill that recommends an LLM model level by task type and complexity. It suggests which level to use and, if the environment supports it, can suggest the corresponding manual action. It does not switch the model programmatically. Use it when you need to balance cost, latency, and reasoning depth.
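A minimal sketch of the kind of rule such a skill encodes, assuming made-up level names and thresholds; these are illustrative only and not part of the actual skill:

```python
# Hypothetical sketch: recommend an LLM "level" from coarse task signals.
# Level names ("fast", "balanced", "deep") and thresholds are assumptions.

def recommend_level(task_type: str, complexity: int) -> str:
    """Return a suggested model level; it does NOT switch the model itself."""
    deep_tasks = {"architecture", "debugging", "refactoring"}
    if task_type in deep_tasks or complexity >= 7:
        return "deep"      # maximum reasoning depth, highest cost/latency
    if complexity >= 4:
        return "balanced"  # mid-tier cost vs. capability tradeoff
    return "fast"          # cheap, low-latency, for simple edits

print(recommend_level("rename variables", 2))  # → fast
```

In practice the skill only outputs such a recommendation as text; any actual model switch is a manual action by the user.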

79

Quality: 73%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security by Snyk: Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/16-llm-selector/SKILL.md

Quality

Discovery

75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description adequately covers the 'what' and the 'when' with an explicit use-case trigger, making it complete. However, it could be more specific about concrete actions and include more natural trigger terms that users would actually say when they need model selection help. The description is in Portuguese, which is fine, but the trigger terms could be expanded.

Suggestions

- Add more specific concrete actions like 'analyzes task complexity', 'compares cost vs capability tradeoffs', 'recommends opus/sonnet/haiku based on requirements'
- Expand trigger terms to include natural user phrases like 'which model', 'model selection', 'cheaper option', 'faster model', 'simple task', 'complex reasoning'
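Taken together, the two suggestions might yield frontmatter like the following. This is a sketch only; the wording is an assumption, not the maintainer's actual SKILL.md:

```yaml
# Hypothetical revised SKILL.md frontmatter (wording is illustrative)
name: llm-selector
description: >
  Recommends an LLM model level by task type and complexity: analyzes task
  complexity, compares cost vs. capability tradeoffs, and suggests a level.
  It does not switch models programmatically. Use when asking "which model
  should I use", when weighing a cheaper or faster model against deep
  reasoning, or when routing simple tasks vs. complex reasoning.
```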

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (LLM model level recommendation) and describes some actions (suggests which level to use, may suggest the corresponding manual action), but lacks concrete specific actions like 'analyze task complexity', 'compare latency metrics', or 'estimate token costs'. | 2 / 3 |
| Completeness | Clearly answers both what (recommends LLM model level by task type and complexity, suggests manual actions) AND when ('Use quando precisar balancear custo, latencia e profundidade de raciocinio', i.e. "use when you need to balance cost, latency, and reasoning depth") with an explicit trigger clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'custo' (cost), 'latencia' (latency), 'raciocinio' (reasoning), and 'modelo LLM', but misses natural user phrases like 'which model should I use', 'model selection', 'cheaper model', 'faster response', or 'task routing'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused specifically on LLM model level recommendation based on task complexity and cost/latency tradeoffs. This is distinct enough that it is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is well-structured and concise, effectively communicating model level selection criteria without unnecessary verbosity. The main weaknesses are the lack of concrete worked examples showing the recommendation process and missing explicit workflow steps for how to arrive at a recommendation. The output format is clear but would benefit from example inputs and outputs.

Suggestions

- Add 2-3 concrete examples showing input scenarios and corresponding recommendations (e.g., 'Task: rename variables in 5 files → Recommendation: Rapido')
- Include a brief numbered workflow: 1. Identify task type, 2. Assess complexity factors, 3. Check upgrade/downgrade triggers, 4. Output recommendation
- Add a validation step to verify the recommendation matches the actual task requirements before finalizing
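The suggested four-step workflow, with the validation checkpoint, could be sketched as follows. All names and heuristics here are hypothetical, chosen only to make the steps concrete:

```python
# Hypothetical sketch of the suggested 4-step recommendation workflow.
# The keyword heuristic, level names, and field names are assumptions.

def recommend(task: str) -> dict:
    # 1. Identify the task type (a crude keyword heuristic, for illustration).
    task_type = "edit" if ("rename" in task or "format" in task) else "reasoning"
    # 2. Assess complexity factors (here: a word-count proxy, capped at 10).
    complexity = min(len(task.split()), 10)
    # 3. Check upgrade/downgrade triggers.
    level = "fast" if task_type == "edit" and complexity < 6 else "deep"
    # 4. Validate the recommendation before finalizing, then output it.
    result = {"level": level, "task_type": task_type, "complexity": complexity}
    assert result["level"] in {"fast", "deep"}  # validation checkpoint
    return result

print(recommend("rename variables in 5 files"))
```

A real skill would replace the keyword heuristic with the skill's own criteria tables; the point of the sketch is the explicit step order plus the final validation check.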

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is lean and efficient, using tables and bullet points effectively. No unnecessary explanations of concepts Claude already knows; every section serves a clear purpose. | 3 / 3 |
| Actionability | Provides clear criteria for level selection and a concrete output format, but lacks executable examples or specific scenarios showing the recommendation process in action. | 2 / 3 |
| Workflow Clarity | The upgrade/downgrade rules and skill mappings provide guidance, but there is no explicit step-by-step workflow for making a recommendation decision, and no validation checkpoints for verifying the recommendation is appropriate. | 2 / 3 |
| Progressive Disclosure | Well-organized with clear sections and appropriate references to external policies. Content is appropriately scoped for a single skill file with one-level-deep references to governance documents. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: felvieira/claude-skills-fv (Reviewed)
