Automatically applies when choosing LLM models and providers. Ensures proper model comparison, provider selection, cost optimization, fallback patterns, and multi-model strategies.
Install with Tessl CLI
npx tessl i github:majiayu000/claude-skill-registry-data --skill model-selection67
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
57%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description establishes a clear domain around LLM model and provider selection, making it distinctive. However, it lacks concrete action specificity (listing categories rather than specific tasks) and misses an explicit 'Use when...' clause with natural trigger terms users would actually say when needing this skill.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when the user asks about choosing between models, comparing API costs, setting up fallbacks, or mentions specific providers like OpenAI, Anthropic, or Azure'.
Include more natural user terms such as 'API', 'GPT', 'Claude', 'token pricing', 'rate limits', 'which model should I use'.
Make capabilities more concrete, e.g., 'Compare token costs across providers, configure automatic fallback chains, select optimal models for specific tasks'.
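Putting the three suggestions above together, an improved description might look like the following frontmatter sketch. This is a hypothetical illustration: the field layout, trigger phrases, and wording are examples, not the skill's actual metadata.

```yaml
# Hypothetical SKILL.md frontmatter illustrating the suggestions above.
name: model-selection
description: >
  Compare token costs across LLM providers, configure automatic fallback
  chains, and select optimal models for specific tasks. Use when the user
  asks which model to use, compares API pricing, sets up fallbacks or
  rate-limit handling, or mentions providers like OpenAI, Anthropic, or Azure.
```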
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (LLM models/providers) and lists some actions (model comparison, provider selection, cost optimization, fallback patterns, multi-model strategies), but these are somewhat abstract categories rather than concrete specific actions like 'compare token costs' or 'configure retry logic'. | 2 / 3 |
| Completeness | The 'what' is addressed with the list of capabilities, but the 'when' clause ('Automatically applies when choosing LLM models and providers') is vague and doesn't provide explicit user-facing triggers. There's no 'Use when...' clause with specific scenarios or user phrases. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'LLM models', 'providers', 'cost optimization', and 'fallback patterns', but misses common natural variations users might say such as 'API', 'OpenAI', 'Claude', 'GPT', 'which model', 'cheapest model', or 'rate limits'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on LLM model selection, provider comparison, and multi-model strategies creates a clear niche that is unlikely to conflict with other skills. The domain is specific enough to avoid overlap with general coding or configuration skills. | 3 / 3 |
| Total | | 9 / 12 Passed |
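The 'compare token costs' capability flagged in the Specificity row can be made concrete with a small sketch. All model names and prices below are made-up placeholders, not real rates:

```python
# Illustrative token-cost comparison across providers. Prices are
# hypothetical placeholders per 1K tokens; a real skill would load
# current provider rate cards instead.
PRICES_PER_1K = {  # model -> (input_price, output_price) in USD
    "gpt-4o": (0.005, 0.015),
    "claude-sonnet": (0.003, 0.015),
    "small-model": (0.0002, 0.0006),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request for a given model."""
    inp, out = PRICES_PER_1K[model]
    return input_tokens / 1000 * inp + output_tokens / 1000 * out

def cheapest(models, input_tokens, output_tokens):
    """Return the model with the lowest estimated cost for this request."""
    return min(models, key=lambda m: estimate_cost(m, input_tokens, output_tokens))
```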
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable, production-ready code for LLM model selection and provider management. Its main weakness is verbosity: the extensive inline code examples, while useful, make the skill lengthy, and it could benefit from splitting them into referenced files. The workflow guidance lacks explicit validation steps for confirming that routing and fallback behavior works correctly.
Suggestions
Split the major components (ModelRegistry, ModelRouter, FallbackChain, CostOptimizer, ModelEnsemble) into separate reference files, keeping only concise usage examples in SKILL.md
Add validation checkpoints to the Auto-Apply workflow, e.g., 'Test routing rules with sample prompts before deployment' and 'Verify fallback chain triggers correctly with simulated failures'
Trim verbose docstrings and obvious parameter descriptions; Claude understands what 'model_id: str' means without 'Model identifier' documentation
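The 'simulated failures' checkpoint suggested above could be sketched as follows. This is a hypothetical minimal fallback chain, not the skill's actual FallbackChain API:

```python
# Hypothetical minimal fallback chain plus the simulated-failure check
# suggested above. The real skill's FallbackChain API may differ.
class ProviderError(Exception):
    pass

class FallbackChain:
    def __init__(self, providers):
        # providers: callables tried in order until one succeeds
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error = None
        for provider in self.providers:
            try:
                return provider(prompt)
            except ProviderError as exc:
                last_error = exc  # fall through to the next provider
        raise last_error

# Validation checkpoint: verify the chain triggers on a simulated outage.
def flaky_primary(prompt):
    raise ProviderError("simulated 503")

def healthy_backup(prompt):
    return f"backup:{prompt}"
```

Running `FallbackChain([flaky_primary, healthy_backup]).complete(...)` before deployment confirms the fallback actually fires when the primary provider errors out.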
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill provides extensive code examples that are useful, but includes some redundancy (e.g., full Pydantic models with obvious fields, verbose docstrings explaining obvious parameters). The content could be tightened by 30-40% while preserving all actionable information. | 2 / 3 |
| Actionability | Excellent executable code throughout - complete Python classes with type hints, working implementations of ModelRegistry, ModelRouter, FallbackChain, CostOptimizer, and ModelEnsemble. All code is copy-paste ready with clear usage examples. | 3 / 3 |
| Workflow Clarity | The Auto-Apply section provides a 7-step workflow, but lacks explicit validation checkpoints. For operations like model routing and fallback chains that can fail silently or produce unexpected results, there are no verification steps or feedback loops to confirm correct behavior. | 2 / 3 |
| Progressive Disclosure | The skill references related skills at the end but the main content is monolithic - over 400 lines of code in a single file. The Model Registry, Router, Fallback Chain, Cost Optimizer, and Ensemble patterns could each be separate reference files with SKILL.md providing a concise overview. | 2 / 3 |
| Total | | 9 / 12 Passed |
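The 'test routing rules with sample prompts' suggestion could look like this hypothetical keyword-router check; the rule format, keywords, and model names are placeholders, not the skill's actual ModelRouter API:

```python
# Hypothetical keyword-based router; rules and model names are
# illustrative placeholders only.
ROUTING_RULES = [
    (("code", "function", "bug"), "code-model"),
    (("summarize", "tl;dr"), "fast-cheap-model"),
]
DEFAULT_MODEL = "general-model"

def route(prompt: str) -> str:
    """Pick a model by matching rule keywords against the prompt."""
    lowered = prompt.lower()
    for keywords, model in ROUTING_RULES:
        if any(k in lowered for k in keywords):
            return model
    return DEFAULT_MODEL

# Pre-deployment checkpoint: run sample prompts through the router and
# confirm each lands on the expected model.
SAMPLES = {
    "Fix this bug in my function": "code-model",
    "Summarize this article": "fast-cheap-model",
    "Tell me a story": "general-model",
}
```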
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (713 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.