Constraint-based metabolic modeling (COBRA). FBA, FVA, gene knockouts, flux sampling, SBML models, for systems biology and metabolic engineering analysis.
Overall score: 81
Quality: 73% (Does it follow best practices?)
Impact: 85% (1.30x average score across 6 eval scenarios)
Passed: No known issues

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/cobrapy/SKILL.md`

Quality
Discovery
82%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description that effectively lists concrete capabilities and uses natural terminology that practitioners would recognize. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The technical specificity and distinctiveness are excellent.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about metabolic modeling, flux balance analysis, COBRA methods, or working with SBML genome-scale models.'
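Applied to this skill, a revised description might read as follows (the wording is illustrative, not taken from the skill itself):

```
Constraint-based metabolic modeling (COBRA). FBA, FVA, gene knockouts, flux sampling, SBML models, for systems biology and metabolic engineering analysis. Use when the user asks about metabolic modeling, flux balance analysis, COBRA/COBRApy methods, or working with SBML genome-scale models.
```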
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: FBA, FVA, gene knockouts, flux sampling, SBML models. These are well-defined technical operations in the metabolic modeling domain. | 3 / 3 |
| Completeness | Clearly answers 'what' (FBA, FVA, gene knockouts, flux sampling, SBML models) and implies 'when' (systems biology and metabolic engineering analysis), but lacks an explicit 'Use when...' clause with trigger guidance. Per the rubric, missing explicit trigger guidance caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users in this domain would use: COBRA, FBA, FVA, gene knockouts, flux sampling, SBML, systems biology, metabolic engineering. These cover the major terms and acronyms a user would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: constraint-based metabolic modeling is a very specific domain. Terms like COBRA, FBA, FVA, and SBML are unlikely to conflict with any other skill. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
64%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured and highly actionable skill with excellent executable code examples covering the full COBRApy API surface. Its main weaknesses are length/verbosity (could benefit from moving advanced topics to reference files) and missing validation checkpoints in workflows. The content would be stronger if trimmed to essentials in the main file with better progressive disclosure to referenced documents.
Suggestions
Move advanced sections (Gapfilling, Model Building, Production Envelopes) to reference files and link from the main skill to reduce token footprint
Add explicit validation steps to workflows, e.g., checking `solution.status == 'optimal'` after each `model.optimize()` call and handling infeasible cases
Remove introductory/explanatory prose like the Overview paragraph and section lead-ins ('Load existing models from repositories or files:'); the section headers and code speak for themselves
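The validation-checkpoint suggestion could look like the sketch below, assuming COBRApy's `model.optimize()` returning a `Solution` with `status` and `objective_value` attributes; the `require_optimal` helper name is hypothetical:

```python
def require_optimal(solution):
    """Raise early instead of silently using a failed FBA solution."""
    if solution.status != "optimal":
        # Covers 'infeasible', 'unbounded', and solver-specific failure states
        raise RuntimeError(
            f"FBA did not solve: solver status was '{solution.status}'"
        )
    return solution

# Usage inside a workflow (model loading omitted):
#   solution = require_optimal(model.optimize())
#   print(solution.objective_value)
```

Wiring a check like this after every `model.optimize()` call connects the troubleshooting guidance directly into the workflows instead of leaving it as a separate section.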
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary explanatory text (e.g., 'essential for systems biology research', 'behaving like both lists and dictionaries') and could be tightened. The overview paragraph and some section intros add tokens without adding value for Claude. However, most content is code-focused and reasonably efficient. | 2 / 3 |
| Actionability | Excellent actionability throughout: nearly every section provides complete, executable Python code examples with specific function calls, import statements, and realistic parameters. Code is copy-paste ready with concrete model names, reaction IDs, and expected outputs. | 3 / 3 |
| Workflow Clarity | The five named workflows provide clear sequences, but they lack validation checkpoints and error handling. For operations like gapfilling, model building, and gene knockouts, there are no explicit validation steps or feedback loops (e.g., checking `solution.status` after optimize, verifying model consistency after modifications). The troubleshooting section is present but disconnected from the workflows. | 2 / 3 |
| Progressive Disclosure | References to external files (references/workflows.md, references/api_quick_reference.md) are present at the bottom, but the main file is quite long (~300+ lines) with extensive inline content that could be split out. The 10 core capability sections plus 5 workflows plus key concepts plus best practices create a monolithic document, where advanced topics like gapfilling and model building could be in separate reference files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 Passed | |
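The `metadata_version` warning could be cleared by adding a version field to the skill's frontmatter. A minimal sketch, assuming a SKILL.md YAML frontmatter layout (field values are illustrative):

```yaml
---
name: cobrapy
description: Constraint-based metabolic modeling (COBRA)...
metadata:
  version: "0.1.0"
---
```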