Constraint-based metabolic modeling (COBRA). FBA, FVA, gene knockouts, flux sampling, SBML models, for systems biology and metabolic engineering analysis.
Overall: 81
Quality: 73% (Does it follow best practices?)
Impact: 85% (1.30x average score across 6 eval scenarios)
Validation: Passed (No known issues)
Quality
Discovery: 82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description that effectively lists concrete capabilities and uses natural terminology that practitioners would recognize. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The technical specificity and distinctiveness are excellent for a niche scientific computing skill.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about metabolic modeling, flux balance analysis, constraint-based reconstruction, or working with SBML/genome-scale models.'
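One way to phrase this in the SKILL.md frontmatter (a hypothetical sketch; the exact field names depend on the skill spec in use):

```yaml
name: cobrapy
description: >
  Constraint-based metabolic modeling (COBRA): FBA, FVA, gene knockouts,
  flux sampling, and SBML models for systems biology and metabolic
  engineering. Use when the user asks about metabolic modeling, flux
  balance analysis, constraint-based reconstruction, or working with
  SBML/genome-scale models.
```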
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: FBA, FVA, gene knockouts, flux sampling, SBML models. These are well-defined technical operations in the metabolic modeling domain. | 3 / 3 |
| Completeness | Clearly answers 'what' (FBA, FVA, gene knockouts, flux sampling, SBML models) and implies 'when' (systems biology and metabolic engineering analysis), but lacks an explicit 'Use when...' clause with trigger guidance, which caps this at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users in this domain would use: COBRA, FBA, FVA, gene knockouts, flux sampling, SBML, systems biology, metabolic engineering. These cover the key terms and acronyms a user would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche domain. COBRA, FBA, FVA, SBML, and metabolic modeling are very specific terms unlikely to conflict with other skills. This is a clearly defined technical specialty. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive and highly actionable skill with excellent executable code examples covering the full COBRApy API surface. Its main weaknesses are verbosity (could be 40-50% shorter by removing explanatory prose Claude doesn't need), lack of validation checkpoints in workflows, and a monolithic structure that would benefit from splitting detailed content into referenced files. The referenced bundle files don't exist, undermining the progressive disclosure strategy.
Suggestions
Add validation checkpoints to workflows: check `solution.status == 'optimal'` after every `optimize()` call, and add error handling patterns for infeasible/unbounded cases.
Trim explanatory prose throughout - remove sentences like 'COBRApy is a Python library for...' and 'essential for systems biology research', and cut the Key Concepts section which explains things Claude already knows.
Move detailed sections (Model Building, Production Envelopes, Gapfilling, Key Concepts) into the referenced files (`references/workflows.md`, `references/api_quick_reference.md`) and keep SKILL.md as a concise overview with the most common operations.
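The first suggestion above can be sketched as a small guard function. This assumes cobrapy's `Solution` interface (`status`, `objective_value` attributes); the helper name `check_solution` is hypothetical, not part of the cobrapy API:

```python
def check_solution(solution, context=""):
    """Guard an FBA result: only an 'optimal' solve is safe to use.

    `solution` is expected to expose cobrapy's Solution interface
    (`status`, `objective_value`). Any other status raises before
    fluxes are consumed downstream.
    """
    if solution.status != "optimal":
        raise RuntimeError(
            f"Solver returned status {solution.status!r} {context}; "
            "check medium constraints and reaction bounds before proceeding."
        )
    return solution

# Usage in a workflow (hypothetical model object):
# solution = check_solution(model.optimize(), context="after gene knockout")
```

Wrapping every `optimize()` call this way turns a silently infeasible or unbounded solve into an explicit, diagnosable error, which is the validation checkpoint the review asks for.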
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary explanations (e.g., 'essential for systems biology research', explaining what DictList objects are, explaining what exchange reactions are). Several sections like Key Concepts explain things Claude would already know. The file is quite long (~300 lines) and could be tightened significantly by removing explanatory prose and keeping just the code patterns. | 2 / 3 |
| Actionability | Excellent actionability throughout: nearly every section contains fully executable, copy-paste ready Python code with concrete examples. Function signatures include real parameters, model IDs are specific, and code blocks are complete and runnable. | 3 / 3 |
| Workflow Clarity | Five named workflows are provided with clear sequential steps, but they lack validation checkpoints. For example, none of the workflows check `solution.status` before using results, and there are no error recovery loops. The troubleshooting section is vague ('check medium constraints') rather than providing concrete diagnostic steps. | 2 / 3 |
| Progressive Disclosure | References to `references/workflows.md` and `references/api_quick_reference.md` are mentioned at the bottom, but no bundle files exist to support them. The main file is monolithic at ~300 lines, with detailed API reference content (model building, key concepts, all analysis types) that could be split into referenced files. The structure would benefit from a leaner overview with more content pushed to reference files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 Passed | |
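The single warning can be cleared by adding a version field to the skill's frontmatter. A hypothetical sketch, assuming the skill spec nests it under a `metadata` key as the check name suggests:

```yaml
metadata:
  version: 1.0.0
```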