Virtual gene knockout simulation using foundation models to predict transcriptional changes
Install with Tessl CLI
npx tessl i github:aipoch/medical-research-skills --skill in-silico-perturbation-oracle41
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 40%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specialized bioinformatics capability, but it lacks trigger guidance and a complete specification of the actions performed. While the niche is distinctive, users would benefit from an explicit 'Use when...' clause and more natural-language variations of the technical terms.
Suggestions
Add a 'Use when...' clause with trigger terms like 'simulate gene knockout', 'predict gene expression changes', 'in silico perturbation', or 'what happens if I knock out gene X'
Include common user phrasings and synonyms such as 'gene expression prediction', 'perturbation analysis', 'knockout effects', or 'transcriptome simulation'
Expand the capability list to specify concrete outputs (e.g., 'generates predicted expression profiles', 'identifies downstream affected genes', 'compares knockout vs wildtype')
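Taken together, the suggestions above might yield frontmatter like the following (the field names follow the common SKILL.md frontmatter convention; the wording itself is illustrative, not the skill's actual description):

```yaml
---
name: in-silico-perturbation-oracle
description: >
  Simulates virtual gene knockouts with transcriptomic foundation models and
  predicts downstream expression changes, generating predicted expression
  profiles, lists of downstream affected genes, and knockout-vs-wildtype
  comparisons. Use when the user asks to "simulate a gene knockout",
  "predict gene expression changes", run an "in silico perturbation",
  or asks "what happens if I knock out gene X".
---
```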
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (gene knockout simulation) and a specific action (predict transcriptional changes), but lacks comprehensive detail about what concrete operations are performed (e.g., which models, what outputs, what analysis steps). | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this description has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Contains relevant technical terms like 'gene knockout', 'transcriptional changes', and 'foundation models', but these are specialized jargon. Missing common variations users might say, such as 'knock out genes', 'gene expression', 'perturbation', or 'in silico knockout'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly specialized niche combining virtual gene knockouts with foundation models for transcriptional prediction. Unlikely to conflict with other skills given the specific biological/computational domain. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation — 14%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like product documentation or a README than an actionable skill for Claude. It is extremely verbose, with marketing-style feature tables, roadmaps, and citations, while lacking concrete, executable workflows. The code examples reference a hypothetical package, and the document itself notes it is a 'framework' requiring external integration, making it minimally actionable.
Suggestions
Remove all boilerplate sections (roadmap, citation, license, risk assessment, lifecycle status) and focus only on actionable instructions Claude needs to perform the task
Create a clear numbered workflow with explicit validation steps: load data → configure model → run prediction → validate results → interpret output
Either provide actual executable code that works with real available packages (scanpy, gseapy) or clearly specify this is a conceptual framework and what Claude should actually do when asked about gene knockout simulation
Move detailed reference content (cell type mappings, output schemas, architecture) to separate linked files and keep SKILL.md as a concise overview with quick-start instructions
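As a sketch of what the third suggestion asks for, the knockout-vs-wildtype comparison step could be expressed with real, installable packages. The example below uses only numpy and pandas; the `simulate_knockout` placeholder is an assumption standing in for a real foundation-model query, and all names are illustrative, not part of the reviewed skill:

```python
import numpy as np
import pandas as pd

def simulate_knockout(expr: pd.DataFrame, target_gene: str) -> pd.DataFrame:
    """Naive in-silico knockout: zero the target gene's expression.

    A real skill would query a trained foundation model here; this
    placeholder only illustrates the comparison workflow.
    """
    ko = expr.copy()
    ko[target_gene] = 0.0
    return ko

def differential_genes(wildtype: pd.DataFrame, knockout: pd.DataFrame,
                       min_abs_change: float = 0.5) -> pd.Series:
    """Mean expression change per gene, filtered by effect size."""
    delta = knockout.mean() - wildtype.mean()
    return delta[delta.abs() >= min_abs_change].sort_values()

# Toy expression matrix: 4 cells x 3 genes (random wildtype values)
rng = np.random.default_rng(0)
wt = pd.DataFrame(rng.uniform(1, 5, size=(4, 3)),
                  columns=["TP53", "MYC", "GAPDH"])
ko = simulate_knockout(wt, "TP53")
hits = differential_genes(wt, ko)  # genes whose mean expression shifted
```

Wrapping each step (load data, configure model, run prediction, validate, interpret) in a function like this gives Claude explicit checkpoints instead of scattered prose.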
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive tables, feature lists, roadmaps, citations, and boilerplate sections that Claude doesn't need. The document explains concepts like what foundation models are and includes marketing-style feature tables rather than actionable instructions. | 1 / 3 |
| Actionability | Provides some concrete code examples for CLI and Python API usage, but the code is illustrative rather than executable: it references a hypothetical 'in_silico_perturbation_oracle' package that doesn't exist. The warning note acknowledges this requires external model integration that is not provided. | 2 / 3 |
| Workflow Clarity | No clear workflow sequence for performing perturbation analysis. Steps are scattered across sections without validation checkpoints. The 'Quality Control' section lists checks but doesn't integrate them into a coherent workflow with feedback loops. | 1 / 3 |
| Progressive Disclosure | A monolithic wall of text with no references to external files for detailed content. Everything is inline, including architecture diagrams, scoring algorithms, validation datasets, roadmaps, and security checklists that should live in separate reference documents. | 1 / 3 |
| Total | | 5 / 12 Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 10 / 11 Passed |
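One way to clear the frontmatter_unknown_keys warning is to nest unrecognized keys under a metadata block instead of leaving them at the top level. The key names below are hypothetical examples, and whether `metadata` is the container the validator accepts is an assumption worth checking against the spec:

```yaml
---
name: in-silico-perturbation-oracle
description: Virtual gene knockout simulation using foundation models to predict transcriptional changes
metadata:
  license: MIT      # hypothetical key, previously top-level
  version: 0.1.0    # hypothetical key, previously top-level
---
```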
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.