
moa-explainer

Generate 3D animation scripts and lay explanations for drug mechanisms.

38

Quality: 23%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl (the path contains a space, so it must be quoted):

npx tessl skill review --optimize "./scientific-skills/Academic Writing/moa-explainer/SKILL.md"

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear, distinctive niche (3D animation scripts for drug mechanisms) but lacks explicit trigger guidance ('Use when...') and the natural keywords users might employ. The specificity of actions is moderate: it names two outputs without elaborating on formats, tools, or scope.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about visualizing drug mechanisms, creating pharmacology animations, or explaining how a medication works.'

Include more natural trigger terms and variations such as 'mechanism of action', 'MOA', 'pharmacology', 'medical animation', 'patient education', 'drug visualization'.

Expand the list of concrete actions, e.g., 'Generate 3D animation scripts, storyboards, and plain-language explanations for drug mechanisms of action, including receptor binding, signaling pathways, and therapeutic effects.'
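Taken together, these suggestions might yield frontmatter along the following lines. This is a sketch only: it assumes the conventional name/description frontmatter fields, and the wording is illustrative, not the skill's actual metadata.

```markdown
---
name: moa-explainer
description: >
  Generate 3D animation scripts, storyboards, and plain-language explanations
  for drug mechanisms of action (MOA), including receptor binding, signaling
  pathways, and therapeutic effects. Use when the user asks about visualizing
  drug mechanisms, creating pharmacology or medical animations, explaining how
  a medication works, or producing patient-education material.
---
```

Note how the single description now carries the 'Use when...' clause, the expanded action list, and the trigger-term variations in one place, which is where discovery scoring looks for them.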

Dimension scores:

- Specificity (2 / 3): Names the domain (drug mechanisms, 3D animation) and two actions (generate scripts, lay explanations), but lacks detail on what kinds of scripts, what format, or what 'lay explanations' entail concretely.

- Completeness (1 / 3): Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also only moderately clear, this scores a 1.

- Trigger Term Quality (2 / 3): Includes some relevant keywords like '3D animation', 'drug mechanisms', and 'scripts', but misses common variations users might say, such as 'pharmacology', 'MOA', 'mechanism of action', 'medical animation', 'drug visualization', or 'patient education'.

- Distinctiveness / Conflict Risk (3 / 3): The combination of 3D animation scripts specifically for drug mechanisms is a very narrow niche that is unlikely to conflict with other skills. This is a distinctive and specialized domain.

Total: 8 / 12 (Passed)

Implementation

7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is almost entirely generic boilerplate with virtually no domain-specific content about generating 3D animation scripts or drug mechanism explanations. The few domain-relevant sections (Parameters, Returns, Use Cases) are skeletal and lack any concrete examples, templates, or executable guidance. The bulk of the document is filled with generic risk assessments, security checklists, lifecycle metadata, and template workflows that could apply to any skill and waste significant token budget.

Suggestions

Replace generic workflow steps with a concrete, domain-specific workflow: e.g., Step 1: Identify drug target and binding mechanism, Step 2: Generate animation storyboard with specific scene descriptions, Step 3: Write voiceover script at appropriate audience level, with actual examples for each step.

Add a complete input/output example showing a real drug (e.g., PD-1 inhibitor) with the expected animation storyboard format, voiceover script, and simplified explanation so Claude knows exactly what to produce.

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that add no task-specific value and consume significant token budget.

Provide concrete templates or schemas for the claimed outputs (animation storyboard, voiceover script, key visual concepts, simplified explanation) so Claude knows the exact format expected.
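One way to act on the last two suggestions is a compact input/output example inside SKILL.md itself. The drug, scene descriptions, and field labels below are illustrative placeholders, not content from the current skill:

```markdown
## Example

Input: "Explain how pembrolizumab (a PD-1 inhibitor) works, for patient education."

Output:
- Animation storyboard:
  - Scene 1: T cell approaches a tumor cell; PD-1 receptor highlighted on the T cell surface.
  - Scene 2: The tumor's PD-L1 binds PD-1, switching the T cell "off" (dimmed glow).
  - Scene 3: The antibody docks onto PD-1, blocking PD-L1; the T cell reactivates and attacks.
- Voiceover script: one short paragraph per scene, written at an 8th-grade reading level.
- Key visual concepts: receptor/ligand binding, immune checkpoint, antibody blockade.
- Simplified explanation: 3-5 sentences using no jargon beyond "immune system" and "antibody".
```

A single worked example like this doubles as the output schema the review asks for, since it fixes both the section order and the expected granularity of each part.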

Dimension scores:

- Conciseness (1 / 3): Extremely verbose and padded with boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that add no actionable value for Claude. Many sections are generic templates with no domain-specific content. Self-referential links like 'See ## Prerequisites above' and 'See ## Workflow above' add confusion. The actual domain-specific content (Parameters, Returns, Use Cases) is buried under layers of generic scaffolding.

- Actionability (1 / 3): Despite claiming to generate 3D animation scripts and lay explanations for drug mechanisms, there is zero concrete guidance on how to actually do this. No example animation script, no storyboard format, no voiceover script template, no code showing how scripts/main.py works. The 'Example' section is just 'PD-1 inhibitor mechanism for patient education' with no input/output demonstration. The workflow steps are entirely generic and could apply to any skill.

- Workflow Clarity (1 / 3): The workflow is entirely generic ('Confirm the user objective', 'Validate that the request matches the documented scope', 'Use the packaged script path') with no domain-specific steps for creating animation storyboards or drug mechanism explanations. There are no validation checkpoints specific to the task, no feedback loops for reviewing animation accuracy or medical correctness, and no concrete sequence for producing the claimed outputs.

- Progressive Disclosure (2 / 3): There is some structure with sections and a reference to references/audit-reference.md, but the content is poorly organized with redundant sections (Quick Check and Audit-Ready Commands contain the same commands; Implementation Details and Workflow overlap significantly). The document is monolithic, with too many generic sections inline rather than being concise with pointers to detailed materials.

Total: 5 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure:

- frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. (Warning)

Total: 10 / 11 (Passed)
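This warning is typically resolved by nesting non-standard frontmatter keys under a metadata block, as the validator message suggests. The validator did not name the offending keys, so the key below is purely hypothetical:

```markdown
---
name: moa-explainer
description: Generate 3D animation scripts and lay explanations for drug mechanisms.
metadata:
  # Hypothetical example: move any validator-flagged custom keys under metadata
  domain: pharmacology
---
```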

Repository: aipoch/medical-research-skills (Reviewed)
