
meta-results-forest-plot-analyzer

Analyzes forest plots for meta-analysis, generating detailed descriptions and formatting figure legends in Chinese or English. Use when the user wants to interpret a forest plot image, describe its statistical significance (heterogeneity, p-value), and format the output with specific figure legends.

70

Quality: 63%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run

Security (by Snyk): Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/meta-results-forest-plot-analyzer/SKILL.md"

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines a narrow, specialized domain (forest plot analysis for meta-analysis) with concrete actions and explicit trigger guidance. It uses appropriate domain-specific terminology that users would naturally employ, and the bilingual aspect (Chinese/English) adds further distinctiveness. The 'Use when...' clause effectively communicates the activation conditions.

Dimension scores:

Specificity (3 / 3): Lists multiple specific concrete actions: analyzes forest plots, generates detailed descriptions, formats figure legends, interprets statistical significance (heterogeneity, p-value), and supports Chinese or English output.

Completeness (3 / 3): Clearly answers both 'what' (analyzes forest plots, generates descriptions, formats figure legends in Chinese/English) and 'when' (explicit 'Use when...' clause specifying interpretation of forest plot images, statistical significance description, and figure legend formatting).

Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'forest plot', 'meta-analysis', 'heterogeneity', 'p-value', 'figure legends', 'statistical significance'. These are domain-specific terms that users in this field would naturally use.

Distinctiveness / Conflict Risk (3 / 3): Highly distinctive niche combining forest plots, meta-analysis, bilingual figure legends, and specific statistical metrics. Very unlikely to conflict with other skills due to the narrow domain focus.

Total: 12 / 12

Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers heavily from template bloat — the majority of content is generic boilerplate that applies to any skill and teaches Claude nothing specific about forest plot analysis. The actual useful content (the two-step workflow, formatting rules, and example) is buried among repetitive sections like 'When to Use', 'When Not to Use', 'Validation and Safety Rules', 'Failure Handling', 'Deterministic Output Rules', and 'Completion Checklist' that are all generic filler. The skill would be dramatically improved by stripping it down to just the workflow, formatting rules, and example.

Suggestions

Remove all generic boilerplate sections (When to Use, When Not to Use, Required Inputs, Output Contract, Validation and Safety Rules, Failure Handling, Deterministic Output Rules, Completion Checklist) — these add no skill-specific value and waste tokens.

Provide a concrete, executable command example for format_result.py with actual arguments (e.g., `python scripts/format_result.py --input description.txt --language en --figure-num 2`) instead of just `--help`.

Add explicit validation between the two workflow steps — e.g., verify the LLM description meets minimum word count and contains required statistical elements (I², P-value) before passing to the formatting script.

Consolidate the duplicate validation sections ('Validation Shortcut' and 'Quick Validation') and the self-referential 'See ## Usage above' / 'See ## Workflow above' references into a single coherent flow.
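The suggested validation gate between the two workflow steps could look like the following minimal Python sketch; the function name, thresholds, and regexes are assumptions for illustration, not part of the skill itself (only the ">300 words" guideline is quoted from the skill).

```python
import re

MIN_WORDS = 300  # from the skill's own guideline: 'describe in detail >300 words'

def validate_description(text: str) -> list[str]:
    """Return a list of problems found in a vision-LLM description.

    An empty list means the description can be passed on to the
    formatting script; otherwise the agent should retry the vision step.
    """
    problems = []
    if len(text.split()) < MIN_WORDS:
        problems.append(f"fewer than {MIN_WORDS} words")
    # Accept either the Unicode superscript form or an ASCII spelling.
    if "I²" not in text and not re.search(r"\bI\^?2\b", text):
        problems.append("heterogeneity statistic (I²) missing")
    # Look for a reported p-value such as 'P < 0.001' or 'p = 0.03'.
    if not re.search(r"\b[Pp]\s*[<=>]\s*0?\.\d+", text):
        problems.append("p-value missing")
    return problems
```

A gate like this would let the agent retry the image-analysis step instead of silently feeding an incomplete description into the formatting script.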

Dimension scores:

Conciseness (1 / 3): Extremely verbose and repetitive. Multiple sections restate the same information (e.g., 'When to Use' and 'When Not to Use' are generic boilerplate, 'Key Features' restates the description verbatim, 'Validation Shortcut' and 'Quick Validation' are near-duplicates). Heavy use of generic template filler that adds no value ('Do not use this skill when the required source data, identifiers, files, or credentials are missing'). Much of this content is obvious to Claude and wastes tokens.

Actionability (2 / 3): The core workflow (vision LLM analysis plus the format_result.py script) is somewhat concrete, and the example showing input and output is helpful. However, the actual script invocation lacks real arguments (no concrete command with actual flags), the prompt guidelines are vague ('describe in detail >300 words'), and much of the 'actionable' content is generic boilerplate rather than specific executable guidance for this particular task.

Workflow Clarity (2 / 3): The two-step workflow (Image Analysis → Output Formatting) is clearly sequenced and the formatting rules are specific. However, there are no validation checkpoints between steps, no error recovery for common failures (e.g., the image is unreadable, or I² is not visible), and the 'Validation Shortcut' section is disconnected from the actual workflow. The run plan in Example Usage is generic and does not integrate with the two-step process.

Progressive Disclosure (1 / 3): The content is a monolithic wall of text with many redundant sections. Self-referential links ('See ## Usage above', 'See ## Workflow above') point within the same document rather than to separate files. No content is split into separate reference files despite the document being very long. Boilerplate sections are interspersed with the actual skill-specific content, making the structure confusing.

Total: 6 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria results:

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11

Passed
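The one warning above (frontmatter_unknown_keys) is typically resolved by nesting unrecognized top-level keys under a metadata block. A hypothetical before/after sketch; the author key is illustrative, so check the skill's actual SKILL.md frontmatter for the offending key:

```yaml
# Before: unrecognized top-level key triggers frontmatter_unknown_keys
name: meta-results-forest-plot-analyzer
description: Analyzes forest plots for meta-analysis...
author: aipoch          # unknown key (illustrative)

# After: unknown keys nested under metadata
name: meta-results-forest-plot-analyzer
description: Analyzes forest plots for meta-analysis...
metadata:
  author: aipoch
```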

Repository: aipoch/medical-research-skills (Reviewed)

