
meta-analysis-methods-generator

Generates the Methods section for a meta-analysis paper, including search strategy, screening, quality assessment, data extraction, and statistical analysis.


Quality: 47% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/meta-analysis-methods-generator/SKILL.md"

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong in specificity and distinctiveness, clearly enumerating the concrete sub-tasks involved in writing a meta-analysis Methods section. However, it lacks an explicit 'Use when...' clause, which limits its completeness score, and could benefit from additional natural trigger terms that users might employ when requesting this type of content.

Suggestions

- Add a 'Use when...' clause such as 'Use when the user asks for help writing the methods section of a meta-analysis, systematic review, or PRISMA-compliant paper.'
- Include additional natural trigger terms like 'systematic review', 'PRISMA', 'inclusion/exclusion criteria', 'effect size', or 'literature review methodology' to improve keyword coverage.
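Applied to this skill, the two suggestions above might combine into frontmatter along these lines (a sketch only; exact keys beyond `name` and `description` depend on the Tessl skill spec):

```yaml
name: meta-analysis-methods-generator
description: >
  Generates the Methods section for a meta-analysis paper, including search
  strategy, screening, quality assessment, data extraction, and statistical
  analysis. Use when the user asks for help writing the methods section of a
  meta-analysis, systematic review, or PRISMA-compliant paper, or mentions
  inclusion/exclusion criteria, effect sizes, or literature review methodology.
```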

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: search strategy, screening, quality assessment, data extraction, and statistical analysis. These are well-defined sub-tasks within the meta-analysis Methods section. | 3 / 3 |
| Completeness | Clearly answers 'what' (generates the Methods section with specific components), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'meta-analysis', 'Methods section', 'search strategy', 'screening', and 'statistical analysis', but misses common user variations like 'systematic review', 'PRISMA', 'literature review methods', or 'meta-analysis paper writing'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche: generating the Methods section specifically for meta-analysis papers. This is unlikely to conflict with other skills due to the narrow domain focus on meta-analysis methodology. | 3 / 3 |

Total: 10 / 12

Passed

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill contains useful domain-specific content (the 6 prompt templates for meta-analysis methods sections, the IO contract, and the quality assessment scale selection logic) buried under layers of generic boilerplate that obscure the actual instructions. The conflation of a text-generation task with a script-execution workflow (`validate_skill.py`) creates fundamental confusion about what Claude should do. Removing the boilerplate and clarifying the execution model would dramatically improve this skill.

Suggestions

- Remove all generic boilerplate sections (When to Use, When Not to Use, Failure Handling, Completion Checklist, Deterministic Output Rules, Validation and Safety Rules, Required Inputs) that don't contain skill-specific information; these waste tokens on things Claude already knows.
- Clarify the execution model: if this is a text-generation skill using prompt templates, remove all references to `scripts/validate_skill.py` and bash commands; if it genuinely requires a script, explain what the script does and how it relates to the prompts.
- Consolidate the three competing workflow descriptions (Example Usage run plan, Workflow section, Recommended Workflow) into a single clear workflow with validation checkpoints (e.g., verify word-count minimums, verify language consistency between sections).
- Move the detailed prompt templates to a separate file (e.g., PROMPTS.md) and keep SKILL.md as a concise overview with the workflow, IO contract, and references to the templates.
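The last two suggestions could yield a restructured SKILL.md skeleton along these lines (illustrative only; PROMPTS.md, the step wording, and the section names are hypothetical, while the 200-word minimum comes from the review's own Workflow Clarity notes):

```markdown
---
name: meta-analysis-methods-generator
description: Generates the Methods section for a meta-analysis paper...
---

## Workflow
1. Collect required inputs (research question, databases searched, date range).
2. Draft each subsection using the matching template in PROMPTS.md.
3. Checkpoint: verify each subsection meets the 200-word minimum.
4. Checkpoint: verify terminology is consistent across subsections.
5. Assemble the full Methods section per the IO contract below.

## IO contract
...

See PROMPTS.md for the six subsection prompt templates.
```

Keeping SKILL.md to a single workflow with explicit checkpoints, and externalizing the templates, addresses both the progressive-disclosure and workflow-ambiguity findings at once.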

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive boilerplate sections (When to Use, When Not to Use, Failure Handling, Completion Checklist, Deterministic Output Rules, etc.) that add no value for Claude. The 'Key Features' section restates the description verbatim. Multiple redundant workflow descriptions exist (Workflow, Recommended Workflow, Example Usage run plan). References to `scripts/validate_skill.py` are confusing since the skill is about generating text, not running a validation script. | 1 / 3 |
| Actionability | The prompts/templates section provides concrete prompt templates for each subsection with specific outlines and requirements, which is genuinely useful. However, the skill conflates running a Python script (`validate_skill.py`) with what is fundamentally a text generation task, creating confusion about what Claude should actually execute. The IO contract is clear but the execution path is muddled. | 2 / 3 |
| Workflow Clarity | The 6-step workflow in the Workflow section is clearly sequenced with defined inputs for each step, which is good. However, there are no validation checkpoints between steps (e.g., verifying each section meets the 200-word minimum before proceeding), and there are multiple competing workflow descriptions (Example Usage run plan, Recommended Workflow, Workflow) that create ambiguity about which to follow. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files. All prompt templates are inlined, making the document very long. The 'Implementation Details' section says 'See ## Workflow above', which is a self-referential non-reference. Boilerplate sections like Failure Handling, Completion Checklist, and Deterministic Output Rules bloat the file without adding value and should be removed or externalized. | 1 / 3 |

Total: 6 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11

Passed
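The single frontmatter_unknown_keys warning is typically resolved by deleting the nonstandard keys or nesting them under a `metadata` block, along these lines (a sketch; the `version` and `author` keys are hypothetical examples, not taken from this skill, and the exact set of recognized keys depends on the Tessl skill spec):

```yaml
# Before: unknown top-level keys trigger the warning
name: meta-analysis-methods-generator
version: 1.0        # unknown key
author: aipoch      # unknown key

# After: nonstandard keys nested under metadata
name: meta-analysis-methods-generator
metadata:
  version: 1.0
  author: aipoch
```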

Repository: aipoch/medical-research-skills (Reviewed)

