meta-manuscript-generator

Generates a first draft of a clinical meta-analysis paper. Given a research report (including Methods and Results sections), a language, and a title, it automatically produces a complete paper draft (Abstract, Introduction, Discussion, and other sections) and retrieves relevant references from PubMed. Suitable for assisting with the writing of systematic reviews and meta-analyses.

Score: 60

Quality: 51% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/meta-manuscript-generator/SKILL.md"

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description with strong specificity and a clear niche in clinical meta-analysis paper generation. Its main weaknesses are the lack of an explicit 'Use when...' clause and incomplete coverage of natural trigger terms users might employ when requesting this type of assistance.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to draft a meta-analysis paper, write a systematic review, or generate academic manuscript sections from research data.'
- Expand trigger terms to include common variations such as 'systematic review', 'PRISMA', 'literature review', 'research manuscript', 'academic paper', or 'evidence synthesis'.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: generates a first draft, takes research report input, automatically generates Abstract/Introduction/Discussion sections, and performs automatic PubMed retrieval of relevant references. These are concrete, well-defined capabilities. | 3 / 3 |
| Completeness | The 'what' is well covered (generates paper drafts with specific sections and PubMed references), but the 'when' is only implied via 'Suitable for assisting in the writing of systematic reviews and meta-analyses' rather than an explicit 'Use when...' trigger clause. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'meta-analysis', 'systematic reviews', 'PubMed', 'clinical', and 'paper draft', but misses common user variations such as 'forest plot', 'PRISMA', 'literature review', 'academic writing', or 'research paper'. The term coverage is decent but not comprehensive. | 2 / 3 |
| Distinctiveness Conflict Risk | The description occupies a very clear niche: clinical meta-analysis paper drafting with PubMed integration. This is highly specific and unlikely to conflict with general writing, coding, or other document skills. | 3 / 3 |

Total: 10 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides a comprehensive workflow for meta-analysis manuscript generation, with clear stages and useful specifics such as word counts, citation formats, and PubMed API usage. However, it is significantly verbose, with the same information repeated across the When to Use, Key Features, and Implementation Details sections, and it lacks integrated validation and feedback loops during the workflow. The boilerplate sections at the top appear auto-generated and add little value.

Suggestions

- Remove the redundant 'When to Use', 'Key Features', and 'Implementation Details' sections, which repeat the same information; consolidate them into a single brief introduction before the workflow.
- Add explicit validation checkpoints within the workflow, such as verifying that PubMed API responses return valid results before proceeding to writing, and validating PMID links before final output.
- Move detailed API descriptions and search allocation tables to a separate reference file (e.g., references/pubmed-search-guide.md) and link to it from the main skill.
- Replace the generic Example Usage section (py_compile, --help) with a concrete end-to-end example showing actual inputs and expected outputs.
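The validation checkpoint suggested above could be sketched as a small guard between retrieval and writing. This is a minimal sketch, assuming the skill parses NCBI E-utilities ESearch responses in JSON form; the function name is hypothetical and the real search_references script may structure things differently:

```python
def validate_esearch_response(payload: dict) -> list[str]:
    """Return the PMIDs from a parsed ESearch JSON payload,
    raising ValueError if the response is malformed or empty.

    Illustrative helper, not the skill's actual interface.
    """
    result = payload.get("esearchresult")
    if result is None:
        raise ValueError("not an ESearch response: missing 'esearchresult'")
    pmids = result.get("idlist", [])
    if not pmids:
        raise ValueError("ESearch returned no PMIDs; refine the query before writing")
    # PMIDs are numeric strings; reject anything else before building citation links
    bad = [p for p in pmids if not p.isdigit()]
    if bad:
        raise ValueError(f"malformed PMIDs in response: {bad}")
    return pmids

# Example with a minimal ESearch-style payload
sample = {"esearchresult": {"count": "2", "idlist": ["31978945", "32109013"]}}
print(validate_esearch_response(sample))  # ['31978945', '32109013']
```

Running a check like this after each retrieval, and again on the assembled reference list before final output, would turn the post-hoc quality checklist into an integrated feedback loop.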

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose with significant redundancy. The 'When to Use' section repeats the description nearly verbatim, 'Key Features' restates the same information again, 'Implementation Details' says 'See Workflow above' then repeats generic guidance, and sections like 'Dependencies' contain placeholder text ('not explicitly version-pinned'). Much content is filler that doesn't add actionable value. | 1 / 3 |
| Actionability | The workflow stages provide reasonably concrete guidance, with code examples for search_references and insert_references, specific word counts, and a structured output format. However, the Example Usage section only shows py_compile and --help commands rather than actual execution, and the scripts' actual interfaces and parameters are not fully specified (e.g., which arguments insert_references.py actually takes from the CLI). | 2 / 3 |
| Workflow Clarity | The five-stage workflow is clearly sequenced and well structured with logical phases. However, validation checkpoints are mostly absent during the workflow itself: there is no explicit verification after reference retrieval (e.g., checking whether PMIDs returned valid results), no validation after section writing, and the quality checklist at the end is a post-hoc list rather than an integrated feedback loop. For a workflow involving external API calls and document manipulation, this is a significant gap. | 2 / 3 |
| Progressive Disclosure | There is one reference to an external file (references/writing-guide.md), which is good, but the skill itself is monolithic, with all workflow details inline. The detailed API descriptions, search allocation tables, and writing module tables could be split into separate reference files. The structure within the file is reasonable, with clear headings, but the overall length suggests content should be better distributed. | 2 / 3 |

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
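The warning above can typically be resolved by nesting non-standard keys under a `metadata` block, as the check itself suggests. A hedged sketch of the fix; the offending key names (`author`, `version`) are hypothetical examples, not the skill's actual keys:

```yaml
# Before: unknown top-level keys trigger the warning
name: meta-manuscript-generator
description: Generates a first draft of a clinical meta-analysis paper...
author: aipoch        # hypothetical unknown key at top level
version: "1.0"        # hypothetical unknown key at top level

# After: the same keys moved under metadata
name: meta-manuscript-generator
description: Generates a first draft of a clinical meta-analysis paper...
metadata:
  author: aipoch
  version: "1.0"
```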

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

