Generates a meta-analysis baseline characteristics section (text + table) from raw data. Supports Chinese and English. Use when the user provides baseline data and wants a formatted results section.
Evals: Pending (no eval scenarios have been run)
Validation: Passed (no known issues)

Optimize this skill with Tessl:

    npx tessl skill review --optimize "./scientific-skills/Academic Writing/meta-baseline-generator/SKILL.md"

Quality: 60%
Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted description that clearly defines a narrow, specific task (meta-analysis baseline characteristics generation) with explicit trigger guidance. Its main weakness is that trigger terms could be expanded to include more natural variations that researchers might use when requesting this type of output, such as 'Table 1', 'study demographics', or 'systematic review'.
Suggestions

- Add more natural trigger term variations users might say, such as 'Table 1', 'patient demographics', 'study characteristics', 'participant characteristics', or 'systematic review baseline'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: generates a meta-analysis baseline characteristics section, produces both text and table, supports Chinese and English, works from raw data to formatted results section. | 3 / 3 |
| Completeness | Clearly answers both what ('Generates a meta-analysis baseline characteristics section (text + table) from raw data. Supports Chinese and English.') and when ('Use when the user provides baseline data and wants a formatted results section.'). | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'meta-analysis', 'baseline characteristics', 'baseline data', and 'results section', but misses common variations users might say such as 'study characteristics', 'patient demographics', 'Table 1', 'systematic review', or 'participant characteristics'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche combining meta-analysis, baseline characteristics, and bilingual support. Unlikely to conflict with other skills due to the narrow domain of academic meta-analysis baseline reporting. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill suffers from significant verbosity and redundancy — the description is repeated across multiple sections, and generic boilerplate (likely auto-generated) dilutes the useful content. The core workflow in steps 1-5 is the strongest part, providing a reasonable sequence with a concrete code example, but it lacks validation checkpoints. The skill would benefit greatly from removing redundant sections and adding error handling guidance.
Suggestions

- Remove or consolidate the redundant 'When to Use', 'Key Features', and 'Implementation Details' sections — they repeat the same information. Keep only the Workflow, Rules, and Testing sections.
- Add validation checkpoints to the workflow, e.g., verify LLM output format before passing to the script, check that the script output contains the expected citation placement, and handle cases where the LLM doesn't produce valid markdown tables.
- Replace the Example Usage section (which only shows py_compile and --help) with a concrete end-to-end example showing sample input data and expected output.
- Fix the broken reference 'See ## Workflow above for related details' in Implementation Details — either remove the section entirely or integrate its unique content into the Workflow section.
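The second suggestion's pre-flight check could be sketched as a small helper run on the LLM output before it is handed to the script. This is a minimal illustrative heuristic, not code taken from the skill; the function name is hypothetical:

```python
import re

def looks_like_markdown_table(text: str) -> bool:
    """Heuristic check that `text` contains a well-formed markdown table:
    a header row, a dash separator row, and at least one data row,
    all with a consistent column count."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip().startswith("|")]
    if len(lines) < 3:  # header + separator + at least one data row
        return False
    cols = lines[0].count("|")
    # Separator row may contain only pipes, dashes, colons, and spaces.
    if not re.fullmatch(r"[|\s:-]+", lines[1]):
        return False
    return all(ln.count("|") == cols for ln in lines[1:])
```

If the check fails, the workflow could re-prompt the LLM instead of passing malformed output to the script, which is the feedback loop the Workflow Clarity dimension below notes is missing.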
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is highly verbose and repetitive. The 'When to Use' section restates the description multiple times, 'Key Features' repeats the description verbatim, 'Implementation Details' references a non-existent '## Workflow above' and restates generic principles Claude already knows. Many sections add no new information. | 1 / 3 |
| Actionability | The Workflow section provides a reasonably concrete multi-step process with a Python code snippet for step 4, and the Rules section gives specific formatting requirements. However, much of the guidance is vague (e.g., 'Use the prompt in references/prompts.md'), the Example Usage section only shows compilation/help commands rather than actual usage, and the script's actual behavior is only partially described. | 2 / 3 |
| Workflow Clarity | The workflow has a clear 5-step sequence with inputs and outputs specified for each step, which is good. However, there are no validation checkpoints or error recovery steps — no feedback loop for when the LLM output is malformed, when the script fails, or when the citation insertion doesn't work correctly. For a multi-step process involving LLM generation and script processing, this is a notable gap. | 2 / 3 |
| Progressive Disclosure | The skill references external files (references/prompts.md, scripts/text_processor.py) appropriately, but the main file itself is poorly organized with redundant sections ('When to Use', 'Key Features', 'Implementation Details' all overlap significantly). The reference to 'See ## Workflow above' in Implementation Details is broken/confusing. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation: 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |