Generates the "Results" section for meta-analysis sensitivity analysis based on statistical tables and titles. Use when the user wants to describe sensitivity analysis results or format sensitivity tables for a meta-analysis paper.
Evals: Pending — no eval scenarios have been run.
Checks: Passed — no known issues.
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/meta-results-sensitivity-analysis/SKILL.md"

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-constructed description for a narrow, specialized skill. It clearly communicates both what the skill does and when to use it, with domain-specific trigger terms that are natural for the target audience. The main weakness is that the capability description could be slightly more detailed about the specific actions performed (e.g., interpreting statistical outputs, generating narrative text, formatting tables).
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (meta-analysis sensitivity analysis) and a specific action (generates the 'Results' section), but doesn't list multiple concrete actions beyond generating and formatting. It's more specific than vague but not a comprehensive list of capabilities. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generates the Results section for meta-analysis sensitivity analysis based on statistical tables and titles) and 'when' (explicit 'Use when' clause covering describing sensitivity analysis results or formatting sensitivity tables for a meta-analysis paper). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'sensitivity analysis', 'meta-analysis', 'Results section', 'sensitivity tables', 'meta-analysis paper'. These are terms a researcher would naturally use when requesting this task. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche — meta-analysis sensitivity analysis results writing is a very narrow domain. Unlikely to conflict with other skills given the specificity of 'sensitivity analysis', 'meta-analysis paper', and 'Results section'. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate that adds no value (Failure Handling, Deterministic Output Rules, Completion Checklist, Safety Rules are all things Claude already knows). The actual domain-specific content about sensitivity analysis formatting is minimal and buried, with the only code example being entirely commented out. The document appears to be auto-generated from a template, resulting in contradictory sections, circular references, and very low signal-to-noise ratio.
Suggestions

- Remove all generic boilerplate sections (Failure Handling, Deterministic Output Rules, Completion Checklist, Validation and Safety Rules) that describe behaviors Claude already knows, and focus on the domain-specific sensitivity analysis content.
- Provide a complete, executable code example showing the actual format_sensitivity_result function call with real input data and expected output, instead of commented-out pseudocode.
- Consolidate the contradictory workflow descriptions into a single clear sequence: specify exactly what inputs are needed, what the LLM prompt should contain, and how the formatting script transforms the output, with a concrete before/after example.
- Include a concrete example of a sensitivity analysis table input and the expected formatted 'Results' section output, so Claude can see the exact transformation being requested.
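The second suggestion — shipping a complete, executable example instead of commented-out pseudocode — could look like the sketch below. Note that `format_sensitivity_result`, its signature, and the input table layout are all assumptions about the skill under review, not its documented API:

```python
# Hypothetical sketch of the kind of runnable example the skill should ship.
# The function name, signature, and row layout are assumptions, not the
# skill's actual API.

def format_sensitivity_result(title: str, rows: list[dict]) -> str:
    """Render a leave-one-out sensitivity table as a short 'Results' passage."""
    lines = [f"Sensitivity analysis: {title}."]
    for row in rows:
        lines.append(
            f"Excluding {row['excluded_study']}, the pooled effect was "
            f"{row['effect']:.2f} (95% CI {row['ci_low']:.2f} to "
            f"{row['ci_high']:.2f})."
        )
    return " ".join(lines)

# Example input mimicking a leave-one-out sensitivity table
table = [
    {"excluded_study": "Smith 2019", "effect": 0.42, "ci_low": 0.31, "ci_high": 0.53},
    {"excluded_study": "Lee 2021", "effect": 0.45, "ci_low": 0.33, "ci_high": 0.57},
]

print(format_sensitivity_result("leave-one-out analysis", table))
```

Pairing a concrete input table with its rendered narrative like this would also satisfy the fourth suggestion (a before/after transformation example) in the same stroke.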
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Contains massive amounts of boilerplate (Failure Handling, Deterministic Output Rules, Completion Checklist, Validation and Safety Rules) that Claude already knows. The 'Key Features' section restates the description verbatim. Multiple sections reference each other circularly ('See ## Usage above', 'See ## Workflow above'). The actual domain-specific content (sensitivity analysis formatting) is buried under generic scaffolding. | 1 / 3 |
| Actionability | The code example is entirely commented out, making it non-executable. The workflow references `scripts/validate_skill.py` and `scripts/format_result.py` but provides no actual executable code or concrete commands that would produce results. The 'Example run plan' is generic and not specific to sensitivity analysis. The skill describes what should happen rather than providing concrete, copy-paste-ready instructions. | 1 / 3 |
| Workflow Clarity | The workflow is confusingly split across multiple contradictory sections. The actual 2-step workflow (Generate Description → Format Output) is buried deep in the document. Earlier sections present a different 4-step 'Example run plan' referencing validate_skill.py. There are no validation checkpoints between the LLM generation and formatting steps, and the Quick Validation section contradicts the earlier 'Validation Shortcut' section. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files for detailed content. The document contains redundant sections (two 'When to Use'/'When Not to Use' pairs, two validation sections, duplicated input parameter descriptions). Content is poorly organized, with generic boilerplate sections dominating over the actual skill-specific guidance. | 1 / 3 |
| Total | | 4 / 12 |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |