
study-limitations-drafter

Use study limitations drafter for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.


Quality: 28% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/study-limitations-drafter/SKILL.md"

Quality

Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description fails to communicate what the skill actually does, relying on abstract meta-language ('structured execution', 'explicit assumptions', 'clear output boundaries') instead of concrete actions. While it attempts to specify when to use it, the lack of specific capabilities makes it difficult for Claude to understand what this skill produces or how it helps with study limitations.

Suggestions

- Replace abstract language with concrete actions, e.g. "Drafts study limitations sections by identifying methodological constraints, sample size issues, and generalizability concerns"
- Add specific trigger terms users would naturally say: "research limitations", "thesis limitations section", "methodology weaknesses", "paper constraints"
- Restructure to clearly separate what it does from when to use it: "Generates structured limitations sections for academic papers. Use when writing dissertations, journal articles, or research reports that need limitations discussion."
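Taken together, these suggestions might yield frontmatter along the lines of the sketch below. This is illustrative only: `name` and `description` are the conventional SKILL.md frontmatter keys, and the exact wording is an assumption, not the maintainer's actual text.

```yaml
---
name: study-limitations-drafter
description: >
  Generates structured limitations sections for academic papers by identifying
  methodological constraints, sample size issues, and generalizability concerns.
  Use when writing dissertations, journal articles, or research reports that
  need a limitations discussion ("research limitations", "thesis limitations
  section", "methodology weaknesses", "paper constraints").
---
```

This keeps the "what" (generates structured limitations sections by identifying specific weaknesses) cleanly separated from the "when" (dissertations, journal articles, research reports), and folds the natural trigger terms into the description itself.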

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague language like "structured execution", "explicit assumptions", and "clear output boundaries" without describing any concrete actions. It doesn't specify what the skill actually does (e.g., "drafts limitation sections", "identifies methodological weaknesses"). | 1 / 3 |
| Completeness | The description only provides a "when" clause but fails to explain "what" the skill actually does. There's no explanation of the concrete outputs or actions: only abstract qualities like "structured execution" and "clear output boundaries". | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like "study limitations", "academic writing", and "drafter", but misses common variations users might say such as "research limitations", "thesis", "dissertation", "methodology section", or "paper weaknesses". | 2 / 3 |
| Distinctiveness / Conflict Risk | The term "study limitations drafter" provides some specificity to academic writing, but "academic writing workflows" is broad enough to potentially conflict with other academic writing skills. The abstract qualifiers don't help distinguish it. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from severe template bloat, containing extensive generic boilerplate (security checklists, risk assessments, lifecycle status) that overwhelms the actual skill-specific content. The core functionality—generating study limitation statements—is poorly demonstrated with no executable example showing input-to-output transformation. The document prioritizes process documentation over actionable guidance.

Suggestions

- Remove or drastically reduce boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that don't add skill-specific value
- Add a complete executable example showing: input limitations list → running the script → actual output paragraph(s)
- Fix broken cross-references ("See ## Prerequisites above" when Prerequisites appears below) and consolidate redundant sections
- Replace generic workflow steps with specific validation for limitation statement quality (e.g., tone check, completeness verification)
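A complete executable example of the kind the second suggestion asks for might look like the sketch below. The `--input` flag and file name are hypothetical: the actual parameters of `scripts/main.py` are not documented in this review, so the maintainer would substitute the real invocation.

```markdown
## Example

Given a limitations list in `limitations.txt`:

    single-center design
    self-reported outcome measures

run the script (flags hypothetical):

    python scripts/main.py --input limitations.txt

and include its actual output, e.g. the full paragraph beginning
"While the single-center design...", rather than a fragment.
```

Showing the real input, command, and full output paragraph is what turns the current fragment into the input-to-output transformation the reviewers are asking for.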

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with excessive boilerplate, redundant sections (e.g., "See ## Prerequisites above" when Prerequisites is below), and template-like content that adds no value. The actual skill-specific content (parameters, example output) is buried under generic workflow descriptions Claude already knows. | 1 / 3 |
| Actionability | Provides some concrete elements like bash commands and a parameters table, but the core functionality lacks executable code examples. The "Example" section shows only a fragment of output ("While the single-center design...") without showing how to actually generate it with the script. | 2 / 3 |
| Workflow Clarity | Steps are listed in the Workflow section with a logical sequence, but validation checkpoints are generic ("validate that the request matches documented scope") rather than specific. No concrete validation commands for the actual limitation drafting output quality. | 2 / 3 |
| Progressive Disclosure | References external files (references/audit-reference.md, scripts/main.py) appropriately, but the main document is bloated with sections that should be condensed or removed. The structure exists but content organization is poor, with redundant cross-references to sections that don't exist yet ("See ## Prerequisites above"). | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

