# Skill review: study-limitations-drafter

**Description under review:** "Use study limitations drafter for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries."
Overall score: 37%

Does it follow best practices?

**Impact:** Pending — no eval scenarios have been run.

**Validation:** Passed — no known issues.

To optimize this skill with Tessl, run:

`npx tessl skill review --optimize "./scientific-skills/Academic Writing/study-limitations-drafter/SKILL.md"`

## Quality
### Discovery — 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description names its domain (study limitations drafting for academic writing) but fails to specify concrete actions it performs, relying instead on abstract buzzwords like 'structured execution' and 'clear output boundaries'. It partially addresses when to use it but lacks explicit trigger guidance and natural user language variations.
#### Suggestions

- Replace abstract phrases like 'structured execution, explicit assumptions, and clear output boundaries' with concrete actions such as 'Drafts study limitations sections by identifying methodological weaknesses, sample constraints, and generalizability issues'.
- Add an explicit 'Use when...' clause with natural trigger terms like 'limitations section', 'research limitations', 'manuscript weaknesses', 'paper limitations', or 'thesis limitations'.
- Include specific output details, such as the format the limitations are drafted in (e.g., paragraph form, bullet points) and the inputs expected (e.g., study design, methodology description).
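Taken together, these suggestions might yield a revised frontmatter description along the following lines. This is an illustrative sketch only; the field layout follows the common SKILL.md frontmatter convention, and the exact wording is invented here, not taken from the skill:

```markdown
---
name: study-limitations-drafter
description: >
  Drafts the limitations section of an academic manuscript by identifying
  methodological weaknesses, sample constraints, and generalizability issues,
  then pairing each limitation with a mitigation statement. Accepts a study
  design or methodology description as input and outputs polished paragraphs
  or bullet points. Use when the user mentions a limitations section, research
  limitations, manuscript weaknesses, paper limitations, or thesis limitations.
---
```

A description in this shape names concrete actions, expected inputs and outputs, and natural trigger terms, which addresses all three suggestions at once.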
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'structured execution', 'explicit assumptions', and 'clear output boundaries' without listing any concrete actions. It does not specify what the skill actually does beyond naming itself as a 'study limitations drafter'. | 1 / 3 |
| Completeness | The 'what' is weakly implied (drafts study limitations) and the 'when' is partially addressed with 'academic writing workflows that need structured execution', but neither is explicit or detailed. The 'Use when' equivalent exists but is vague about actual triggers. | 2 / 3 |
| Trigger Term Quality | It includes some relevant terms like 'study limitations', 'academic writing', and 'drafter', which a user might mention. However, it misses common variations like 'research limitations', 'manuscript', 'paper limitations section', or 'thesis'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'study limitations' provides some niche specificity, but the broad framing around 'academic writing workflows' could overlap with other academic writing skills. The abstract qualifiers ('structured execution', 'explicit assumptions') don't help distinguish it. | 2 / 3 |
| **Total** | Passed | 7 / 12 |
### Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that consume tokens without teaching Claude how to draft study limitations. The core academic writing guidance is thin—there's one example sentence and a parameters table, but no concrete input/output examples showing how to transform a list of limitations into well-written paragraphs. The repetitive self-references and generic workflow steps further dilute the actionable content.
#### Suggestions

- Remove boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Input Validation) that don't teach Claude how to draft limitation statements; these consume roughly 40% of the tokens with near-zero instructional value.
- Add 2-3 concrete input/output examples showing how a list of limitations with severities becomes finished limitation paragraphs with mitigation strategies, so Claude knows exactly what good output looks like.
- Consolidate the duplicated workflow descriptions (Example Usage run plan, Workflow section, Implementation Details) into a single clear sequence with academic-writing-specific validation steps (e.g., check tone balance, verify each limitation has a corresponding mitigation).
- Remove circular cross-references ('See ## Prerequisites above', 'See ## Workflow above') and the repeated skill description that appears verbatim in three places.
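The second suggestion could be realized with an in-skill example along these lines. The output paragraph builds on the skill's one existing example sentence ('While the single-center design limits generalizability...'); the study details and severity labels are hypothetical, added here purely for illustration:

```markdown
## Example

Input:
- Limitation: single-center design (severity: high)
- Limitation: self-reported outcomes (severity: moderate)

Output:
While the single-center design limits generalizability to other clinical
settings, the findings establish a baseline for multi-site replication.
Reliance on self-reported outcomes may introduce recall bias; future work
could triangulate these measures with administrative records.
```

Two or three such pairs, varying in severity and study type, would give Claude a concrete input-to-output mapping instead of a single isolated sentence.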
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Workflow above'). The description is repeated verbatim in multiple places. Boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, and Evaluation Criteria add significant token cost with minimal actionable value for Claude. Many sections explain things Claude already knows (error handling principles, input validation concepts). | 1 / 3 |
| Actionability | The Parameters table and example usage commands are somewhat concrete, but the core task of generating limitation paragraphs lacks executable examples showing input-to-output mapping. The single example line ('While the single-center design limits generalizability...') is too brief. The workflow steps are generic process descriptions rather than specific instructions for drafting limitation text. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a numbered sequence and the Error Handling section covers failure modes, but the steps are generic and could apply to almost any skill. There are no validation checkpoints specific to academic writing quality (e.g., checking tone, verifying limitation-mitigation pairing). The 'Example run plan' duplicates the workflow without adding specificity. | 2 / 3 |
| Progressive Disclosure | There is a reference to references/audit-reference.md, which is appropriate, but the main file itself is a monolithic wall of text with many sections that could be consolidated or removed. Circular cross-references ('See ## Prerequisites above', 'See ## Workflow above') add confusion rather than navigation clarity. The content that is inline is bloated rather than appropriately split. | 2 / 3 |
| **Total** | Passed | 7 / 12 |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 10 / 11 checks passed | |