
discussion-section-architect

Structures and writes discussion sections for academic papers and research reports. Use when writing a discussion section, interpreting research results, connecting findings to existing literature, addressing study limitations, synthesizing conclusions, or drafting any part of an academic discussion. Helps researchers organize arguments, contextualize data, and produce clear, publication-ready discussion prose.


Quality: 67% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/discussion-section-architect/SKILL.md"

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its scope (academic discussion sections), lists concrete actions, and provides explicit trigger guidance via a well-constructed 'Use when...' clause. It uses third-person voice consistently and includes natural keywords researchers would use. The description is comprehensive without being verbose.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'structures and writes discussion sections', 'interpreting research results', 'connecting findings to existing literature', 'addressing study limitations', 'synthesizing conclusions', 'organize arguments', 'contextualize data', 'produce clear, publication-ready discussion prose'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (structures and writes discussion sections, organizes arguments, contextualizes data, produces publication-ready prose) and 'when' with an explicit 'Use when...' clause listing six specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'discussion section', 'research results', 'findings', 'existing literature', 'study limitations', 'conclusions', 'academic discussion', 'publication-ready'. These cover a good range of terms a researcher would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Narrowly scoped to discussion sections of academic papers specifically, which is a clear niche. The triggers are distinct enough to avoid confusion with general writing skills, literature review skills, or methods section skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill contains genuinely useful academic discussion writing guidance (example prompts/outputs, the Draft → Revise Loop with checklist, recommended structure) buried under extensive generic boilerplate. The description is repeated verbatim three times, and large sections (Output Requirements, Error Handling, Input Validation, Response Template) are generic templates that don't add domain-specific value. The references to `scripts/main.py` without showing what it actually does create a disconnect between the script-execution framing and the actual writing guidance.

Suggestions

- Remove all duplicate content — the skill description appears verbatim in 'When to Use' and 'Key Features', and could be stated once in a single sentence.
- Delete or drastically compress the generic boilerplate sections (Output Requirements, Error Handling, Input Validation, Response Template) — these describe behaviors Claude already knows and add ~40 lines of noise.
- Either show what `scripts/main.py` actually does with concrete code/output, or remove all script-related sections and focus purely on the writing instruction workflow, which is the actual value of this skill.
- Consolidate the two competing workflow descriptions into a single clear workflow: the domain-specific Draft → Revise Loop is far more valuable than the generic 5-step 'Workflow' section.
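As an illustration only, applying these suggestions might collapse the file into a skeleton like the one below. Everything here is a hypothetical sketch: apart from the frontmatter fields and the section names quoted in this review (the Draft → Revise Loop, `references/guide.md`, `references/examples/`), none of it is taken from the actual SKILL.md.

```markdown
---
name: discussion-section-architect
description: Structures and writes discussion sections for academic papers
  and research reports. Use when writing a discussion section, interpreting
  research results, connecting findings to existing literature, addressing
  study limitations, or synthesizing conclusions.
---

# Discussion Section Architect

## Workflow: Draft → Revise Loop
1. Interpret results
2. Connect findings to existing literature
3. Address study limitations
4. Synthesize conclusions
5. Re-check the draft against the revision checklist; repeat until every item passes

## References
- references/guide.md
- references/examples/
```

The point of the sketch is the shape, not the wording: one statement of the description (in frontmatter), one workflow, and the domain guidance pushed into reference files rather than repeated inline.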

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose and repetitive. The description is copy-pasted multiple times (in 'When to Use', 'Key Features', and elsewhere). There are large boilerplate sections (Output Requirements, Error Handling, Input Validation, Response Template) that add little value and explain things Claude already knows. The 'Implementation Details' section says 'See Workflow above' then repeats generic guidance. | 1 / 3 |
| Actionability | The core academic writing sections (Interpret Results, Connect to Literature, Address Limitations, Synthesize Conclusions) provide useful example inputs/outputs and a concrete checklist. However, the script-based sections are vague — `scripts/main.py` is referenced repeatedly but no concrete code or actual functionality is shown. The skill mixes actionable writing guidance with non-actionable boilerplate about running undefined scripts. | 2 / 3 |
| Workflow Clarity | The Draft → Revise Loop is well-structured with a clear checklist and explicit re-check step, which is good. However, the main 'Workflow' section (steps 1-5) is generic boilerplate that could apply to any skill. There are two competing workflow descriptions (the generic 5-step one and the domain-specific Draft → Revise Loop), creating confusion about which to follow. | 2 / 3 |
| Progressive Disclosure | References to external files (references/guide.md, references/examples/, references/audit-reference.md) are present and one level deep. However, the main file itself is monolithic with significant content that could be separated (e.g., the boilerplate sections). The structure mixes domain-specific writing guidance with generic execution boilerplate, making navigation harder. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
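The `frontmatter_unknown_keys` warning typically means the SKILL.md frontmatter contains a key outside the recognized set. A minimal sketch of the fix, assuming the spec allows an arbitrary `metadata` map (the `author` key here is a hypothetical example, not the actual offending key):

```yaml
# Hypothetical frontmatter that would trigger the warning:
# 'author' is not a recognized top-level key in this sketch.
name: discussion-section-architect
description: Structures and writes discussion sections for academic papers...
author: aipoch
---
# Nesting the unknown key under 'metadata' (where arbitrary keys
# are assumed to be allowed) would clear the warning.
name: discussion-section-architect
description: Structures and writes discussion sections for academic papers...
metadata:
  author: aipoch
```

Check the actual frontmatter against the skill spec to see which key was flagged; the right fix may simply be deleting it.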

Repository: aipoch/medical-research-skills (Reviewed)

