Structures and writes discussion sections for academic papers and research reports. Use when writing a discussion section, interpreting research results, connecting findings to existing literature, addressing study limitations, synthesizing conclusions, or drafting any part of an academic discussion. Helps researchers organize arguments, contextualize data, and produce clear, publication-ready discussion prose.
**Overall score: 70**

| Check | Result | Notes |
|---|---|---|
| Quality | 63% | Does it follow best practices? |
| Impact | Pending | No eval scenarios have been run |
| Validation | Passed | No known issues |
Optimize this skill with Tessl (the path contains a space, so it must be quoted):

```shell
npx tessl skill review --optimize "./scientific-skills/Academic Writing/discussion-section-architect/SKILL.md"
```

## Quality
### Discovery — 92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities, includes natural trigger terms researchers would use, and explicitly states both what the skill does and when to use it. The only minor weakness is potential overlap with other academic writing skills, though the focus on discussion sections specifically helps mitigate this. The description uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'structures and writes discussion sections', 'interpreting research results', 'connecting findings to existing literature', 'addressing study limitations', 'synthesizing conclusions', 'organize arguments', 'contextualize data', 'produce clear, publication-ready discussion prose'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (structures and writes discussion sections, organizes arguments, contextualizes data, produces publication-ready prose) and 'when', with an explicit 'Use when...' clause listing six specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords a user would say: 'discussion section', 'research results', 'findings', 'existing literature', 'study limitations', 'conclusions', 'academic discussion', 'publication-ready'. These cover a good range of terms researchers would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | While it narrows to 'discussion sections' specifically, there could be overlap with broader academic writing skills, literature review skills, or general research-paper writing skills. The focus on 'discussion' is somewhat distinctive, but terms like 'synthesizing conclusions' and 'connecting findings to literature' could trigger for adjacent academic writing tasks. | 2 / 3 |
| **Total** | | **11 / 12 — Passed** |
### Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe bloat caused by generic boilerplate sections that appear templated and not tailored to the actual task of writing discussion sections. The core academic writing guidance (interpretation, literature connection, limitations, Draft→Revise loop) is genuinely useful and moderately actionable, but it's buried under repetitive scaffolding and references to a `scripts/main.py` whose purpose is never clarified. Removing the generic wrapper and focusing on the domain-specific content would dramatically improve this skill.
#### Suggestions

- Remove or drastically reduce the generic boilerplate sections (When to Use, Key Features, Implementation Details, Input Validation, Error Handling, Response Template, Audit-Ready Commands) that repeat the description or reference `scripts/main.py` without explaining what it does — these consume tokens without adding value.
- Consolidate the two competing workflow descriptions (the generic five-step 'Workflow' and the domain-specific 'Draft → Revise Loop') into a single clear workflow focused on the actual task of writing discussion sections.
- Eliminate the repeated copy-pasting of the skill description in the 'When to Use' and 'Key Features' sections — state the purpose once, concisely.
- Either explain what `scripts/main.py` actually does for discussion-section writing, or remove all references to it if the skill is primarily prompt-based guidance.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose and repetitive. The description is copy-pasted multiple times (in 'When to Use', 'Key Features', etc.). Large boilerplate sections (Error Handling, Input Validation, Output Requirements, Response Template) add little value for this specific skill. The generic scaffolding (audit commands, implementation details referencing `scripts/main.py`) bloats the content significantly without adding actionable guidance for writing discussion sections. | 1 / 3 |
| Actionability | The core academic writing sections (Interpret Results, Connect to Literature, Address Limitations, Synthesize Conclusions) provide useful example inputs/outputs and a concrete checklist. However, much of the skill references a `scripts/main.py` that is never explained in terms of what it actually does, making those sections non-actionable. The discussion-specific guidance is moderately concrete but relies on templates rather than fully worked examples. | 2 / 3 |
| Workflow Clarity | The Draft → Revise Loop is a well-structured workflow with a checklist and an explicit re-check step, which is good. However, the generic 'Workflow' section (steps 1–5) is vague and duplicative, and the relationship between the script-based workflow and the writing workflow is unclear. The validation checkpoint in the Draft → Revise loop partially compensates, but the overall workflow is muddled by competing process descriptions. | 2 / 3 |
| Progressive Disclosure | References to `references/guide.md`, `references/examples/`, and `references/audit-reference.md` are present and clearly signaled. However, the SKILL.md itself is monolithic, with substantial inline content that could be split out, and there is significant redundancy between sections (two 'References' sections, repeated scope descriptions). The structure exists but is poorly organized. | 2 / 3 |
| **Total** | | **7 / 12 — Passed** |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
#### Validation for skill structure — 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11 — Passed** |
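As an illustration of the `frontmatter_unknown_keys` warning above, unknown top-level frontmatter keys can typically be moved under a `metadata` block. This is a hypothetical sketch — the report does not name the actual offending keys, so `author` and `version` here are placeholders:

```yaml
# Before: unrecognized top-level keys trigger the warning
name: discussion-section-architect
description: Structures and writes discussion sections for academic papers...
author: Jane Doe       # hypothetical unknown key
version: 1.2.0         # hypothetical unknown key

# After: recognized keys stay at the top level; everything else moves under metadata
name: discussion-section-architect
description: Structures and writes discussion sections for academic papers...
metadata:
  author: Jane Doe
  version: 1.2.0
```

Re-running the review command after the change should clear the warning if those were the only unknown keys.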