
scientific-writing

Core skill for the deep research and writing tool. Write scientific manuscripts in full paragraphs (never bullet points). Use two-stage process with (1) section outlines with key points using research-lookup then (2) convert to flowing prose. IMRAD structure, citations (APA/AMA/Vancouver), figures/tables, reporting guidelines (CONSORT/STROBE/PRISMA), for research papers and journal submissions.

Overall score: 70 (1.78x)

Quality: 58% (Does it follow best practices?)

Impact: 93% (1.78x)

Average score across 3 eval scenarios

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/scientific-writing/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and domain-specific trigger terms that clearly carve out a niche for scientific manuscript writing. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude decisively select this skill. The description also correctly uses third-person voice throughout.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to write, draft, or revise a research paper, scientific manuscript, or journal article, or mentions IMRAD, reporting guidelines, or academic citations.'

Dimension · Reasoning · Score

Specificity

Lists multiple specific concrete actions: writing scientific manuscripts, two-stage outline-to-prose process, IMRAD structure, citations in specific formats (APA/AMA/Vancouver), figures/tables, reporting guidelines (CONSORT/STROBE/PRISMA). Very detailed about what it does.

3 / 3

Completeness

The 'what' is thoroughly covered with specific actions and formats. However, there is no explicit 'Use when...' clause or equivalent trigger guidance. The description implies when to use it (research papers, journal submissions) but does not explicitly state when Claude should select this skill, which caps this at 2 per the rubric.

2 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'scientific manuscripts', 'research papers', 'journal submissions', 'IMRAD', 'citations', 'APA', 'AMA', 'Vancouver', 'CONSORT', 'STROBE', 'PRISMA', 'figures/tables'. These cover many natural terms a researcher would use.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear niche: scientific manuscript writing with specific citation formats, reporting guidelines, and IMRAD structure. Unlikely to conflict with general writing or document skills due to the domain-specific terminology and narrow focus.

3 / 3

Total: 11 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in scope but severely undermined by verbosity. It explains many concepts Claude already knows (basic writing principles, what different chart types are for, field-specific conventions), inflating the token cost dramatically. The two-stage writing process and LaTeX formatting examples are genuinely valuable, but the skill would benefit enormously from moving detailed content into reference files and keeping the main skill lean and directive.

Suggestions

Move the entire field-specific terminology section (Section 10) to a reference file—Claude already knows most of these conventions and can look them up when needed.

Move detailed LaTeX examples and the figure generation guidance to their respective reference files, keeping only a brief summary with links in the main skill.

Remove explanations of concepts Claude already knows (what bar graphs show, what active voice is, what SI units are) and replace with only the project-specific conventions or preferences.

Add explicit validation checkpoints to the manuscript workflow (e.g., 'Verify all citations resolve to real papers,' 'Run reporting guideline checklist before finalizing,' 'Confirm word counts meet journal limits before proceeding to Stage 4').

Dimension · Reasoning · Score

Conciseness

Extremely verbose at ~500+ lines. Extensively explains concepts Claude already knows (what IMRAD is, what bar graphs are for, basic writing principles like 'use precise language'). The field-specific terminology section alone is massive and largely teaches Claude things it already knows (gene nomenclature conventions, SI units, person-first language). Enormous amounts of padding throughout.

1 / 3

Actionability

Provides some concrete examples (the outline-to-prose conversion, LaTeX code snippets, bash commands for figure generation), but much of the content is descriptive rather than instructive. Many sections read like textbook explanations rather than executable guidance. The two-stage writing process example is genuinely useful, but most other sections are advisory rather than actionable.

2 / 3

Workflow Clarity

The manuscript development workflow (Stages 1-4) provides a clear sequence, and the two-stage writing process is well-articulated. However, there are no validation checkpoints or feedback loops—no steps like 'verify citations are valid before proceeding' or 'check reporting guideline compliance before finalizing.' For a skill involving complex multi-step document production, the absence of explicit verification steps is a gap.

2 / 3

Progressive Disclosure

References to external files are well-signaled (references/imrad_structure.md, references/citation_styles.md, etc.), which is good. However, the SKILL.md itself is monolithic—it inlines enormous amounts of content that should live in those reference files (the entire field-specific terminology section, detailed LaTeX formatting examples, extensive figure generation guidance). The main file should be a concise overview pointing to these details, not containing them.

2 / 3

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria · Description · Result

skill_md_line_count

SKILL.md is long (718 lines); consider splitting into references/ and linking

Warning

metadata_version

'metadata.version' is missing

Warning
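
The `metadata.version` warning above can be resolved by adding a version field to the SKILL.md frontmatter. The sketch below is a minimal illustration only: the `metadata.version` key comes from the validation message, while the surrounding field names (`name`, `description`) and the version value are assumptions based on common SKILL.md frontmatter conventions, not confirmed by this review.

```yaml
---
name: scientific-writing
description: >
  Core skill for the deep research and writing tool.
  Write scientific manuscripts in full paragraphs (never bullet points).
metadata:
  version: 1.0.0   # hypothetical value; the validator only requires the key to exist
---
```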

Total: 9 / 11 (Passed)

Repository
K-Dense-AI/claude-scientific-skills
Reviewed
