scientific-writing

Core skill for the deep research and writing tool. Write scientific manuscripts in full paragraphs (never bullet points). Use two-stage process with (1) section outlines with key points using research-lookup then (2) convert to flowing prose. IMRAD structure, citations (APA/AMA/Vancouver), figures/tables, reporting guidelines (CONSORT/STROBE/PRISMA), for research papers and journal submissions.

Overall score: 70

Quality: 58% (Does it follow best practices?)
Impact: 93% (1.78x)

Average score across 3 eval scenarios

Security (by Snyk)

Advisory: suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/scientific-writing/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and domain-specific trigger terms that clearly carve out a scientific manuscript writing niche. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. The description is well-structured and uses appropriate third-person voice.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to write, draft, or revise a scientific manuscript, research paper, or journal article, or mentions IMRAD, reporting guidelines, or academic citations.'
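As a sketch, the suggested clause could slot into the skill's frontmatter description like this (the field names follow the common SKILL.md convention; the exact schema is an assumption):

```yaml
---
name: scientific-writing
description: >
  Write scientific manuscripts in full paragraphs using a two-stage
  outline-to-prose process, IMRAD structure, citations (APA/AMA/Vancouver),
  and reporting guidelines (CONSORT/STROBE/PRISMA). Use when the user asks
  to write, draft, or revise a scientific manuscript, research paper, or
  journal article, or mentions IMRAD, reporting guidelines, or academic
  citations.
---
```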

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: writing scientific manuscripts, two-stage outline-to-prose process, IMRAD structure, citations in specific formats (APA/AMA/Vancouver), figures/tables, reporting guidelines (CONSORT/STROBE/PRISMA). Very detailed about what it does.

3 / 3

Completeness

The 'what' is thoroughly covered with specific actions and formats. However, there is no explicit 'Use when...' clause or equivalent trigger guidance. The description implies when to use it (research papers, journal submissions) but does not explicitly state when Claude should select this skill, which caps this at 2 per the rubric guidelines.

2 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'scientific manuscripts', 'research papers', 'journal submissions', 'IMRAD', 'citations', 'APA', 'AMA', 'Vancouver', 'CONSORT', 'STROBE', 'PRISMA', 'figures/tables'. These cover many natural terms a researcher would use.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear niche: scientific manuscript writing with specific citation formats, reporting guidelines, and IMRAD structure. Unlikely to conflict with general writing or document skills due to the very specific academic/scientific domain markers.

3 / 3

Total: 11 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in scope but severely undermined by verbosity—it reads more like a textbook chapter than a concise skill file. Large sections (field-specific terminology, writing principles, figure type descriptions) explain things Claude already knows and should be either removed or relegated to reference files. The two-stage writing process and LaTeX formatting sections provide genuine value, but they're buried in excessive content that dilutes the skill's effectiveness.

Suggestions

Cut the skill content by 60-70%: Remove the entire field-specific terminology section (Section 10), the writing principles section (Section 6), the common figure types list, and the 'when to use tables vs figures' guidance—Claude already knows all of this. Keep only project-specific conventions and tool-specific commands.

Move the LaTeX scientific_report.sty documentation (Section 8) entirely to a reference file, keeping only a 3-line summary with a pointer in the main skill.

Add explicit validation checkpoints to the manuscript workflow: e.g., 'Verify all citations resolve to real papers using research-lookup,' 'Run reporting guideline checklist before finalizing Methods,' 'Compile LaTeX and verify no errors before proceeding.'

Restructure as a true overview: Keep the two-stage writing process, the mandatory figure generation rules, the workflow stages, and the reference file pointers. Everything else should live in the already-referenced reference files.
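A checkpoint like "verify all citations resolve" can be a small mechanical step rather than abstract advice. A minimal sketch, assuming LaTeX sources with a BibTeX bibliography (the function name is hypothetical):

```python
import re

def unresolved_citations(tex_source: str, bib_source: str) -> set[str]:
    """Return citation keys used in the .tex source with no matching .bib entry."""
    cited = set()
    # Matches \cite{...} and variants such as \citep{...} or \citet{...}.
    for group in re.findall(r"\\cite[a-zA-Z]*\{([^}]*)\}", tex_source):
        cited.update(key.strip() for key in group.split(","))
    # Entry keys from @article{key, ...}, @book{key, ...}, etc.
    defined = set(re.findall(r"@\w+\{([^,\s}]+)\s*,", bib_source))
    return cited - defined

# Example: jones2021 is cited but never defined in the bibliography.
missing = unresolved_citations(
    r"\citep{smith2020} and \cite{smith2020,jones2021}",
    "@article{smith2020, title={Example}},",
)
# missing == {"jones2021"}
```

A check like this could run before the LaTeX compilation step, so unresolvable keys fail fast instead of surfacing as `?` marks in the compiled PDF.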

Dimension | Reasoning | Score

Conciseness

Extremely verbose at ~500+ lines. Extensively explains concepts Claude already knows (what IMRAD is, what bar graphs are for, basic writing principles like 'use precise language'). The field-specific terminology section alone is massive and largely teaches Claude things it already knows (gene nomenclature conventions, SI units, person-first language). Enormous amounts of content could be cut or moved to reference files.

1 / 3

Actionability

The two-stage writing process with outline-to-prose conversion includes a concrete example, and the LaTeX commands/environments are executable. However, much of the skill is descriptive rather than instructive (e.g., listing what each IMRAD section should contain, enumerating field-specific terminology conventions). The generate_schematic.py commands are concrete but the bulk of guidance is abstract advice rather than executable steps.

2 / 3

Workflow Clarity

The manuscript development workflow (Stages 1-4) provides a clear sequence, and the two-stage writing process is well-articulated. However, there are no validation checkpoints or feedback loops—no steps like 'verify citations are valid,' 'run a checklist against reporting guidelines,' or 'validate LaTeX compilation succeeds before proceeding.' For a skill involving complex multi-step document production, this is a significant gap.

2 / 3

Progressive Disclosure

References to external files are well-signaled (references/imrad_structure.md, references/citation_styles.md, etc.) and appear to be one level deep. However, the SKILL.md itself is monolithic—enormous sections on field-specific terminology, figure requirements tables, LaTeX formatting details, and reporting guidelines are inlined when they should clearly be in the referenced files. The skill tries to be both overview and comprehensive reference simultaneously.

2 / 3

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count

SKILL.md is long (718 lines); consider splitting into references/ and linking

Warning

metadata_version

'metadata.version' is missing

Warning

Total: 9 / 11 (Passed)
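The 'metadata.version' warning can be cleared with a one-line frontmatter addition. A sketch, assuming the registry reads a semver string under a metadata key (the key path is taken from the warning above; the value is illustrative):

```yaml
---
name: scientific-writing
metadata:
  version: 1.0.0
---
```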

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
