
peer-review

Structured manuscript/grant review with checklist-based evaluation. Use when writing formal peer reviews with specific criteria: methodology assessment, statistical validity, reporting standards compliance (CONSORT/STROBE), and constructive feedback. Best for actual review writing, manuscript revision. For evaluating claims/evidence quality use scientific-critical-thinking; for quantitative scoring frameworks use scholar-evaluation.

Score: 76 (1.10x)

Quality: 67% (Does it follow best practices?)

Impact: 93% (1.10x) based on the average score across 3 eval scenarios

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/peer-review/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope (structured peer review writing), includes rich domain-specific trigger terms, and explicitly delineates boundaries with related skills. The cross-referencing to scientific-critical-thinking and scholar-evaluation is particularly effective for disambiguation in a multi-skill environment. Uses appropriate third-person voice throughout.

Specificity (3 / 3): Lists multiple specific, concrete actions: checklist-based evaluation, methodology assessment, statistical validity checking, reporting standards compliance (CONSORT/STROBE), constructive feedback writing, and manuscript revision. These are concrete, domain-specific activities.

Completeness (3 / 3): Clearly answers both what ('structured manuscript/grant review with checklist-based evaluation') and when ('Use when writing formal peer reviews with specific criteria...Best for actual review writing, manuscript revision'). Also includes explicit boundary guidance distinguishing it from related skills (scientific-critical-thinking, scholar-evaluation).

Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'peer review', 'manuscript', 'grant review', 'CONSORT', 'STROBE', 'statistical validity', 'reporting standards', 'manuscript revision'. These cover the natural vocabulary of researchers seeking review assistance.

Distinctiveness / Conflict Risk (3 / 3): Explicitly differentiates itself from two related skills ('For evaluating claims/evidence quality use scientific-critical-thinking; for quantitative scoring frameworks use scholar-evaluation'), creating clear boundaries. The focus on formal peer review with specific standards like CONSORT/STROBE carves out a distinct niche.

Total: 12 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely over-engineered for its purpose. It reads like a textbook chapter on peer review rather than a concise skill file, explaining many concepts Claude already understands (ethical review basics, writing quality fundamentals, what a good abstract looks like). The presentation review section is disproportionately large and could be its own skill. The lack of concrete review examples (showing actual review text for a sample manuscript) limits actionability despite the thorough checklists.

Suggestions

Cut content by 60-70%: Remove explanations of concepts Claude already knows (what IRB is, what good grammar means, how to be respectful in reviews) and keep only the structural framework, decision criteria, and output format.

Add 1-2 concrete worked examples showing actual review input (brief manuscript excerpt) and expected review output (formatted review text), so Claude can pattern-match on tone, specificity, and structure.

Extract the presentation review section into a separate reference file (e.g., references/presentation_review.md) and the section-by-section checklist into another (e.g., references/section_checklist.md), keeping only summaries and links in the main SKILL.md.

Add explicit decision points in the workflow (e.g., 'If Stage 1 reveals fundamental flaws, skip to Stage 6 ethical check and recommend reject with brief justification') to create actual feedback loops rather than a linear checklist.
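The last suggestion (explicit decision points instead of a linear checklist) could be sketched roughly as follows. This is an illustrative sketch only; the stage names, the `fundamental_flaw` flag, and the function signatures are hypothetical and not taken from the skill itself:

```python
# Hypothetical sketch: a staged review workflow with an explicit
# early-exit decision point, rather than a strictly linear checklist.
# Stage names and result keys are illustrative, not from the skill.

def run_review(manuscript, stages, ethics_check):
    """Run review stages in order; bail out early on fundamental flaws."""
    findings = []
    for name, stage in stages:
        result = stage(manuscript)
        findings.append((name, result))
        if result.get("fundamental_flaw"):
            # Decision point: skip the remaining stages, run the
            # ethical check, and recommend rejection with the
            # findings gathered so far as the brief justification.
            findings.append(("ethics", ethics_check(manuscript)))
            return {"recommendation": "reject", "findings": findings}
    return {"recommendation": "revise", "findings": findings}
```

A real version would carry richer per-stage output (major/minor comments, checklist results), but even this shape forces the author of the skill to state when a reviewer should stop rather than mechanically completing every checklist.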

Conciseness (1 / 3): Extremely verbose at ~500+ lines. Massive amounts of content Claude already knows (what peer review is, how to be constructive, what IRB approval means, basic writing quality criteria). The presentation review section alone is enormous and largely duplicates common knowledge. The 'Visual Enhancement with Scientific Schematics' section is promotional filler for another skill. Checklist items like 'Is grammar and spelling correct?' waste tokens on things Claude inherently knows.

Actionability (2 / 3): The skill provides structured checklists and a clear report format (summary, major comments, minor comments), which is somewhat actionable. However, most content is descriptive rather than executable: it lists what to check but provides no concrete examples of actual review text, no example input/output of a review, and no templates with filled-in examples. The presentation section has some concrete commands (pdf_to_images.py) but the core peer review content lacks worked examples.

Workflow Clarity (2 / 3): The 7-stage workflow is clearly sequenced and logically ordered, which is good. However, there are no validation checkpoints or feedback loops between stages, and no guidance on when to stop or revisit earlier stages based on findings. The presentation review workflow has better sequencing with explicit steps, but the main peer review workflow reads more like a reference document than an operational procedure with decision points.

Progressive Disclosure (2 / 3): References to 'references/reporting_standards.md' and 'references/common_issues.md' suggest some content splitting, but no bundle files are provided to verify these exist. The main SKILL.md is monolithic: the enormous presentation review section and the detailed section-by-section checklists could easily be split into separate reference files. The scientific-schematics cross-reference is appropriate, but the inline content is far too heavy for an overview document.

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

skill_md_line_count (Warning): SKILL.md is long (570 lines); consider splitting into references/ and linking

metadata_version (Warning): 'metadata.version' is missing

Total: 9 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

