
peer-review

Structured manuscript/grant review with checklist-based evaluation. Use when writing formal peer reviews with specific criteria: methodology assessment, statistical validity, reporting standards compliance (CONSORT/STROBE), and constructive feedback. Best for actual review writing, manuscript revision. For evaluating claims/evidence quality use scientific-critical-thinking; for quantitative scoring frameworks use scholar-evaluation.

Score: 76 (1.10x)

Quality: 67%
Does it follow best practices?

Impact: 93% (1.10x)
Average score across 3 eval scenarios

Security by Snyk: Advisory
Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/peer-review/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope (structured peer review writing), includes rich domain-specific trigger terms, and explicitly delineates boundaries with related skills. The cross-referencing to scientific-critical-thinking and scholar-evaluation is particularly effective for disambiguation in a multi-skill environment. Uses appropriate third-person voice throughout.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: checklist-based evaluation, methodology assessment, statistical validity checking, reporting standards compliance (CONSORT/STROBE), constructive feedback writing, and manuscript revision. These are concrete, domain-specific activities.

3 / 3

Completeness

Clearly answers both what ('structured manuscript/grant review with checklist-based evaluation') and when ('Use when writing formal peer reviews with specific criteria...Best for actual review writing, manuscript revision'). Also includes explicit boundary guidance distinguishing it from related skills (scientific-critical-thinking, scholar-evaluation).

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'peer review', 'manuscript', 'grant review', 'CONSORT', 'STROBE', 'statistical validity', 'reporting standards', 'manuscript revision'. These cover the natural vocabulary of researchers seeking review assistance.

3 / 3

Distinctiveness / Conflict Risk

Explicitly differentiates itself from two related skills ('For evaluating claims/evidence quality use scientific-critical-thinking; for quantitative scoring frameworks use scholar-evaluation'), creating clear boundaries. The focus on formal peer review with specific standards like CONSORT/STROBE carves out a distinct niche.

3 / 3

Total: 12 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive to a fault—it covers every conceivable aspect of peer review but at enormous token cost, with most content being things Claude already knows (how to evaluate writing quality, what ethical review means, how to be constructive). The 'Visual Enhancement with Scientific Schematics' section is irrelevant promotional content. The presentation review section, while containing some useful concrete guidance (PDF-to-image conversion), is disproportionately long and could be a separate skill or reference file.

Suggestions

Cut content by 60-70%: Remove sections that describe concepts Claude already knows (ethical review basics, writing quality criteria, tone guidance) and focus only on the specific workflow structure, output format, and any non-obvious domain conventions.

Extract the presentation review section into a separate reference file (e.g., references/presentation_review.md) and link to it from the main skill with a one-line summary.

Remove the 'Visual Enhancement with Scientific Schematics' section entirely—it's promotional content for another skill and adds no value to peer review guidance.

Add a concrete example of a completed review output (even abbreviated) showing the expected format for summary statement, major comments, and minor comments, rather than just describing what they should contain.
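To give a sense of what that abbreviated example could look like, here is a purely hypothetical sketch; the headings and bracketed comment categories are illustrative, not the skill's actual output format:

```markdown
## Summary Statement
One paragraph: the manuscript's contribution, the overall assessment, and the
two or three issues that drive the recommendation.

## Major Comments
1. [Methodology] The control condition does not isolate the claimed effect;
   describe the additional comparison needed and why it matters.
2. [Statistics] The primary analysis departs from the pre-registered plan;
   request both the planned and the reported estimates.

## Minor Comments
- Figure 2 is missing axis units.
- CONSORT item 7a (sample size determination) is not reported.
```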

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~500+ lines. Massive amounts of content that Claude already knows (what peer review is, how to be constructive, what IRB approval means, basic writing quality criteria). The presentation review section alone is enormous and largely duplicates common knowledge. The 'Visual Enhancement with Scientific Schematics' section is promotional filler unrelated to the core skill.

1 / 3

Actionability

Provides extensive checklists and structured criteria which are somewhat actionable, but most content is checklist items and questions rather than executable guidance. The presentation PDF-to-image conversion is one of the few concrete, executable examples. Most sections describe what to evaluate rather than showing how to write the actual review output.

2 / 3

Workflow Clarity

The 7-stage workflow provides clear sequencing, and the presentation review has an explicit process. However, there are no validation checkpoints between stages—no feedback loops for when issues are found in early stages that affect later ones, and no guidance on when to stop or escalate. The stages are more of a checklist than a true workflow with decision points.

2 / 3

Progressive Disclosure

References two external files (references/reporting_standards.md and references/common_issues.md) which is good, but the main file is a monolithic wall of text that should have much more content split out. The presentation review section alone could be its own reference file. The schematics section adds noise without clear navigation benefit.

2 / 3

Total: 7 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (570 lines); consider splitting into references/ and linking

Warning

metadata_version

'metadata.version' is missing

Warning

Total: 9 / 11

Passed
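For the metadata_version warning above, a hypothetical frontmatter sketch; the nesting of metadata.version and the version value are assumptions to be checked against the Tessl skill spec before applying:

```markdown
---
name: peer-review
description: Structured manuscript/grant review with checklist-based evaluation. ...
metadata:
  version: 1.0.0
---
```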

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
