
peer-review

Structured manuscript/grant review with checklist-based evaluation. Use when writing formal peer reviews with specific criteria: methodology assessment, statistical validity, reporting standards compliance (CONSORT/STROBE), and constructive feedback. Best for actual review writing and manuscript revision. For evaluating claims/evidence quality, use scientific-critical-thinking; for quantitative scoring frameworks, use scholar-evaluation.

Overall score: 76 (1.10x)

Quality: 67% (Does it follow best practices?)

Impact: 93%

Average score across 3 eval scenarios

Security (by Snyk): Passed. No known issues.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope (structured peer review writing), includes rich natural trigger terms from the academic review domain, and explicitly delineates boundaries with related skills. The inclusion of specific reporting standards (CONSORT/STROBE) and the disambiguation from adjacent skills are particularly strong features. Uses appropriate third-person voice throughout.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: checklist-based evaluation, methodology assessment, statistical validity checking, reporting standards compliance (CONSORT/STROBE), constructive feedback writing, and manuscript revision. These are concrete, domain-specific capabilities. | 3 / 3 |
| Completeness | Clearly answers both what ('structured manuscript/grant review with checklist-based evaluation') and when ('Use when writing formal peer reviews with specific criteria... Best for actual review writing, manuscript revision'). Also includes explicit boundary guidance distinguishing it from related skills (scientific-critical-thinking, scholar-evaluation). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'peer review', 'manuscript', 'grant review', 'CONSORT', 'STROBE', 'statistical validity', 'reporting standards', 'manuscript revision'. These cover the natural vocabulary of researchers seeking review assistance. | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly differentiates itself from two related skills (scientific-critical-thinking for claims/evidence quality, scholar-evaluation for quantitative scoring frameworks), creating clear boundaries. The niche of formal peer review writing with checklist-based evaluation and reporting standards is highly specific and unlikely to conflict. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive but severely over-long, explaining many concepts Claude already understands (peer review etiquette, basic scientific methodology, what IRB approval is). The presentation review section consumes a disproportionate amount of space and should be a separate referenced file. While the structured workflow and checklists provide some actionability, the skill would benefit greatly from concrete examples of actual review output and significant trimming of obvious content.

Suggestions

Reduce content by 60-70%: Remove explanations of concepts Claude already knows (what peer review is, what IRB approval means, basic writing quality criteria, tone advice like 'be respectful'). Focus only on the specific workflow, decision criteria, and output format.

Add a concrete example of a completed review: Show a sample summary statement, 2-3 major comments, and 1-2 minor comments with the exact format and tone expected, so Claude can pattern-match.

Move the presentation review section to a separate referenced file (e.g., references/presentation_review.md) since it's a distinct sub-workflow that doubles the skill's length.

Remove the 'Visual Enhancement with Scientific Schematics' section entirely—it's promotional content for another skill and not relevant to the core peer review task.
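
For the file-split suggestion above, a minimal sketch of what the slimmed-down SKILL.md section might look like once the inlined content moves out (the target path comes from the suggestion itself; the heading and link wording are assumed for illustration):

```markdown
## Presentation Review

For reviewing conference slides and posters, follow the dedicated
workflow in [references/presentation_review.md](references/presentation_review.md).
```

Linking rather than inlining keeps the main skill file focused on the manuscript review workflow while preserving the sub-workflow for agents that need it.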

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~500+ lines. Massive amounts of content that Claude already knows (what peer review is, how to be constructive, what IRB approval means, basic writing quality criteria). The presentation review section alone is enormous and largely duplicates common knowledge. Checklist items like 'Is grammar and spelling correct?' waste tokens. | 1 / 3 |
| Actionability | Provides structured checklists and a clear review report format, which is somewhat actionable. However, most content is descriptive rather than executable: it lists what to check but provides no concrete examples of actual review text, no templates with filled-in examples, and no sample review output. The presentation conversion script is a concrete command, but most guidance remains abstract. | 2 / 3 |
| Workflow Clarity | The 7-stage workflow is clearly sequenced and logically ordered, which is good. However, there are no validation checkpoints or feedback loops between stages, and no guidance on when to stop or revisit earlier stages based on findings. The presentation review section has a clearer workflow with explicit steps, but the main review workflow lacks error recovery or decision points. | 2 / 3 |
| Progressive Disclosure | References two external files (references/reporting_standards.md and references/common_issues.md), which is good progressive disclosure. However, the SKILL.md itself is monolithic: the massive presentation review section, the detailed section-by-section checklists, and the reporting standards content should be split into separate reference files rather than inlined. The presentation-specific content alone could be its own referenced document. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (570 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | 9 / 11 | Passed |
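
The metadata_version warning could likely be cleared by declaring a version in the SKILL.md frontmatter. A minimal sketch, assuming a standard YAML frontmatter layout with a nested metadata block (the exact field structure is inferred from the warning text, not from the skill spec):

```yaml
---
name: peer-review
description: Structured manuscript/grant review with checklist-based evaluation.
metadata:
  version: 1.0.0
---
```

Versioning the skill also makes it easier for registries and agents to detect when a cached copy is stale.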

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

