
peer-review

Conduct professional peer reviews for papers or theses, providing structured evaluations and improvement suggestions; use when you need a pre-submission assessment, an internal review, or academic quality control.

Overall score: 80

Quality: 76% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/peer-review/SKILL.md"

(The path contains a space, so it must be quoted for the shell.)

Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is well-structured with clear 'what' and 'when' clauses, making it functionally complete. Its main weakness is moderate specificity—it describes the general activity of peer review without enumerating the concrete sub-tasks involved. Trigger term coverage could also be expanded to include more natural user phrasings like 'manuscript', 'journal submission', or 'research critique'.

Suggestions

- Add more specific concrete actions such as 'evaluating methodology, checking argument coherence, assessing citation quality, identifying gaps in literature review'.
- Expand trigger terms to include natural variations like 'manuscript review', 'journal submission feedback', 'research critique', 'reviewer comments', and 'conference paper'.
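Applied together, those suggestions could yield a description field along these lines. This is an illustrative sketch only, not the skill's actual frontmatter:

```yaml
---
name: peer-review
description: >
  Conduct professional peer reviews of papers, theses, manuscripts, and
  journal or conference submissions: evaluate methodology, check argument
  coherence, assess citation quality, and identify gaps in the literature
  review. Use for pre-submission assessment, manuscript review, reviewer
  comments, research critique, or internal academic quality control.
---
```

Note how the rewrite enumerates concrete sub-tasks and folds in the natural trigger phrasings ('manuscript', 'journal submission', 'research critique') that the review flagged as missing.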

Dimension scores

Specificity: 2 / 3
Names the domain (peer review of papers/theses) and mentions some actions ('structured evaluations', 'improvement suggestions'), but doesn't list multiple concrete specific actions like identifying methodology flaws, checking citations, evaluating statistical analysis, etc.

Completeness: 3 / 3
Clearly answers both 'what' (conduct professional peer reviews providing structured evaluations and improvement suggestions) and 'when' (use when you need a pre-submission assessment, an internal review, or academic quality control) with explicit trigger guidance.

Trigger Term Quality: 2 / 3
Includes some relevant terms like 'peer review', 'papers', 'theses', 'pre-submission assessment', and 'academic quality control', but misses common natural variations users might say such as 'manuscript review', 'paper feedback', 'research critique', 'journal submission', or 'reviewer comments'.

Distinctiveness / Conflict Risk: 3 / 3
The description carves out a clear niche around academic peer review specifically, with distinct triggers like 'pre-submission assessment' and 'academic quality control' that are unlikely to conflict with general writing or editing skills.

Total: 10 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with a clear workflow and a useful copy-paste template for peer reviews. Its main weakness is moderate verbosity—the 'When to Use', 'Key Features', and 'Key Parameters' sections add bulk without proportional value, and the inline template likely duplicates the referenced template file. The workflow sequencing is well done for this type of analytical task.

Suggestions

- Remove or significantly trim the 'When to Use' and 'Key Features' sections; Claude doesn't need five bullet points explaining when peer review is useful.
- Move the 'Key Parameters / Criteria' section into `references/guide.md`, since it restates standard academic review concepts Claude already knows.
- Consider whether the full inline template is needed given it references `assets/peer_review_template.md`; a brief example excerpt with a pointer to the template file would be more token-efficient.
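The third suggestion might look like this inside SKILL.md. The excerpt below is a hypothetical sketch that assumes the template file exists at the referenced path; the sample issue content is invented for illustration:

```markdown
## Review template

Use the full structure in `assets/peer_review_template.md`. Each issue
entry follows this shape:

> **Issue:** Sampling procedure in Section 3 is underspecified.
> **Why it matters:** Readers cannot judge external validity.
> **Suggested fix:** State inclusion criteria and sample-size rationale.
> **Expected impact:** Resolves a likely major-revision request.
```

A short excerpt like this preserves the structured issue fields the review praised (issue, why it matters, suggested fix, expected impact) while delegating the full template to the asset file, which is the token-efficient progressive-disclosure pattern.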

Dimension scores

Conciseness: 2 / 3
The skill contains some unnecessary elaboration: the 'When to Use' section is overly detailed for Claude, and the 'Key Features' bullet list restates what the workflow already demonstrates. The key parameters/criteria section largely describes concepts Claude already understands (novelty, significance, rigor). However, the template and workflow sections are reasonably efficient.

Actionability: 3 / 3
The skill provides a complete, copy-paste-ready markdown template with specific sections and prompts to fill in. The review workflow is concrete with clear steps, and each issue format includes structured fields (issue, why it matters, suggested fix, expected impact). This is highly actionable guidance.

Workflow Clarity: 3 / 3
The 5-step review workflow is clearly sequenced from reading through evaluation, verification, and issue organization to recommendation. Each step has explicit sub-tasks. Since this is a non-destructive analytical task (reviewing a paper), the absence of validation/retry loops is appropriate: the workflow naturally builds from understanding to judgment.

Progressive Disclosure: 2 / 3
The skill references external files (`assets/peer_review_template.md` and `references/guide.md`) for templates and checklists, which is good progressive disclosure. However, the inline template example is quite long and duplicates what's presumably in the template file, and the 'Key Parameters / Criteria' section could be in the reference guide rather than inline.

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
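The single frontmatter warning can usually be cleared by nesting non-standard keys under a metadata block, as the check itself suggests. The sketch below is hypothetical: the key name `category` and its value are invented to show the shape of the fix:

```yaml
---
name: peer-review
description: Conduct professional peer reviews for papers or theses...
# A top-level key the spec does not define triggers the warning:
# category: academic-writing
# Nesting it under metadata instead satisfies frontmatter_unknown_keys:
metadata:
  category: academic-writing
---
```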

Repository: aipoch/medical-research-skills (Reviewed)
