
peer-review

Conduct professional peer reviews for papers or theses, providing structured evaluations and improvement suggestions; use when you need a pre-submission assessment, an internal review, or academic quality control.

Score: 80

Quality: 76% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/peer-review/SKILL.md"

Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is well-structured with a clear 'use when' clause that explicitly covers trigger scenarios, making it strong on completeness and distinctiveness. However, it could benefit from more specific concrete actions beyond 'structured evaluations and improvement suggestions', and from broader coverage of natural trigger terms users might employ when seeking academic review assistance.

Suggestions

- Add more specific concrete actions such as 'evaluate methodology, assess argument structure, check citation quality, identify logical gaps, and suggest revisions'.
- Expand trigger terms to include natural variations like 'manuscript review', 'paper feedback', 'research critique', 'journal submission', 'conference paper', or 'dissertation review'.
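Applying both suggestions might yield frontmatter like the following. This is an illustrative sketch, not the skill's actual file; the `name` field and YAML layout are assumptions:

```yaml
---
name: peer-review
description: >
  Conduct professional peer reviews for papers, theses, and manuscripts:
  evaluate methodology, assess argument structure, check citation quality,
  identify logical gaps, and suggest revisions. Use for pre-submission
  assessment, manuscript review, paper feedback, research critique,
  journal or conference submissions, dissertation review, internal
  review, or academic quality control.
---
```

The folded block scalar (`>`) keeps the long description readable in the file while still producing a single-line string for the discovery index.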

| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (peer reviews for papers/theses) and mentions some actions ('structured evaluations and improvement suggestions'), but doesn't list multiple specific concrete actions like identifying methodology flaws, checking citations, evaluating statistical analysis, or assessing argument structure. | 2 / 3 |
| Completeness | Clearly answers both 'what' (conduct professional peer reviews providing structured evaluations and improvement suggestions) and 'when' (pre-submission assessment, internal review, academic quality control) with explicit trigger guidance via the 'use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'peer review', 'papers', 'theses', 'pre-submission assessment', and 'academic quality control', but misses common natural variations users might say, such as 'manuscript review', 'paper feedback', 'research critique', 'journal submission', or 'reviewer comments'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche around academic peer review specifically, with distinct triggers like 'peer review', 'pre-submission assessment', and 'academic quality control' that are unlikely to conflict with general writing or editing skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, well-structured peer review skill with a highly actionable template and clear workflow. Its main weaknesses are moderate verbosity (it explains concepts Claude already knows, such as novelty, significance, and rigor, and its Key Features section largely duplicates the workflow) and inline content that would be better distributed to referenced files. Workflow clarity and actionability are its strong points.

Suggestions

- Remove or significantly trim the 'Key Features' section, since it largely restates what the workflow and template already demonstrate.
- Move the 'Key Parameters / Criteria' definitions to `references/guide.md`; Claude already understands concepts like novelty, rigor, and reproducibility, so only domain-specific thresholds or non-obvious criteria need to stay inline.
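One way to apply both suggestions is to keep the SKILL.md body lean and signal what each referenced file contains. A hypothetical sketch (the section names and step wording are illustrative; only the two file paths come from the review above):

```markdown
## Workflow
1. Read the full manuscript.
2. Evaluate it against the review criteria.
3. Classify each issue as major or minor.
4. Fill in the review template.
5. Give an overall recommendation.

## Resources
- `assets/peer_review_template.md`: copy-paste-ready review template with
  per-issue sub-fields (why it matters, suggested fix, expected impact).
- `references/guide.md`: criteria definitions, domain-specific thresholds,
  and the detailed review checklist.
```

Describing each referenced file in one line addresses the note below that the references are "mentioned but not clearly signaled with descriptions of what each contains".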

| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary elaboration: the 'When to Use' section spends 5 bullet points explaining obvious use cases, the 'Key Features' section largely restates what the workflow already shows, and the 'Key Parameters / Criteria' section explains concepts like novelty and rigor that Claude already understands. However, it is not egregiously verbose. | 2 / 3 |
| Actionability | The skill provides a complete, copy-paste-ready markdown template with specific sections and prompts to fill in. The workflow algorithm gives concrete sequential steps, and each issue in the template includes structured sub-fields (why it matters, suggested fix, expected impact). This is highly actionable guidance. | 3 / 3 |
| Workflow Clarity | The review workflow is clearly sequenced in 5 numbered steps with logical progression from reading to evaluation to issue organization to recommendation. The major/minor issue classification provides a clear triage checkpoint, and the template enforces structured output at each stage. Since this is a non-destructive analytical task, explicit validation/feedback loops are less critical. | 3 / 3 |
| Progressive Disclosure | The skill references external files (`assets/peer_review_template.md` and `references/guide.md`) for templates and checklists, which is good progressive disclosure. However, the main file inlines the full template, the workflow algorithm, and the key parameters section, so some of this content could be better split out, and the references are mentioned but not clearly signaled with descriptions of what each contains. | 2 / 3 |
| Total | | 10 / 12 (Passed) |

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
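The single `frontmatter_unknown_keys` warning typically means the frontmatter carries top-level keys the skill spec does not recognize. A common fix is to nest such keys under `metadata`. The custom key names below are hypothetical, chosen only to show the shape of the fix:

```yaml
---
name: peer-review
description: Conduct professional peer reviews for papers or theses...
metadata:
  # hypothetical custom keys, moved here from the top level
  author: aipoch
  category: Academic Writing
---
```

Re-running `npx tessl skill review` after the move should clear the warning, assuming no other unknown keys remain at the top level.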

Repository: aipoch/medical-research-skills (Reviewed)
