
grant-mock-reviewer

Simulates NIH study section peer review for grant proposals. Triggers when a user asks for a mock review, critique, or evaluation of a grant proposal before submission. Generates a structured critique using the official NIH scoring rubric (a 1-9 scale, where 1 is exceptional and 9 is poor), identifies weaknesses, provides actionable revision recommendations, and produces a comprehensive review summary modeled on an actual NIH Summary Statement.
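An illustrative trigger prompt (an assumed example, not taken from the skill's documentation): "Act as an NIH study section and give me a mock review of this R01 draft before I submit."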

Install with Tessl CLI

npx tessl i github:aipoch/medical-research-skills --skill grant-mock-reviewer

Does it follow best practices? Skill structure validation score: 77


Evaluation results

Mock NIH Peer Review: R01 Grant Proposal (46% · -18%)

Full NIH Summary Statement output

| Criteria | Without context | With context |
| --- | --- | --- |
| 1-9 scale direction | 87% | 0% |
| Five criteria scored | 100% | 100% |
| Correct score descriptors | 20% | 40% |
| MOCK SUMMARY STATEMENT header | 0% | 100% |
| Estimated Percentile | 0% | 0% |
| Resume and Summary section | 0% | 0% |
| Per-criterion strengths bullets | 100% | 100% |
| Per-criterion weaknesses bullets | 100% | 0% |
| Per-criterion narrative | 100% | 0% |
| Additional Review Considerations | 75% | 0% |
| Revision recommendations section | 100% | 100% |
| Priority-ordered recommendations | 90% | 100% |

Without context: $0.2447 · 1m 48s · 12 turns · 16 in / 5,089 out tokens

With context: $0.5163 · 1m 14s · 21 turns · 8,442 in / 3,732 out tokens
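From the criteria above, the expected output looks roughly like the sketch below. Section names are taken from the eval criteria; the five scored criteria are assumed to be the standard NIH review criteria (Significance, Investigator(s), Innovation, Approach, Environment), and every score shown is illustrative.

MOCK SUMMARY STATEMENT

Overall Impact: 4 (1-9 scale, lower is better)
Estimated Percentile: 30th

Resume and Summary of Discussion:
[one-paragraph synthesis of the panel's overall view]

Significance: Score 3
Strengths:
- ...
Weaknesses:
- ...
[short narrative paragraph]
(repeated for Investigator(s), Innovation, Approach, Environment)

Additional Review Considerations:
[human subjects, vertebrate animals, biohazards, etc.]

Revision Recommendations (priority-ordered):
1. ...
2. ...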

Grant Scoring Dashboard: Machine-Readable Output (100% · 85%)

Machine-readable JSON scores output

| Criteria | Without context | With context |
| --- | --- | --- |
| Uses scripts/main.py | 0% | 100% |
| --scores-only flag | 0% | 100% |
| --output flag used | 0% | 100% |
| Output is valid JSON | 100% | 100% |
| overall_impact field | 0% | 100% |
| priority_score field | 0% | 100% |
| criteria object present | 0% | 100% |
| Per-criterion score field | 0% | 100% |
| Per-criterion strengths field | 0% | 100% |
| Per-criterion weaknesses field | 0% | 100% |

Without context: $0.5049 · 2m 29s · 18 turns · 21 in / 9,163 out tokens

With context: $0.4577 · 1m 16s · 18 turns · 8,441 in / 3,599 out tokens
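A plausible invocation and output shape for this scenario, pieced together from the criteria above. The script path, the flags, and the top-level JSON field names all appear in the criteria; the positional argument, the criterion keys, and every value are illustrative assumptions.

python scripts/main.py proposal.md --scores-only --output scores.json

{
  "overall_impact": 5,
  "priority_score": 50,
  "criteria": {
    "significance": {
      "score": 4,
      "strengths": ["Addresses a well-documented gap in ..."],
      "weaknesses": ["Clinical impact is asserted rather than argued"]
    },
    "approach": {
      "score": 6,
      "strengths": ["Well-powered primary analysis"],
      "weaknesses": ["No contingency plan for participant attrition"]
    }
  }
}

The priority_score of 50 follows the NIH convention of overall impact × 10 (range 10-90); whether the script actually derives it that way is an assumption.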

Assessing Grant Revision Impact (100% · 64%)

Before/after proposal revision comparison

| Criteria | Without context | With context |
| --- | --- | --- |
| Uses scripts/main.py | 0% | 100% |
| --compare flag | 0% | 100% |
| --original flag | 0% | 100% |
| --revised flag | 0% | 100% |
| Output saved to file | 0% | 100% |
| Original impact score shown | 100% | 100% |
| Revised impact score shown | 100% | 100% |
| Criterion-level changes shown | 100% | 100% |
| Direction indicators present | 100% | 100% |

Without context: $0.5348 · 2m 59s · 19 turns · 21 in / 10,022 out tokens

With context: $0.4641 · 1m 34s · 17 turns · 8,438 in / 4,585 out tokens
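A sketch of the comparison mode, again assembled from the criteria: the --compare, --original, and --revised flags appear above, while the filenames and the reuse of --output are assumptions.

python scripts/main.py --compare --original proposal_v1.md --revised proposal_v2.md --output comparison.json

Per the criteria, the comparison output reports the original and revised overall impact scores, criterion-level score changes, and direction indicators (e.g. improved / worsened / unchanged).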

Evaluated with agent: Claude Code

