Simulates NIH study section peer review for grant proposals. Triggers when the user wants a mock review, critique, or evaluation of a grant proposal before submission. Generates a structured critique using the official NIH scoring rubric (1-9 scale), identifies weaknesses, provides actionable revision recommendations, and produces a comprehensive review summary similar to an actual NIH Summary Statement.
## Install with Tessl CLI

```shell
npx tessl i github:aipoch/medical-research-skills --skill grant-mock-reviewer77
```
## Does it follow best practices?

If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```

## Validation for skill structure
### Full NIH Summary Statement output

| Check | Without context | With context |
| --- | --- | --- |
| 1-9 scale direction | 87% | 0% |
| Five criteria scored | 100% | 100% |
| Correct score descriptors | 20% | 40% |
| MOCK SUMMARY STATEMENT header | 0% | 100% |
| Estimated Percentile | 0% | 0% |
| Resume and Summary section | 0% | 0% |
| Per-criterion strengths bullets | 100% | 100% |
| Per-criterion weaknesses bullets | 100% | 0% |
| Per-criterion narrative | 100% | 0% |
| Additional Review Considerations | 75% | 0% |
| Revision recommendations section | 100% | 100% |
| Priority-ordered recommendations | 90% | 100% |

- Without context: $0.2447 · 1m 48s · 12 turns · 16 in / 5,089 out tokens
- With context: $0.5163 · 1m 14s · 21 turns · 8,442 in / 3,732 out tokens
### Machine-readable JSON scores output

| Check | Without context | With context |
| --- | --- | --- |
| Uses `scripts/main.py` | 0% | 100% |
| `--scores-only` flag | 0% | 100% |
| `--output` flag used | 0% | 100% |
| Output is valid JSON | 100% | 100% |
| `overall_impact` field | 0% | 100% |
| `priority_score` field | 0% | 100% |
| `criteria` object present | 0% | 100% |
| Per-criterion `score` field | 0% | 100% |
| Per-criterion `strengths` field | 0% | 100% |
| Per-criterion `weaknesses` field | 0% | 100% |

- Without context: $0.5049 · 2m 29s · 18 turns · 21 in / 9,163 out tokens
- With context: $0.4577 · 1m 16s · 18 turns · 8,441 in / 3,599 out tokens
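The checks above imply a scores document with `overall_impact` and `priority_score` fields plus a `criteria` object whose entries carry `score`, `strengths`, and `weaknesses`. A minimal sketch of what that JSON might look like — the field names come from the eval checks, while the criterion name and all values are invented for illustration:

```python
import json

# Sketch of the scores JSON implied by the eval checks.
# Field names (overall_impact, priority_score, criteria, score,
# strengths, weaknesses) come from the checks above; the criterion
# name and every value here are invented examples.
sample = {
    "overall_impact": 4,
    "priority_score": 40,
    "criteria": {
        "Significance": {
            "score": 3,
            "strengths": ["Addresses an important clinical gap"],
            "weaknesses": ["Impact on practice is not quantified"],
        },
    },
}

# Round-trip through the json module to confirm the structure
# serializes to valid JSON, as the "Output is valid JSON" check requires.
text = json.dumps(sample, indent=2)
parsed = json.loads(text)
print(text)
```

The actual script may name or nest fields differently; this only illustrates the shape the checks are testing for.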
### Before/after proposal revision comparison

| Check | Without context | With context |
| --- | --- | --- |
| Uses `scripts/main.py` | 0% | 100% |
| `--compare` flag | 0% | 100% |
| `--original` flag | 0% | 100% |
| `--revised` flag | 0% | 100% |
| Output saved to file | 0% | 100% |
| Original impact score shown | 100% | 100% |
| Revised impact score shown | 100% | 100% |
| Criterion-level changes shown | 100% | 100% |
| Direction indicators present | 100% | 100% |

- Without context: $0.5348 · 2m 59s · 19 turns · 21 in / 10,022 out tokens
- With context: $0.4641 · 1m 34s · 17 turns · 8,438 in / 4,585 out tokens
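The comparison checks above expect criterion-level changes with direction indicators. A minimal sketch of how such indicators could be derived from two sets of scores — the criterion names and scores are invented, and this is not the skill's actual implementation; note that on the NIH 1-9 scale a lower score is better:

```python
# Hypothetical before/after comparison. Criterion names and scores
# are invented examples; on the NIH 1-9 scale, lower is better,
# so a decrease counts as an improvement.
original = {"Significance": 4, "Approach": 6, "Innovation": 5}
revised = {"Significance": 3, "Approach": 4, "Innovation": 5}

def direction(before: int, after: int) -> str:
    """Classify a score change on a lower-is-better scale."""
    if after < before:
        return "improved"
    if after > before:
        return "worsened"
    return "unchanged"

for name in original:
    print(f"{name}: {original[name]} -> {revised[name]} "
          f"({direction(original[name], revised[name])})")
```

The real script's output format and labels may differ; this only shows the kind of per-criterion change reporting the checks verify.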
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.