Simulates NIH study section peer review for grant proposals. Triggers when user wants mock review, critique, or evaluation of a grant proposal before submission. Generates structured critique using official NIH scoring rubric (1-9 scale), identifies weaknesses, provides actionable revision recommendations, and produces a comprehensive review summary similar to actual NIH Summary Statement.
Install with Tessl CLI
```shell
npx tessl i github:aipoch/medical-research-skills --skill grant-mock-reviewer77
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
```shell
npx tessl skill review --optimize ./path/to/skill
```
A simulated NIH study section reviewer that provides structured, rigorous critique of grant proposals using the official NIH scoring criteria and methodology.
```shell
# Full mock review with Summary Statement
python3 scripts/main.py --input proposal.pdf --format pdf --output review.md

# Review Specific Aims only
python3 scripts/main.py --input aims.pdf --section aims --output aims_review.md

# Targeted review (specific criterion focus)
python3 scripts/main.py --input proposal.pdf --focus approach --output approach_critique.md

# Generate NIH-style scores only
python3 scripts/main.py --input proposal.pdf --scores-only --output scores.json

# Compare before/after revision
python3 scripts/main.py --original original.pdf --revised revised.pdf --compare
```

```python
from scripts.main import GrantMockReviewer

reviewer = GrantMockReviewer()
result = reviewer.review(
    proposal_text=proposal_content,
    grant_type="R01",
    section="full",
)
print(result.summary_statement)
print(result.scores)
```

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| --input | string | - | Yes | Path to proposal file (PDF, DOCX, TXT, MD) |
| --format | string | auto | No | Input file format (pdf, docx, txt, md) |
| --section | string | full | No | Section to review (full, aims, significance, innovation, approach) |
| --grant-type | string | R01 | No | Grant mechanism (R01, R21, R03, K99, F32) |
| --focus | string | - | No | Focus on specific criterion (significance, investigator, innovation, approach, environment) |
| --scores-only | flag | false | No | Output scores only (JSON) |
| --output, -o | string | stdout | No | Output file path |
| --original | string | - | No | Original proposal for comparison |
| --revised | string | - | No | Revised proposal for comparison |
| --compare | flag | false | No | Enable comparison mode |
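The `--scores-only` flag writes scores as JSON, which is convenient for downstream tooling. A minimal parsing sketch follows; the field names (`overall_impact`, `criteria`) are assumptions for illustration and should be checked against the actual output of your run:

```python
import json

# Hypothetical sketch of parsing the JSON written by --scores-only.
# The schema ("overall_impact", "criteria") is assumed, not documented here.
sample = '{"overall_impact": 3, "criteria": {"significance": 2, "approach": 4}}'
scores = json.loads(sample)

print(scores["overall_impact"])
# Find the weakest criterion (highest score is worst on the NIH 1-9 scale)
worst = max(scores["criteria"], key=scores["criteria"].get)
print(worst)
```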
The single most important score, reflecting the likelihood that the project will exert a sustained, powerful influence on the research field.
| Score | Descriptor | Likelihood of Funding |
|---|---|---|
| 1 | Exceptional | Very High |
| 2 | Outstanding | High |
| 3 | Excellent | Good |
| 4 | Very Good | Moderate |
| 5 | Good | Low-Moderate |
| 6 | Satisfactory | Low |
| 7 | Fair | Very Low |
| 8 | Marginal | Unlikely |
| 9 | Poor | Not Fundable |
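The 1-9 scale maps each integer score to a fixed descriptor, so it can be represented as a simple lookup. The helper below is a sketch, not part of the skill's API:

```python
# NIH 1-9 overall impact scale as a lookup table (sketch; not the skill's API).
DESCRIPTORS = {
    1: "Exceptional", 2: "Outstanding", 3: "Excellent",
    4: "Very Good", 5: "Good", 6: "Satisfactory",
    7: "Fair", 8: "Marginal", 9: "Poor",
}

def describe(score: int) -> str:
    """Return the NIH descriptor for an integer score from 1 to 9."""
    if score not in DESCRIPTORS:
        raise ValueError("NIH scores must be integers from 1 to 9")
    return DESCRIPTORS[score]
```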
```
Overall Impact: [Score] - [Descriptor]

Criterion Scores:
- Significance: [Score]
- Investigator(s): [Score]
- Innovation: [Score]
- Approach: [Score]
- Environment: [Score]
```

- Bullet-point list of major strengths by criterion
- Bullet-point list of major weaknesses by criterion
- Paragraph-form critique for each criterion following NIH style
- Complete narrative synthesis of the review
- Prioritized, actionable suggestions for improvement
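Pieced together from the Python example and the output components above, the review result could be modeled roughly as follows. Only `summary_statement` and `scores` appear in the documented API; the remaining fields are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Rough sketch of a result container matching the outputs listed above.
# summary_statement and scores come from the documented example; the
# other fields are assumptions for illustration.
@dataclass
class ReviewResult:
    summary_statement: str
    scores: dict[str, int]  # criterion name -> 1-9 score
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)

result = ReviewResult(
    summary_statement="Overall Impact: 3 - Excellent",
    scores={"significance": 2, "approach": 4},
)
print(result.scores["approach"])
```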
High - Requires deep understanding of NIH peer review processes, consistent application of standardized scoring rubrics, and generation of clinically and scientifically accurate critiques across diverse research domains.
Review Required: Human verification recommended before deployment in production settings.
- references/nih_scoring_rubric.md - Complete NIH scoring guidelines
- references/review_criteria_explained.md - Detailed criterion descriptions
- references/common_weaknesses_catalog.md - Database of typical proposal flaws
- references/summary_statement_templates.md - NIH-style statement templates
- references/score_calibration_guide.md - Score assignment guidelines

1.0.0 - Initial release with NIH R01/R21/R03 support
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
```shell
# Python dependencies
pip install -r requirements.txt
```
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.