Simulates NIH study section peer review for grant proposals.
A simulated NIH study section reviewer that provides structured, rigorous critique of grant proposals using the official NIH scoring criteria and methodology.
The packaged entry point is scripts/main.py; references/ contains task-specific guidance.

Prerequisites:
- Python: 3.10+ (the repository baseline for current packaged skills)
- dataclasses: version unspecified; declared in requirements.txt
- enum: version unspecified; declared in requirements.txt
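The prerequisites name `enum` and `dataclasses` (both in the Python standard library since 3.7, so the requirements.txt entries need no installation). A minimal sketch of how the skill might model the documented grant mechanisms with `enum` — the class and member names are illustrative, not the actual identifiers in scripts/main.py:

```python
from enum import Enum

# Hypothetical model of the grant mechanisms listed in the parameter
# table (--grant-type). Names are illustrative only.
class GrantType(Enum):
    R01 = "R01"
    R21 = "R21"
    R03 = "R03"
    K99 = "K99"
    F32 = "F32"

# Value lookup maps a CLI string straight to a member.
print(GrantType("R01").name)  # → R01
```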
cd "20260318/scientific-skills/Academic Writing/grant-mock-reviewer"
python -m py_compile scripts/main.py
python scripts/main.py --helpExample run plan:
CONFIG block or documented parameters if the script uses fixed settings.python scripts/main.py with the validated inputs.See ## Workflow above for related details.
scripts/main.py.references/ contains supporting rules, prompts, or checklists.Use this command to verify that the packaged script entry point can be parsed before deeper execution.
python -m py_compile scripts/main.pyUse these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.
```bash
python -m py_compile scripts/main.py
python scripts/main.py --help
python scripts/main.py -h
```

```bash
# Full mock review with Summary Statement
python3 scripts/main.py --input proposal.pdf --format pdf --output review.md

# Review Specific Aims only
python3 scripts/main.py --input aims.pdf --section aims --output aims_review.md

# Targeted review (specific criterion focus)
python3 scripts/main.py --input proposal.pdf --focus approach --output approach_critique.md

# Generate NIH-style scores only
python3 scripts/main.py --input proposal.pdf --scores-only --output scores.json

# Compare before/after revision
python3 scripts/main.py --original original.pdf --revised revised.pdf --compare
```

Python API usage:

```python
from scripts.main import GrantMockReviewer

reviewer = GrantMockReviewer()
result = reviewer.review(
    proposal_text=proposal_content,
    grant_type="R01",
    section="full"
)
print(result.summary_statement)
print(result.scores)
```

| Parameter | Type | Default | Required | Description |
|---|---|---|---|---|
| `--input` | string | - | Yes | Path to proposal file (PDF, DOCX, TXT, MD) |
| `--format` | string | auto | No | Input file format (pdf, docx, txt, md) |
| `--section` | string | full | No | Section to review (full, aims, significance, innovation, approach) |
| `--grant-type` | string | R01 | No | Grant mechanism (R01, R21, R03, K99, F32) |
| `--focus` | string | - | No | Focus on specific criterion (significance, investigator, innovation, approach, environment) |
| `--scores-only` | flag | false | No | Output scores only (JSON) |
| `--output`, `-o` | string | stdout | No | Output file path |
| `--original` | string | - | No | Original proposal for comparison |
| `--revised` | string | - | No | Revised proposal for comparison |
| `--compare` | flag | false | No | Enable comparison mode |
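The flag table above can be expressed as an `argparse` declaration. This is a sketch of how scripts/main.py *could* wire up the documented interface, not its actual source; it mirrors only the parameter table:

```python
import argparse

# Illustrative CLI declaration matching the documented parameter table.
# The real scripts/main.py may differ.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="NIH-style mock grant review")
    p.add_argument("--input", help="Path to proposal file (PDF, DOCX, TXT, MD)")
    p.add_argument("--format", default="auto",
                   choices=["auto", "pdf", "docx", "txt", "md"])
    p.add_argument("--section", default="full",
                   choices=["full", "aims", "significance", "innovation", "approach"])
    p.add_argument("--grant-type", default="R01",
                   choices=["R01", "R21", "R03", "K99", "F32"])
    p.add_argument("--focus",
                   choices=["significance", "investigator", "innovation",
                            "approach", "environment"])
    p.add_argument("--scores-only", action="store_true")
    p.add_argument("--output", "-o", default=None,
                   help="Output file path (default: stdout)")
    p.add_argument("--original", help="Original proposal for comparison")
    p.add_argument("--revised", help="Revised proposal for comparison")
    p.add_argument("--compare", action="store_true")
    return p

args = build_parser().parse_args(["--input", "proposal.pdf", "--scores-only"])
print(args.grant_type, args.scores_only)  # defaults and flags applied
```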
The single most important score, reflecting the likelihood that the project will exert a sustained, powerful influence on the research field.
| Score | Descriptor | Likelihood of Funding |
|---|---|---|
| 1 | Exceptional | Very High |
| 2 | Outstanding | High |
| 3 | Excellent | Good |
| 4 | Very Good | Moderate |
| 5 | Good | Low-Moderate |
| 6 | Satisfactory | Low |
| 7 | Fair | Very Low |
| 8 | Marginal | Unlikely |
| 9 | Poor | Not Fundable |
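The 9-point scale above is a fixed lookup, so a `--scores-only` consumer could attach descriptors to raw scores with a simple mapping. A minimal sketch (the `describe` helper is hypothetical, not part of the skill's API):

```python
# The 9-point NIH scale from the table above as a lookup table.
SCORE_DESCRIPTORS = {
    1: ("Exceptional", "Very High"),
    2: ("Outstanding", "High"),
    3: ("Excellent", "Good"),
    4: ("Very Good", "Moderate"),
    5: ("Good", "Low-Moderate"),
    6: ("Satisfactory", "Low"),
    7: ("Fair", "Very Low"),
    8: ("Marginal", "Unlikely"),
    9: ("Poor", "Not Fundable"),
}

def describe(score: int) -> str:
    # Hypothetical helper: format a raw score the way the
    # Summary Statement header does.
    if score not in SCORE_DESCRIPTORS:
        raise ValueError("NIH scores are whole numbers from 1 to 9")
    descriptor, funding = SCORE_DESCRIPTORS[score]
    return f"{score} - {descriptor} (funding likelihood: {funding})"

print(describe(2))  # → 2 - Outstanding (funding likelihood: High)
```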
Overall Impact: [Score] - [Descriptor]
Criterion Scores:
- Significance: [Score]
- Investigator(s): [Score]
- Innovation: [Score]
- Approach: [Score]
- Environment: [Score]

The review output also includes:
- Bullet-point list of major strengths by criterion
- Bullet-point list of major weaknesses by criterion
- Paragraph-form critique for each criterion following NIH style
- Complete narrative synthesis of the review
- Prioritized, actionable suggestions for improvement
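Put together, the result of `reviewer.review()` in the Python example above might carry these sections as fields. Only `summary_statement` and `scores` are documented; the remaining field names below are assumptions that mirror the output list here:

```python
from dataclasses import dataclass, field

# Hypothetical shape of the review() return value. summary_statement
# and scores come from the documented API; the other fields only
# illustrate the output sections listed above.
@dataclass
class ReviewResult:
    summary_statement: str
    scores: dict[str, int]
    strengths: dict[str, list[str]] = field(default_factory=dict)
    weaknesses: dict[str, list[str]] = field(default_factory=dict)
    recommendations: list[str] = field(default_factory=list)

result = ReviewResult(
    summary_statement="Overall Impact: 3 - Excellent",
    scores={"significance": 2, "approach": 4},
)
print(result.scores["approach"])  # → 4
```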
High - Requires deep understanding of NIH peer review processes, ability to apply standardized scoring rubrics consistently, and generation of clinically/scientifically accurate critique across diverse research domains.
Review Required: Human verification recommended before deployment in production settings.
- references/nih_scoring_rubric.md - Complete NIH scoring guidelines
- references/review_criteria_explained.md - Detailed criterion descriptions
- references/common_weaknesses_catalog.md - Database of typical proposal flaws
- references/summary_statement_templates.md - NIH-style statement templates
- references/score_calibration_guide.md - Score assignment guidelines

Version 1.0.0 - Initial release with NIH R01/R21/R03 support
| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
```bash
# Python dependencies
pip install -r requirements.txt
```

Every final response should make these items explicit when they are relevant:
If scripts/main.py fails, report the failure point, summarize what can still be completed safely, and provide a manual fallback.

This skill accepts requests that match the documented purpose of grant-mock-reviewer and include enough context to complete the workflow safely. Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:

> grant-mock-reviewer only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.
Use the following fixed structure for non-trivial requests:
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.