Write competitive research proposals for NSF, NIH, DOE, DARPA, and Taiwan NSTC. Agency-specific formatting, review criteria, budget preparation, broader impacts, significance statements, innovation narratives, and compliance with submission requirements.
Overall: 51% (does it follow best practices?)
Impact: not yet scored; no eval scenarios have been run
Checks: Passed (no known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/research-grants/SKILL.md`

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and distinctiveness, naming concrete agencies and grant-writing tasks that serve as natural trigger terms. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. Adding trigger guidance would elevate this from good to excellent.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for help writing, reviewing, or improving grant proposals, funding applications, or research submissions for federal or international agencies.'
- Consider adding common user phrasings like 'grant writing', 'funding application', 'specific aims', or 'project narrative' to broaden trigger-term coverage. A sketch applying both suggestions follows this list.
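For illustration, a minimal sketch of how the frontmatter description could fold in both suggestions, assuming the usual SKILL.md frontmatter fields (the skill name is taken from the path above; the wording is illustrative, not prescriptive):

```markdown
---
# Illustrative rewrite; field names assume the standard SKILL.md frontmatter.
name: research-grants
description: >
  Write competitive research proposals for NSF, NIH, DOE, DARPA, and Taiwan
  NSTC: agency-specific formatting, review criteria, budget preparation,
  broader impacts, significance statements, innovation narratives, and
  submission compliance. Use when the user asks for help with grant writing,
  funding applications, specific aims, project narratives, or reviewing and
  improving proposals for federal or international agencies.
---
```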
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: formatting, review criteria, budget preparation, broader impacts, significance statements, innovation narratives, and compliance with submission requirements. Also names specific agencies (NSF, NIH, DOE, DARPA, Taiwan NSTC). | 3 / 3 |
| Completeness | The 'what' is well covered with specific actions and agencies, but there is no explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the nature of the tasks described, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'research proposals', 'NSF', 'NIH', 'DOE', 'DARPA', 'budget preparation', 'broader impacts', 'significance statements'. These are terms researchers naturally use when seeking grant-writing help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche targeting competitive research grant proposals for specific funding agencies. The combination of agency names (NSF, NIH, DOE, DARPA, Taiwan NSTC) and grant-specific terminology (broader impacts, review criteria) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a comprehensive grant writing textbook rather than an efficient, actionable skill file for Claude. It is extremely verbose, explaining basic writing principles and concepts Claude already knows, while failing to provide concrete, executable templates or example text. The structure references external files appropriately but duplicates much of what should be in those files inline, resulting in a bloated main document that wastes context window tokens.
Suggestions
- Reduce content by 70-80%: remove generic writing advice (persuasive argumentation, active voice, visual design principles), lists of common mistakes, and explanations of concepts Claude already knows. Focus only on agency-specific requirements and constraints that Claude wouldn't know.
- Add concrete, copy-paste-ready templates with example text for key sections (e.g., a complete NSF Project Summary example, an NIH Specific Aims page with actual sample language, a budget justification paragraph).
- Move all agency-specific detail sections into the referenced files (references/nsf_guidelines.md, etc.) and keep only a brief comparison table in the main SKILL.md showing key differences (page limits, review criteria weights, budget formats).
- Add explicit validation checkpoints to the workflow, such as 'Verify page count compliance before internal review' and 'Cross-check budget line items against activities described in each aim before finalization.' A sketch of the slimmed-down structure follows this list.
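As a rough sketch of what the last two suggestions could look like in practice (the page limits and secondary criteria are placeholders rather than verified agency figures, and the file names simply mirror the references/ paths mentioned above):

```markdown
<!-- Sketch only: <limit> and <criteria> are placeholders to be filled from each agency's current guidelines. -->
## Agency quick reference (full details in references/)

| Agency | Narrative page limit | Core review criteria | Details |
|---|---|---|---|
| NSF | <limit> | Intellectual Merit, Broader Impacts | references/nsf_guidelines.md |
| NIH | <limit> | Significance, Innovation, Approach | references/nih_guidelines.md |
| DOE | <limit> | <criteria> | references/doe_guidelines.md |

## Validation checkpoints

- Verify page-count and formatting compliance before internal review.
- Cross-check budget line items against the activities described in each aim before finalization.
```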
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines. Extensively explains concepts Claude already knows (what a PDF is, what active voice means, what a Gantt chart is, basic writing advice like 'use strong verbs'). Lists of common mistakes, writing principles, and persuasive argumentation strategies are generic knowledge that wastes token budget. The 'When to Use This Skill' section is entirely unnecessary. | 1 / 3 |
| Actionability | Despite its length, the skill provides almost no executable, copy-paste-ready content. It's overwhelmingly descriptive and advisory rather than instructional. There are no concrete proposal templates with actual text, no example passages showing good vs. bad writing, and no specific formatting commands. The one code block is for generating schematics, not for grant writing itself. The content reads like a textbook chapter, not actionable instructions. | 1 / 3 |
| Workflow Clarity | The 5-phase workflow for grant development is clearly sequenced with outputs listed for each phase, which is good. However, there are no validation checkpoints or feedback loops within the writing process itself (e.g., no 'verify compliance with page limits before proceeding' or 'check alignment between aims and budget'). The workflow is more of a project management timeline than an operational procedure with decision points. | 2 / 3 |
| Progressive Disclosure | References to external files (references/*.md, assets/*.md, scripts/) are well signaled and one level deep, which is good. However, no bundle files were provided, so these references may be broken. The main SKILL.md itself is a monolithic wall of text that should have pushed most agency-specific details into the referenced files rather than duplicating extensive content inline. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (941 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |