
# research-grants

Write competitive research proposals for NSF, NIH, DOE, DARPA, and Taiwan NSTC. Agency-specific formatting, review criteria, budget preparation, broader impacts, significance statements, innovation narratives, and compliance with submission requirements.


| Check | Result | Notes |
| --- | --- | --- |
| Quality | 51% | Does it follow best practices? |
| Impact | Pending | No eval scenarios have been run |
| Security (by Snyk) | Passed | No known issues |

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./scientific-skills/research-grants/SKILL.md
```

## Quality

### Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and domain-relevant trigger terms that researchers would naturally use. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. Adding trigger guidance would elevate this from good to excellent.

**Suggestions**

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for help writing grant proposals, preparing funding applications, or mentions specific agencies like NSF, NIH, DOE, DARPA, or NSTC.'
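A sketch of what that revised description could look like in the skill's frontmatter (this assumes the YAML `name`/`description` frontmatter convention commonly used for SKILL.md files; verify field names against the skill spec):

```yaml
---
name: research-grants
description: >
  Write competitive research proposals for NSF, NIH, DOE, DARPA, and Taiwan
  NSTC: agency-specific formatting, review criteria, budget preparation,
  broader impacts, significance statements, and submission compliance.
  Use when the user asks for help writing grant proposals, preparing funding
  applications, or mentions agencies like NSF, NIH, DOE, DARPA, or NSTC.
---
```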

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: formatting, review criteria, budget preparation, broader impacts, significance statements, innovation narratives, and compliance with submission requirements. Also names specific agencies (NSF, NIH, DOE, DARPA, Taiwan NSTC). | 3 / 3 |
| Completeness | The 'what' is well-covered with specific actions and agencies, but there is no explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the nature of the tasks described, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'research proposals', 'NSF', 'NIH', 'DOE', 'DARPA', 'budget preparation', 'broader impacts', 'significance statements'. These are terms researchers naturally use when seeking grant writing help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche targeting competitive research grant proposals for specific funding agencies. The combination of agency names and grant-specific terminology (broader impacts, significance statements) makes it very unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **11 / 12** |

Result: Passed

### Implementation — 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a comprehensive grant writing textbook rather than an actionable skill for Claude. It is extremely verbose, explaining well-known concepts at length while providing almost no concrete examples of actual proposal text, templates with fill-in patterns, or executable guidance. The content would benefit enormously from being condensed to a concise overview with specific examples, while moving the extensive reference material into the referenced subsidiary files.

**Suggestions**

- Reduce the main SKILL.md to ~100 lines by moving agency-specific details, review criteria, common mistakes, and writing principles into the referenced files (references/*.md) that already exist.
- Add concrete, copy-paste-ready examples: a sample NSF Project Summary paragraph, a sample NIH Specific Aims opening, a sample budget justification entry, showing actual proposal language rather than describing what to write.
- Remove generic writing advice Claude already knows (use active voice, avoid jargon, be clear, use topic sentences) and focus only on grant-specific knowledge that Claude wouldn't have.
- Add a decision-tree or quick-reference table at the top: given agency + mechanism, here are the exact page limits, required sections, and key differentiators, making the skill immediately actionable.
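As an illustration, such a quick-reference block at the top of SKILL.md might look like the following. The figures shown are the commonly cited defaults for standard mechanisms (NSF PAPPG, NIH SF424); actual limits vary by solicitation and should be verified against the current guidelines before use:

```markdown
| Agency | Mechanism | Core narrative limit | Key review criteria |
| --- | --- | --- | --- |
| NSF | Standard research grant | 15-page Project Description | Intellectual Merit, Broader Impacts |
| NIH | R01 | 1-page Specific Aims + 12-page Research Strategy | Significance, Innovation, Approach |
```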

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at 500+ lines. Extensively explains concepts Claude already knows (the equivalent of explaining what a PDF is: what NSF is, what broader impacts are, basic writing advice like 'use active voice' and 'avoid jargon'). Lists of common mistakes, writing principles, and persuasion strategies are generic knowledge that waste tokens. The 'When to Use This Skill' section is entirely unnecessary. | 1 / 3 |
| Actionability | Despite its length, the skill provides almost no concrete, executable guidance. There are no actual proposal text examples, no template fill-in patterns, no specific language to use. It reads as a textbook overview of grant writing rather than actionable instructions. The only concrete command is the schematic generation script, which is tangential to grant writing itself. | 1 / 3 |
| Workflow Clarity | The Phase 1-5 workflow for grant development is reasonably well-sequenced with clear phases and outputs. However, it lacks validation checkpoints (e.g., no explicit 'verify compliance with page limits before proceeding' or 'check all review criteria are addressed'). The workflow is more of a project management timeline than an operational workflow for Claude to follow when writing. | 2 / 3 |
| Progressive Disclosure | References to external files (references/*.md, assets/*.md) are present and well-organized, which is good. However, the main SKILL.md contains enormous amounts of content that should be in those reference files instead: agency-specific details, review criteria, proposal types, and common mistakes are all inline rather than delegated, creating a monolithic document. | 2 / 3 |
| **Total** | | **6 / 12** |

Result: Passed

### Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (941 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | | **9 / 11** |

Result: Passed
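The two warnings above are cheap to pre-check locally before re-running the full review. A minimal sketch (a hypothetical helper, not part of the Tessl CLI; the 500-line threshold is an assumed default, and the frontmatter check only approximates the validator's `metadata.version` rule):

```python
import re
from pathlib import Path


def lint_skill(path: str, max_lines: int = 500) -> list[str]:
    """Rough local pre-check mirroring two of the validator's warnings.

    Hypothetical helper; the authoritative check is
    `npx tessl skill review`. `max_lines` is an assumed threshold.
    """
    text = Path(path).read_text(encoding="utf-8")
    warnings = []

    # Warn when SKILL.md grows past the assumed line budget.
    line_count = len(text.splitlines())
    if line_count > max_lines:
        warnings.append(
            f"skill_md_line_count: SKILL.md is long ({line_count} lines); "
            "consider splitting into references/ and linking"
        )

    # Look for a top-level `version:` key inside the YAML frontmatter block,
    # as a rough stand-in for the validator's `metadata.version` check.
    m = re.match(r"---\n(.*?)\n---", text, flags=re.S)
    frontmatter = m.group(1) if m else ""
    if not re.search(r"^\s*version\s*:", frontmatter, flags=re.M):
        warnings.append("metadata_version: 'metadata.version' is missing")

    return warnings
```

Running this over the reviewed SKILL.md would reproduce both warnings before submission, so fixes can be batched into a single review cycle.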

**Repository:** K-Dense-AI/claude-scientific-skills (Reviewed)
