
grant-proposal

Draft a structured grant proposal from research ideas and literature. Supports KAKENHI (Japan), NSF (US), NSFC (China, including 面上/青年/优青/杰青/海外优青/重点), ERC (EU), DFG (Germany), SNSF (Switzerland), ARC (Australia), NWO (Netherlands), and generic formats. Use when user says "write grant", "grant proposal", "申請書", "write KAKENHI", "科研費", "基金申请", "写基金", "NSF proposal", or wants to turn research ideas into a funding application.


Quality: 70% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Critical. Do not install without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/grant-proposal/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its purpose, lists specific supported funding agencies with impressive detail (including NSFC sub-categories), and provides comprehensive multilingual trigger terms. It uses proper third-person voice, has an explicit 'Use when' clause, and occupies a clearly distinct niche that would not conflict with other skills.

Specificity (3/3): Lists a concrete action ('Draft a structured grant proposal from research ideas and literature') and enumerates multiple specific funding agencies with sub-types (e.g., NSFC sub-categories like 面上/青年/优青/杰青/海外优青/重点). This is highly specific about what the skill does.

Completeness (3/3): Clearly answers 'what' (draft structured grant proposals from research ideas, supporting multiple funding agencies) and 'when' (explicit 'Use when...' clause with specific trigger phrases and a general condition about turning research ideas into funding applications).

Trigger Term Quality (3/3): Excellent coverage of natural trigger terms across multiple languages: 'write grant', 'grant proposal', '申請書', '科研費', '基金申请', '写基金', 'NSF proposal', 'funding application', plus agency names like KAKENHI, NSF, NSFC, ERC, DFG, etc. These are terms users would naturally say.

Distinctiveness / Conflict Risk (3/3): Highly distinctive niche: grant proposal writing for specific funding agencies. The combination of agency names (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) and multilingual trigger terms makes it very unlikely to conflict with other skills.

Total: 12/12 (Passed)

Implementation: 39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill has an exceptionally well-designed workflow with clear phases, checkpoints, state persistence, and error recovery — the workflow clarity is its strongest dimension. However, it is severely undermined by its massive length: grant type specifications, drafting guidelines, and sub-skill orchestration details are all inlined into a single monolithic file, making it extremely token-inefficient. The actionability is moderate — it orchestrates sub-skills well but lacks concrete examples of actual grant prose or templates.

Suggestions:

- Extract each grant type specification (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) into separate reference files (e.g., `grant-types/KAKENHI.md`) and reference them from the main SKILL.md with one-line links.
- Remove explanations of concepts Claude already knows, such as the difference between grants and papers, what 'independently valuable aims' means, and general grant-writing philosophy; these add ~50 lines of unnecessary content.
- Add a concrete example of good grant prose output (e.g., a sample gap statement, a sample aim paragraph) rather than just describing what to write; this would significantly improve actionability.
- Move the detailed Codex MCP prompt templates and LaTeX output structure into separate reference files to reduce the main skill to a concise workflow overview.
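The first and last suggestions amount to a progressive-disclosure layout. A minimal sketch of what that split could look like, with illustrative file names (the repository's actual structure is not shown on this page):

```
skills/grant-proposal/
├── SKILL.md              # concise workflow overview with one-line links
├── grant-types/
│   ├── KAKENHI.md        # page limits, sections, cultural norms
│   ├── NSFC.md           # 面上/青年/优青/杰青/海外优青/重点 sub-types
│   └── ...               # NSF, ERC, DFG, SNSF, ARC, NWO
└── references/
    ├── codex-prompts.md  # Codex MCP prompt templates
    └── latex-output.md   # LaTeX output structure
```

The main SKILL.md would then carry only pointers such as "For KAKENHI formatting rules, read grant-types/KAKENHI.md", so per-agency detail is loaded on demand rather than on every invocation.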

Conciseness (1/3): Extremely verbose at ~500+ lines. Explains concepts Claude already knows (e.g., 'Grant proposals argue for future work (feasibility + potential), not completed work (results + claims)'), includes extensive tables of grant agency details that could be in separate reference files, and repeats information across sections (e.g., grant-specific cultural norms appear in both the specifications table and the drafting guidelines).

Actionability (2/3): Provides concrete workflow steps, checkpoint formats, and file output structures, but much of the 'actionable' content is actually orchestration of other sub-skills (e.g., '/research-lit', '/research-review') rather than executable code. The Codex MCP calls are specific, but the actual grant-writing instructions are more descriptive than prescriptive, telling Claude to 'write complete prose' without showing examples of good grant prose.

Workflow Clarity (3/3): Excellent multi-phase workflow (Phase 0-5) with explicit checkpoints marked by ⛔ STOP symbols, user confirmation gates, state persistence via JSON, resume logic, and a clear revision feedback loop with severity levels (CRITICAL/MAJOR/MINOR). The final checklist provides thorough validation.

Progressive Disclosure (1/3): Monolithic wall of text with no references to external files. The grant type specifications (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) should each be in separate reference files. The entire document is ~500+ lines when the SKILL.md should be an overview pointing to detailed materials for each grant type, drafting guidelines, and review templates.

Total: 7/12 (Passed)

Validation: 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 8/11 passed

Validation for skill structure

skill_md_line_count (Warning): SKILL.md is long (621 lines); consider splitting into references/ and linking

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 8/11 (Passed)
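The two frontmatter warnings could be resolved with a minimal header. This is a hedged sketch assuming the standard Agent Skills fields (`name`, `description`, `allowed-tools`); the specific unknown keys and unusual tool names the validator flagged are not shown on this page:

```yaml
---
name: grant-proposal
description: Draft a structured grant proposal from research ideas and literature. ...
# keep only standard tool names; move anything nonstandard out of frontmatter
allowed-tools: Read, Write, Edit, Glob, Grep
---
```

Any key beyond these would be reported by a `frontmatter_unknown_keys`-style check, so project-specific metadata is better kept in the skill body or a separate file.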

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)
