grant-proposal

Draft a structured grant proposal from research ideas and literature. Supports KAKENHI (Japan), NSF (US), NSFC (China, including 面上/青年/优青/杰青/海外优青/重点), ERC (EU), DFG (Germany), SNSF (Switzerland), ARC (Australia), NWO (Netherlands), and generic formats. Use when user says "write grant", "grant proposal", "申請書", "write KAKENHI", "科研費", "基金申请", "写基金", "NSF proposal", or wants to turn research ideas into a funding application.

79

- Quality: 77% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Advisory (suggest reviewing before use)

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/grant-proposal/SKILL.md`

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its purpose, lists specific supported funding agencies with impressive detail (including NSFC sub-categories), and provides comprehensive multilingual trigger terms. It follows the recommended pattern with an explicit 'Use when...' clause and uses third-person voice throughout. The description is both concise and information-dense.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists a concrete action ('Draft a structured grant proposal from research ideas and literature') and enumerates multiple specific funding agencies with sub-types (e.g., NSFC sub-categories like 面上/青年/优青/杰青). This is highly specific about what the skill does. | 3 / 3 |
| Completeness | Clearly answers both 'what' (draft structured grant proposals from research ideas and literature, supporting multiple funding agencies) and 'when' (an explicit 'Use when...' clause with specific trigger phrases and the general case of turning research ideas into funding applications). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms across multiple languages: 'write grant', 'grant proposal', '申請書', '科研費', '基金申请', '写基金', 'NSF proposal', 'funding application', plus specific agency names like KAKENHI, NSF, NSFC, ERC, DFG, etc. These are terms users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: grant proposal writing for specific funding agencies. The combination of agency names (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) and multilingual trigger terms makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is exceptionally thorough and actionable with a well-designed multi-phase workflow including explicit checkpoints, state persistence, and review loops. However, it is severely undermined by its length (~500+ lines) — the grant type specifications, detailed prompt templates, and drafting guidelines should be split into separate reference files rather than inlined, making this a textbook case of poor progressive disclosure despite excellent workflow design and actionability.

Suggestions:

- Extract each grant type specification (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) into a separate reference file (e.g., `grant-types/KAKENHI.md`) and reference them from the main skill with one-line descriptions.
- Move the detailed Codex prompt templates (Phase 4 review prompts) into a separate `grant-proposal/REVIEW_PROMPTS.md` file, keeping only the invocation pattern inline.
- Remove explanatory content Claude already knows, such as the distinction between grants and papers, what 'independently valuable' aims means, and general grant-writing philosophy; keep only the specific instructions and templates.
- Consolidate the grant-specific drafting guidelines (Phase 3) into the per-agency reference files suggested above, rather than repeating cultural norms that already appear in the specification tables.
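The first two suggestions amount to a progressive-disclosure refactor. A minimal sketch of one possible layout follows; only `SKILL.md` and `grant-types/KAKENHI.md` appear in the suggestions above, the other names are illustrative:

```
skills/grant-proposal/
├── SKILL.md              # phases, checkpoints, invocation patterns only
├── REVIEW_PROMPTS.md     # Phase 4 Codex review prompt templates
└── grant-types/
    ├── KAKENHI.md        # sections, page limits, cultural norms
    ├── NSF.md
    └── ...               # one file per supported agency
```

`SKILL.md` would then point at each specification in a single line, e.g. "KAKENHI (Japan): see `grant-types/KAKENHI.md` for section structure and review criteria", so an agent loads agency details only when that grant type is requested.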

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | This skill is extremely verbose at ~500+ lines. It explains concepts Claude already knows (e.g., 'Grant proposals argue for future work (feasibility + potential), not completed work (results + claims)'), includes extensive tables of grant agency details that could live in separate reference files, and repeats information across sections (e.g., cultural norms appear in both the specifications table and the drafting guidelines). | 1 / 3 |
| Actionability | The skill provides highly concrete, executable guidance: specific MCP tool invocations with exact prompt templates, detailed file structures, JSON state schemas, specific commands for each phase, and copy-paste-ready checkpoint messages. Every step has clear instructions on what to do and what to produce. | 3 / 3 |
| Workflow Clarity | The six-phase workflow (Phases 0-5) is clearly sequenced, with explicit checkpoints marked by ⛔ STOP symbols, user response options at each checkpoint, state persistence between phases, error recovery via GRANT_STATE.json, and a final validation checklist. Feedback loops are built into the review-revise cycle with severity-based triage. | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of text. The grant type specifications (KAKENHI, NSF, NSFC, ERC, DFG, SNSF, ARC, NWO) should each live in separate reference files rather than inline. The detailed drafting guidelines, Codex prompt templates, and figure-generation instructions could all be split into referenced sub-documents. The skill references shared protocols at the end but fails to apply the same principle to its own massive content. | 1 / 3 |
| Total | | 8 / 12 |

Passed
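The review credits the workflow with state persistence and error recovery via GRANT_STATE.json but does not reproduce the schema. A hypothetical shape for such a state file, with every field name assumed rather than taken from the skill, might be:

```json
{
  "grant_type": "KAKENHI",
  "current_phase": 3,
  "completed_phases": [0, 1, 2],
  "awaiting_checkpoint": true,
  "outputs": {
    "aims": "drafts/aims.md",
    "background": "drafts/background.md"
  }
}
```

A file like this lets an agent resume at the correct phase after an interruption instead of restarting the six-phase workflow from scratch.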

Validation: 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 8 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (628 lines); consider splitting into references/ and linking | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 |

Passed
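All three warnings point at the SKILL.md frontmatter. A sketch of a cleaned-up header is below; the `metadata` key follows the validator's own suggestion, while the tool names and the `version` field are placeholders, since the report does not list the flagged 'unusual' or 'unknown' keys:

```yaml
---
name: grant-proposal
description: Draft a structured grant proposal from research ideas and literature. ...
allowed-tools: Read, Write, Edit   # keep only tool names the validator recognizes
metadata:
  # unknown top-level keys moved here, per the frontmatter_unknown_keys warning
  version: "1.0"
---
```

This addresses the two frontmatter warnings; the line-count warning still requires splitting the body into reference files as suggested in the Implementation section.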

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)

