
grant-budget-justification

Use grant budget justification for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.

49

Quality

37%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/grant-budget-justification/SKILL.md"

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a specific domain (grant budget justification) but fails to describe any concrete actions the skill performs. The language is abstract and process-oriented rather than capability-oriented, making it difficult for Claude to confidently select this skill. It needs specific actions and richer trigger terms to be effective.

Suggestions

Add concrete actions the skill performs, e.g., 'Creates budget tables, writes personnel and equipment cost justifications, calculates indirect costs, and formats budget narratives for grant proposals.'

Expand trigger terms to include natural variations like 'grant proposal budget', 'NSF/NIH budget narrative', 'research funding justification', 'cost justification', 'budget breakdown'.

Strengthen the 'Use when' clause with explicit triggers, e.g., 'Use when the user needs to draft or revise a budget justification section for a grant proposal, or when working with research funding documents.'
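
Concretely, the suggestions above could be combined into a revised frontmatter description. The following is an illustrative sketch only, assuming the common SKILL.md frontmatter convention of `name` and `description` fields; it is not the skill's actual metadata:

```yaml
---
name: grant-budget-justification
description: >
  Creates budget tables, writes personnel and equipment cost justifications,
  calculates indirect costs, and formats budget narratives for grant proposals.
  Use when the user needs to draft or revise a budget justification section
  for a grant proposal, e.g. an NSF or NIH budget narrative, research funding
  justification, or cost breakdown.
---
```

A description in this shape states concrete actions, carries natural trigger terms, and keeps an explicit 'Use when' clause.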

Dimension | Reasoning | Score

Specificity

The description does not list any concrete actions. Phrases like 'structured execution', 'explicit assumptions', and 'clear output boundaries' are abstract and vague—they describe qualities rather than specific capabilities like 'create budget tables' or 'calculate cost breakdowns'.

1 / 3

Completeness

The 'when' is partially addressed with 'academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries', but the 'what' is essentially missing—there are no concrete actions described. The 'Use when' equivalent is present but vague.

2 / 3

Trigger Term Quality

It includes some relevant keywords like 'grant budget justification' and 'academic writing', which a user might naturally mention. However, it misses common variations such as 'NSF budget', 'NIH justification', 'research funding', 'personnel costs', 'budget narrative', or 'proposal budget'.

2 / 3

Distinctiveness Conflict Risk

'Grant budget justification' is a fairly specific niche, which helps with distinctiveness. However, the vague qualifiers like 'structured execution' and 'clear output boundaries' could overlap with many academic writing or planning skills.

2 / 3

Total: 7 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that provide no grant-budget-specific value and consume significant token budget. The actual domain expertise—how to write budget justifications for different categories (equipment, personnel, supplies, travel) and agencies (NIH, NSF)—is almost entirely absent. The skill would benefit enormously from replacing boilerplate with concrete examples of budget justification narratives and agency-specific formatting rules.

Suggestions

Remove or drastically compress generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that Claude doesn't need and that contain no domain-specific content.

Add concrete input/output examples showing actual budget justification narratives—e.g., a sample JSON input for a mass spectrometer and the full generated NIH-compliant justification text.

Include agency-specific rules and formatting requirements (NIH vs NSF budget justification differences) as actionable guidance rather than just listing agency names as parameter options.

Eliminate circular cross-references ('See ## Prerequisites above', 'See ## Workflow above') and consolidate duplicated content (py_compile appears three times, workflow steps appear twice).
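
For the input/output suggestion, a hypothetical JSON input of the kind the review has in mind might look like the fragment below. Every field name here is illustrative; the skill's actual input schema is not shown on this page:

```json
{
  "agency": "NIH",
  "category": "equipment",
  "item": "mass spectrometer",
  "cost": 50000,
  "justification_focus": "essentiality"
}
```

The skill would then pair such an input with the full generated NIH-compliant justification narrative, rather than the one-line stub it currently provides.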

Dimension | Reasoning | Score

Conciseness

Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Workflow above'). The skill explains generic concepts Claude already knows (execution models, input validation philosophy, error handling principles). Many sections are boilerplate that add no domain-specific value—Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria are all generic templates with no grant-budget-specific content.

1 / 3

Actionability

The Parameters table and CLI commands are concrete and useful. However, the actual grant budget justification logic is never shown—there's no example of what the script produces, no sample input/output pair beyond a one-line stub ('$50,000 for mass spectrometer → Justification emphasizing essentiality'). The workflow steps are abstract process descriptions rather than executable guidance for writing budget justifications.

2 / 3

Workflow Clarity

The Workflow section provides a numbered sequence and the Example Usage section has a run plan, but validation checkpoints are vague ('Validate that the request matches the documented scope'). There's no concrete feedback loop for checking output quality—just generic 'review the generated output.' The error handling section mentions fallbacks but doesn't specify concrete recovery steps for budget justification failures.

2 / 3

Progressive Disclosure

There is a reference to `references/audit-reference.md` and the `references/` directory, which is good. However, the main file is a monolithic wall of text with many sections that could be split out (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status are all generic boilerplate that bloat the main file). The content that matters for the actual task is buried among administrative sections.

2 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
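
The warning suggests moving unrecognized top-level frontmatter keys under `metadata`, as the message itself advises. A minimal sketch of that fix, assuming a hypothetical unknown key `version` (the actual offending key is not named on this page):

```yaml
# Before: unknown top-level key triggers the warning
version: 1.2.0

# After: nested under the metadata block the validator accepts
metadata:
  version: 1.2.0
```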

Total: 10 / 11

Passed

Repository
aipoch/medical-research-skills
Reviewed
