grant-budget-justification

Use grant budget justification for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.


Quality: 37% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/grant-budget-justification/SKILL.md"

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a specific domain (grant budget justification) but does not articulate what the skill actually does: no concrete actions, outputs, or deliverables are mentioned. The 'when' clause exists but is filled with abstract process language rather than actionable triggers. The description needs substantially more specificity and completeness to be useful for skill selection.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Generates line-item budget justifications, calculates personnel costs, and formats budget narratives for grant proposals.'

Expand trigger terms with natural user language variations such as 'NSF budget', 'NIH proposal', 'budget narrative', 'funding request justification', 'grant proposal budget'.

Replace abstract phrases like 'structured execution, explicit assumptions, and clear output boundaries' with concrete scenarios, e.g., 'Use when drafting or revising budget justification sections for federal or institutional grant applications.'
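Combining these suggestions, a revised frontmatter description might read as follows. This is an illustrative sketch, not the skill's actual frontmatter; the `name` and `description` field names follow common SKILL.md conventions.

```markdown
---
name: grant-budget-justification
description: >
  Generates line-item budget justification narratives for grant proposals:
  calculates personnel costs, justifies equipment purchases, and formats
  budget narratives to agency conventions. Use when drafting or revising
  the budget justification section of an NSF, NIH, or institutional grant
  application ("budget narrative", "proposal budget", "funding request
  justification").
---
```

Note how the rewrite leads with concrete actions (generates, calculates, formats) and folds the suggested trigger-term variations directly into the 'Use when' clause.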

Dimension scores:

Specificity: 1 / 3
The description does not list any concrete actions. 'Structured execution, explicit assumptions, and clear output boundaries' are abstract process descriptors, not specific capabilities like 'calculate budget totals' or 'generate line-item justifications'.

Completeness: 2 / 3
There is a 'Use when' clause mentioning 'academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries,' but the 'what does this do' part is essentially absent: it never explains what concrete outputs or actions the skill performs.

Trigger Term Quality: 2 / 3
'Grant budget justification' and 'academic writing' are relevant domain keywords a user might mention, but the description lacks common variations like 'NSF budget', 'proposal budget', 'budget narrative', 'funding justification', or specific file/document types.

Distinctiveness / Conflict Risk: 2 / 3
'Grant budget justification' is a fairly specific niche, which helps, but the vague language about 'structured execution' and 'clear output boundaries' could overlap with any academic writing or planning skill.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate (security checklists, lifecycle status, evaluation criteria, risk assessments) that is not specific to grant budget justification and wastes token budget. The domain-specific content is thin: there are no concrete examples of actual budget justification text, no agency-specific formatting rules, and no sample inputs or outputs that would help Claude produce quality results. The skill would benefit enormously from cutting 60%+ of the generic content and replacing it with actual grant budget justification examples and agency-specific guidance.

Suggestions

Remove generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that Claude doesn't need and that aren't specific to budget justification—move to separate files if needed.

Add 1-2 complete input/output examples showing actual budget justification narrative text for different categories (e.g., equipment, personnel) with agency-specific formatting.

Eliminate circular references ('See ## Prerequisites above', 'See ## Workflow above') and consolidate duplicated content (the workflow appears in multiple places with slightly different wording).

Add concrete agency-specific rules (NIH vs NSF budget justification requirements, formatting differences, common compliance pitfalls) instead of generic process documentation.
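Building on the existing thin example ('Input: $50,000 for mass spectrometer / Output: Justification emphasizing essentiality and cost-sharing'), a fleshed-out example section in the SKILL.md could look like the sketch below. The narrative text and dollar figures beyond the $50,000 input are illustrative inventions, not content from the skill.

```markdown
## Example: Equipment Justification

**Input:** $50,000 for a mass spectrometer

**Output:**
> The requested mass spectrometer ($50,000) is essential to the proposed
> analyses and is not available through the institutional core facility
> within the required turnaround time. The department will cost-share
> annual maintenance, reducing the total funding request.
```

Pairing a literal input with a complete sample narrative gives the agent a target register and structure to imitate, which a one-line summary of the output cannot.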

Dimension scores:

Conciseness: 1 / 3
Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Workflow above'). The skill explains generic concepts Claude already knows (error handling patterns, security checklists, lifecycle status, evaluation criteria). Much of the content is boilerplate that adds no domain-specific value for grant budget justification. The actual task-specific content (budget justification for equipment, personnel, etc.) is buried under layers of generic process documentation.

Actionability: 2 / 3
The Parameters table and CLI commands are concrete and useful. However, the core task of writing budget justifications lacks executable examples. The 'Example' section is extremely thin ('Input: $50,000 for mass spectrometer / Output: Justification emphasizing essentiality and cost-sharing') with no actual sample output text. The scripts/main.py is referenced but no actual code or sample justification text is provided to show what good output looks like.

Workflow Clarity: 2 / 3
The Workflow section provides a numbered sequence and the Example Usage section has a run plan, but both are generic and not specific to grant budget justification. There is no validation checkpoint specific to the domain (e.g., checking compliance with agency requirements, verifying budget categories). The error handling section mentions fallbacks, but they are abstract rather than concrete steps.

Progressive Disclosure: 2 / 3
There is a reference to references/audit-reference.md and the references/ directory, which is good. However, the SKILL.md itself is a monolithic wall of text with many sections that could be split out (Risk Assessment, Security Checklist, Evaluation Criteria, and Lifecycle Status are all inline). The document is over-structured, with too many top-level sections rather than a concise overview pointing to detailed materials.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure:

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)
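To illustrate the frontmatter_unknown_keys warning: an unrecognized top-level key can usually be either deleted or nested under a metadata block. The `author` key below is a hypothetical example of an unknown key, not one taken from this skill.

```markdown
# Before: 'author' is not a recognized top-level key (hypothetical)
---
name: grant-budget-justification
author: aipoch
---

# After: the unknown key is moved under 'metadata'
---
name: grant-budget-justification
metadata:
  author: aipoch
---
```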

Repository: aipoch/medical-research-skills (Reviewed)

