Skill description under review: "Use grant budget justification for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries."
Quality: 37% — does it follow best practices?
Impact: Pending — no eval scenarios have been run
Passed — no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/Academic Writing/grant-budget-justification/SKILL.md

Quality
Discovery
40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails to communicate what the skill actually does: it describes process qualities ('structured execution', 'explicit assumptions') rather than concrete capabilities. While it includes the domain term 'grant budget justification', it lacks actionable detail about the outputs or specific tasks it performs, making it difficult for Claude to confidently select this skill.
Suggestions
Replace abstract process descriptors with concrete actions (e.g., 'Writes budget justification narratives, calculates personnel and equipment costs, formats cost breakdowns for grant proposals').
Expand trigger terms to include common grant-related keywords like 'NSF', 'NIH', 'research proposal', 'funding request', 'personnel costs', 'indirect costs', 'F&A rates'.
Make the 'Use when' clause more specific with concrete triggers (e.g., 'Use when the user needs to write or review budget justifications for grant applications, explain research costs, or format proposal budgets').
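Combining the suggestions above, a revised frontmatter description might look like the following sketch (the wording is hypothetical, not an official recommendation):

```yaml
# Hypothetical SKILL.md frontmatter revision; wording is illustrative.
name: grant-budget-justification
description: >
  Writes budget justification narratives for grant proposals (NSF, NIH,
  and other funders): calculates personnel, equipment, travel, and
  indirect/F&A costs and formats cost breakdowns. Use when the user needs
  to write or review a budget justification for a grant application,
  explain research costs, or format a proposal budget.
```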
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'structured execution', 'explicit assumptions', and 'clear output boundaries' without describing any concrete actions. It doesn't specify what the skill actually does (e.g., 'writes budget narratives', 'calculates cost breakdowns', 'formats justification sections'). | 1 / 3 |
| Completeness | The description has a 'Use when' clause addressing when to use it ('academic writing workflows'), but the 'what' is extremely weak: it doesn't explain what the skill actually does beyond vague process descriptors. The 'when' guidance is also abstract rather than concrete. | 2 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'grant', 'budget justification', and 'academic writing' that users might naturally say. However, it's missing common variations like 'NSF', 'NIH', 'research proposal', 'funding', 'personnel costs', or 'indirect costs'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The phrase 'grant budget justification' provides some specificity, but 'academic writing workflows' is broad and could overlap with other academic writing skills. The abstract descriptors like 'structured execution' don't help distinguish it. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe verbosity with template boilerplate that adds no value (Risk Assessment tables, Security Checklists, Lifecycle Status). The core task—generating budget justifications—lacks concrete examples showing actual input JSON/CSV format and expected narrative output. The skill reads like an auto-generated template rather than actionable guidance for Claude.
Suggestions
Remove generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that don't provide task-specific guidance
Add a concrete, complete example showing actual budget item input (JSON/CSV format) and the corresponding narrative justification output
Replace abstract workflow steps with specific actions: show what validation of budget items looks like, what agency-specific requirements to check
Consolidate redundant sections—'Implementation Details', 'Workflow', 'Example Usage', and 'Error Handling' overlap significantly and should be merged into a single clear workflow
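As a sketch of the kind of concrete input/output example suggested above (the field names, schema, and narrative wording are hypothetical, not taken from the skill):

```python
import json

# Hypothetical budget-item input; the field names are illustrative,
# not the skill's actual schema.
raw = """{
  "category": "Equipment",
  "name": "Mass spectrometer",
  "cost": 50000,
  "purpose": "protein identification in Aims 1 and 2"
}"""

def justify(item: dict) -> str:
    # Render a one-paragraph narrative justification for one budget item.
    return (
        f"{item['category']}: {item['name']}, ${item['cost']:,}. "
        f"This item is essential for {item['purpose']} and is not "
        f"available through existing institutional resources."
    )

item = json.loads(raw)
print(justify(item))
```

An example like this, paired with the expected narrative output, would give Claude a concrete target format instead of the vague 'Input → Output' sketch the review criticizes.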
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with excessive boilerplate, redundant sections (e.g., 'See ## Prerequisites above' when it's below), and template-like content that doesn't add value. Much of the content (Risk Assessment, Security Checklist, Lifecycle Status) is generic filler that Claude doesn't need. | 1 / 3 |
| Actionability | Provides some concrete commands (python -m py_compile, --help flags) and a parameter table, but lacks actual executable code showing how to generate budget justifications. The 'Example' section is vague ('Input: $50,000 for mass spectrometer → Output: Justification emphasizing essentiality') without showing real input/output formats. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a 5-step sequence with validation concepts, but steps are abstract ('Confirm the user objective') rather than concrete. It is missing explicit validation checkpoints for the actual budget justification process, with no feedback loop for reviewing generated justifications against agency requirements. | 2 / 3 |
| Progressive Disclosure | References external files (references/audit-reference.md, scripts/main.py) appropriately, but the main document is bloated with sections that should either be removed or moved to reference files. The structure exists, but content organization is poor, with redundant cross-references to non-existent sections. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
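One way to make the missing validation checkpoint concrete: after generating each justification, run it through explicit checks before returning it. A minimal sketch, with hypothetical rules that are not taken from the skill:

```python
# Sketch of an explicit validation checkpoint for generated justifications;
# the checks shown are hypothetical examples, not agency requirements.
def check_justification(text: str, cost: int) -> list[str]:
    problems = []
    # The narrative must state the exact budgeted cost.
    if f"${cost:,}" not in text:
        problems.append("stated cost does not match the budget item")
    # The narrative must make an explicit case for necessity.
    if "essential" not in text.lower() and "necessary" not in text.lower():
        problems.append("missing an explicit statement of necessity")
    return problems

issues = check_justification(
    "Equipment: Mass spectrometer, $50,000. This item is essential "
    "for protein identification.", 50000)
print(issues)  # an empty list means the justification passes
```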
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
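Per the warning's own suggestion, unknown top-level frontmatter keys can be nested under a metadata block. A minimal sketch (the keys shown are hypothetical examples, not the skill's actual frontmatter):

```yaml
# Before: unknown top-level keys trigger frontmatter_unknown_keys
# author: Jane Doe        # hypothetical unknown key
# version: 1.2            # hypothetical unknown key
# After: nest them under metadata, as the warning suggests
name: grant-budget-justification
description: ...
metadata:
  author: Jane Doe
  version: 1.2
```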