Predict funding trend shifts using NLP analysis of grant abstracts from NIH, NSF, and Horizon Europe
- Quality (Does it follow best practices?): 34%
- Impact: 84% (4.42x average score across 3 eval scenarios)
- Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize "./scientific-skills/Academic Writing/funding-trend-forecaster/SKILL.md"
```

Quality
Discovery
54% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear, specialized niche with good domain-specific terminology that researchers would naturally use. However, it lacks explicit trigger guidance ('Use when...') and could benefit from listing more concrete actions beyond just 'predict'. The specificity of funding agencies provides strong distinctiveness but the incomplete guidance on when to invoke the skill is a significant weakness.
Suggestions
Add a 'Use when...' clause with trigger scenarios like 'Use when analyzing grant trends, exploring funding opportunities, or researching NIH/NSF/Horizon Europe priorities'
Expand the action verbs to be more specific, e.g., 'Analyze grant abstracts, identify emerging research themes, forecast funding priorities, and track topic trends across NIH, NSF, and Horizon Europe'
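Taken together, the two suggestions above could land in the skill's frontmatter roughly like this (a sketch only — the exact field names depend on the SKILL.md spec, and the wording is illustrative):

```yaml
---
name: funding-trend-forecaster
description: >
  Analyze grant abstracts, identify emerging research themes, forecast
  funding priorities, and track topic trends across NIH, NSF, and
  Horizon Europe. Use when analyzing grant trends, exploring funding
  opportunities, or researching NIH/NSF/Horizon Europe priorities.
---
```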
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (funding trends, grant abstracts) and a specific technique (NLP analysis), but doesn't list multiple concrete actions beyond 'predict'. Missing details on what outputs are produced or which specific analyses are performed. | 2 / 3 |
| Completeness | Describes what it does (predict funding trends via NLP) but lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, missing explicit trigger guidance caps completeness at 2, and this description has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'funding', 'grant abstracts', 'NIH', 'NSF', 'Horizon Europe', 'NLP analysis'. These are specific terms researchers and grant professionals would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining funding prediction, specific agencies (NIH, NSF, Horizon Europe), and grant abstracts. Unlikely to conflict with other skills given the specialized domain focus. | 3 / 3 |
| Total | | 9 / 12 — Passed |
Implementation
14% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a product README or marketing document than actionable instructions for Claude. It's bloated with boilerplate sections (roadmap, license, risk assessment, lifecycle status) while lacking the actual implementation code it references. The skill describes what a tool would do rather than teaching Claude how to perform the task.
Suggestions
Remove all boilerplate sections (roadmap, license, risk assessment, lifecycle status, evaluation criteria) that don't provide actionable guidance
Either provide the actual implementation code for the collectors/analyzers/predictors, or restructure as instructions for Claude to build these components step-by-step
Add explicit validation checkpoints for API calls (rate limits, authentication failures, malformed responses) and data processing steps
Split detailed configuration schemas and output formats into separate reference files, keeping SKILL.md focused on the core workflow
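For the API-validation suggestion above, the kind of checkpoint the review is asking for might look like this minimal retry-with-backoff sketch (the function name, exception choices, and validation rule are illustrative, not taken from the skill):

```python
import time
from typing import Callable

def fetch_with_retries(call: Callable[[], dict],
                       max_retries: int = 3,
                       base_delay: float = 1.0) -> dict:
    """Run an API call, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            payload = call()
            # Validation checkpoint: reject malformed responses early.
            if not isinstance(payload, dict):
                raise ValueError(f"expected JSON object, got {type(payload).__name__}")
            return payload
        except (ConnectionError, TimeoutError):
            if attempt == max_retries - 1:
                raise  # Surface the failure instead of silently continuing.
            time.sleep(base_delay * (2 ** attempt))
```

Rate limiting would get the same treatment (catch the 429, sleep, retry), while authentication failures should fail fast rather than retry.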
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive boilerplate (roadmap, license info, risk-assessment tables, lifecycle status) that adds no instructional value. Explains basic concepts Claude already knows and includes marketing-style feature lists instead of actionable guidance. | 1 / 3 |
| Actionability | Provides CLI commands and Python API examples that appear executable, but the code references non-existent scripts (scripts/main.py, a FundingTrendForecaster class) without providing the implementations. The examples are illustrative rather than copy-paste ready. | 2 / 3 |
| Workflow Clarity | No clear multi-step workflow with validation checkpoints. The skill involves network requests, data processing, and file operations, yet offers no error-handling guidance, validation steps, or feedback loops for when API calls fail or data is malformed. | 1 / 3 |
| Progressive Disclosure | A monolithic wall of text with no references to external documentation. Inlines content that belongs in separate files (full architecture diagram, config examples, output schemas) while omitting the implementation details that would need files of their own. | 1 / 3 |
| Total | | 5 / 12 — Passed |
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
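The frontmatter_unknown_keys warning above can usually be cleared by nesting nonstandard keys under a metadata block — a sketch with a hypothetical key name, since the report does not say which keys were flagged:

```yaml
# Before: nonstandard top-level key (hypothetical) triggers the warning
name: funding-trend-forecaster
maintainer: example-team

# After: unknown keys moved under metadata
name: funding-trend-forecaster
metadata:
  maintainer: example-team
```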