grant-mock-reviewer

Simulates NIH study section peer review for grant proposals. Triggers.


Quality: 31%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/grant-mock-reviewer/SKILL.md"

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche (NIH grant peer review simulation) but is critically incomplete. The truncated 'Triggers.' fragment suggests the description was cut off or left unfinished, leaving no explicit guidance on when Claude should select this skill. Adding concrete actions and a proper 'Use when...' clause would significantly improve it.

Suggestions

Complete the truncated 'Triggers.' fragment with an explicit 'Use when...' clause, e.g., 'Use when the user asks for feedback on a grant proposal, NIH review, study section critique, or grant scoring.'

Add specific concrete actions such as 'Evaluates significance, innovation, approach, investigators, and environment; assigns preliminary scores; generates summary statements in NIH format.'

Include natural trigger terms users would say, such as 'R01', 'grant application', 'specific aims page', 'grant critique', 'fundability', and 'study section feedback'.
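Combining these suggestions, a revised frontmatter description could read roughly as follows; the wording is a hypothetical sketch, assuming the standard SKILL.md name/description frontmatter fields:

---
name: grant-mock-reviewer
description: Simulates NIH study section peer review for grant proposals. Evaluates significance, innovation, approach, investigators, and environment; assigns preliminary scores; and generates summary statements in NIH format. Use when the user asks for a grant critique, study section feedback, a fundability assessment, or scoring of an R01 or other grant application, including review of a specific aims page.
---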

Dimension / Reasoning / Score

Specificity

Names the domain (NIH study section peer review) and one action (simulates review for grant proposals), but does not list multiple specific concrete actions like scoring, critiquing significance, evaluating methodology, etc.

2 / 3

Completeness

The 'what' is partially addressed (simulates peer review), but the 'when' clause is missing entirely: 'Triggers.' appears to be a truncated or placeholder fragment with no actual trigger guidance. Per the rubric, a weak 'when' clause would cap completeness at 2; since the trigger guidance here is essentially absent, it scores 1.

1 / 3

Trigger Term Quality

Includes relevant keywords like 'NIH', 'study section', 'peer review', and 'grant proposals', which users in academia might naturally use, but misses common variations like 'R01', 'grant application', 'specific aims', 'critique', or 'score'. The word 'Triggers.' is incomplete and meaningless.

2 / 3

Distinctiveness Conflict Risk

The description targets a very specific niche — NIH study section peer review for grant proposals — which is unlikely to conflict with other skills. The domain is narrow and well-defined.

3 / 3

Total: 8 / 12

Passed

Implementation

22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate template content (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that provides no skill-specific value and wastes significant token budget. While the NIH-specific content (scoring rubric, common weaknesses, review output format) is genuinely useful domain knowledge, it's buried in repetitive generic sections and competing workflow descriptions. The skill would benefit enormously from cutting 60%+ of the content and focusing on the actual grant review process.

Suggestions

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status, Input Validation, Response Template, Output Requirements) that add no grant-review-specific value and consume ~40% of the token budget.

Consolidate the three competing workflow descriptions (Example Usage run plan, Workflow section, Implementation Details) into a single clear step-by-step process with explicit validation checkpoints for the review output.

Move the detailed NIH scoring tables and Common Weaknesses catalog into reference files (they're already listed in References) and keep only a concise summary inline.

Remove self-referential broken cross-references like 'See ## Prerequisites above' and 'See ## Usage above' that point to sections appearing later in the document.
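As a rough sketch of the consolidated shape these suggestions point toward (the section title, step wording, and reference filenames are illustrative, not taken from the skill):

## Workflow
1. Confirm the input is a grant proposal or a section of one (for example, a specific aims page); if it is not, ask the user for it.
2. Score each NIH criterion in turn: significance, innovation, approach, investigators, environment. Point to references/nih-scoring.md for the full rubric instead of repeating it inline.
3. Identify weaknesses, checking the draft against references/common-weaknesses.md.
4. Generate the critique and summary statement in NIH format, and verify that every criterion has a score and at least one stated strength or weakness before returning the review.

The generic boilerplate sections named above would simply be deleted rather than relocated.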

Dimension / Reasoning / Score

Conciseness

Extremely verbose at more than 300 lines, with massive amounts of boilerplate, redundant sections, and content Claude already knows. Sections like 'When to Use' repeat the description verbatim, 'Key Features' is generic filler, cross-references point to sections that do not exist at the point of reference ('See ## Prerequisites above'), and entire sections of generic template content (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) add no skill-specific value.

1 / 3

Actionability

The CLI examples and library usage code are concrete and specific, and the parameter table is well-defined. However, much of the 'actionable' content is likely fictional—the script probably doesn't exist with all those flags, and the library import pattern is aspirational rather than verified. The core review workflow itself (how to actually perform the critique) relies entirely on running a script rather than providing executable guidance.

2 / 3

Workflow Clarity

There are multiple competing workflow sections ('Example Usage' run plan, 'Workflow' section, 'Implementation Details') that are all generic and vague. None provide clear validation checkpoints specific to grant review. Steps like 'Validate that the request matches the documented scope' are abstract platitudes. The actual NIH review process workflow (score, critique, generate statement) is never sequenced as a clear step-by-step process.

1 / 3

Progressive Disclosure

References to files in `references/` directory are well-signaled and one level deep, which is good. However, the main document is a monolithic wall of text that includes enormous amounts of inline content (Common Weaknesses Detected, NIH Scoring System, full parameter tables) that should be in reference files. The document tries to be both overview and comprehensive reference simultaneously.

2 / 3

Total: 6 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed
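
For the frontmatter warning above, the fix would look something like the sketch below. The report does not name the offending key, so 'version' is a purely hypothetical stand-in, and nesting it under a metadata block assumes the validator accepts extra fields there, as its own message suggests.

Before (triggers the warning):

---
name: grant-mock-reviewer
description: ...
version: 1.0.0   # hypothetical unknown top-level key
---

After:

---
name: grant-mock-reviewer
description: ...
metadata:
  version: 1.0.0
---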

Repository: aipoch/medical-research-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.