
nih-biosketch-builder

Generate NIH Biosketch documents compliant with the 2022 OMB-approved.


Quality: 31%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/nih-biosketch-builder/SKILL.md"

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinctive niche (NIH Biosketch generation) but is too terse. It lacks a 'Use when...' clause, does not enumerate specific capabilities beyond 'generate,' and the sentence appears grammatically incomplete ('2022 OMB-approved' what?). Adding trigger guidance and more concrete actions would significantly improve skill selection accuracy.

Suggestions

Add a 'Use when...' clause with trigger terms like 'biosketch', 'NIH grant application', 'biographical sketch', 'eRA Commons', 'grant submission'.

List specific concrete actions such as 'format personal statement, list positions and honors, compile contributions to science, populate training and mentoring sections'.

Fix the incomplete phrase '2022 OMB-approved' — specify what it approves (e.g., '2022 OMB-approved format') and consider mentioning the specific form number (e.g., 'NOT-OD-21-073') for additional trigger clarity.
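Folding these suggestions together, the revised frontmatter description might read as follows. This is a sketch only; the exact wording and trigger terms should be adapted to the skill's real scope:

```yaml
---
name: nih-biosketch-builder
description: >
  Generate NIH Biosketch (biographical sketch) documents compliant with the
  2022 OMB-approved format (NOT-OD-21-073). Formats the personal statement,
  lists positions and honors, and compiles contributions to science. Use when
  preparing an NIH grant application or fellowship submission, or when a
  request mentions 'biosketch', 'biographical sketch', or 'eRA Commons'.
---
```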

Scores by dimension:

Specificity: 2 / 3
Names the domain (NIH Biosketch) and one action (generate), and mentions compliance with the 2022 OMB-approved format, but does not list multiple concrete actions like filling sections, formatting contributions, or managing publications.

Completeness: 1 / 3
Describes what it does (generate NIH Biosketch documents) but has no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric caps completeness at 2; the 'what' itself is also thin, bringing it to 1.

Trigger Term Quality: 2 / 3
Includes 'NIH Biosketch' and '2022 OMB-approved', which are relevant domain terms, but misses common variations users might say, like 'CV', 'grant application', 'biographical sketch', 'eRA Commons', or 'NIH grant'.

Distinctiveness / Conflict Risk: 3 / 3
NIH Biosketch is a very specific document type with a clear niche; it is unlikely to conflict with other skills since the domain is narrow and well-defined.

Total: 8 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with boilerplate sections (risk assessment, security checklist, lifecycle status, evaluation criteria, response template) that are not specific to NIH biosketch generation and waste significant token budget. The actual domain-specific content—NIH format requirements and JSON input schema—is useful but buried among generic workflow instructions. The workflow lacks biosketch-specific validation steps (e.g., verifying page count, font compliance, section completeness) which are critical for NIH compliance.

Suggestions

Remove all boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements) that don't contain biosketch-specific guidance—these waste tokens on things Claude already knows.

Replace the generic Workflow section with biosketch-specific steps including validation checkpoints: verify page count ≤5, check all required sections present, validate font/margin compliance in the generated DOCX (see the sketch after these suggestions).

Eliminate redundancy: merge duplicate References sections, remove contradictory Prerequisites vs Dependencies, remove circular cross-references ('See ## Usage above' pointing to content that appears later).

Add a concrete end-to-end example showing actual input data and the expected output structure/content of the generated biosketch DOCX, rather than just showing CLI invocation patterns.
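Picking up the validation-checkpoint suggestion, a minimal compliance-check sketch is shown below. It assumes python-docx (which the skill's own Dependencies section lists) and uses the standard 2022 biosketch section headings; the function and file names are illustrative, and exact page count is a rendering-time property not stored in a DOCX, so only structural checks are sketched:

```python
# Minimal sketch of post-generation compliance checks, assuming python-docx.
# Names (check_biosketch, REQUIRED_SECTIONS) are illustrative, not the skill's API.
from docx import Document
from docx.shared import Inches, Pt

# Section headings required by the 2022 NIH biosketch format.
REQUIRED_SECTIONS = [
    "A. Personal Statement",
    "B. Positions, Scientific Appointments, and Honors",
    "C. Contributions to Science",
]

# Typefaces NIH accepts for grant documents, at 11 pt or larger.
APPROVED_FONTS = {"Arial", "Georgia", "Helvetica", "Palatino Linotype"}

def check_biosketch(path: str) -> list[str]:
    """Return a list of compliance problems found in the generated DOCX."""
    issues = []
    doc = Document(path)

    # Font compliance on the default body style.
    font = doc.styles["Normal"].font
    if font.name and font.name not in APPROVED_FONTS:
        issues.append(f"Body font {font.name!r} is not an NIH-approved typeface")
    if font.size is not None and font.size < Pt(11):
        issues.append("Body font is smaller than 11 pt")

    # Margin compliance: NIH requires at least 0.5 inches on all sides.
    for section in doc.sections:
        margins = (section.top_margin, section.bottom_margin,
                   section.left_margin, section.right_margin)
        if any(m < Inches(0.5) for m in margins):
            issues.append("A page margin is narrower than 0.5 inches")

    # Section completeness: every required heading must appear in the body text.
    text = "\n".join(p.text for p in doc.paragraphs)
    for heading in REQUIRED_SECTIONS:
        if heading not in text:
            issues.append(f"Missing required section: {heading}")

    return issues
```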
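The end-to-end example the last suggestion asks for could pair a concrete input record with the compliance checker sketched above. The field names below are hypothetical, since the skill's actual JSON schema is not reproduced in this review:

```python
# Hypothetical input record; key names are illustrative, not the skill's schema.
sample_input = {
    "name": "Jane Q. Researcher",
    "era_commons_id": "JRESEARCHER",
    "position_title": "Associate Professor",
    "personal_statement": "My research program focuses on ...",
    "positions_and_honors": [
        {"years": "2018-present", "title": "Associate Professor",
         "organization": "Example University"},
    ],
    "contributions_to_science": [
        {"summary": "Developed statistical methods for ...",
         "citations": ["Researcher JQ, et al. (2021) ..."]},
    ],
}

# After scripts/main.py generates the document, run the checks:
# issues = check_biosketch("biosketch.docx")
```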

Scores by dimension:

Conciseness: 1 / 3
Extremely verbose with significant redundancy. Multiple sections repeat the same information (e.g., 'See ## Workflow above' and 'See ## Usage above' cross-references to sections that appear later), plus duplicate References sections, contradictory Prerequisites ('No additional Python packages required' vs. a Dependencies section listing python-docx and requests), and boilerplate security checklists, risk assessments, lifecycle status, and evaluation criteria that add no actionable value for Claude. Much of the content explains things Claude already knows.

Actionability: 2 / 3
Provides concrete CLI commands, a complete JSON input schema, and specific format requirements (fonts, margins, page limits). However, the actual script implementation is not shown: we're told to run scripts/main.py but never see what it does. The 'Example run plan' is generic and not specific to biosketch generation. Many sections are boilerplate rather than task-specific executable guidance.

Workflow Clarity: 1 / 3
The Workflow section is entirely generic (confirm objective, validate request, use packaged script, return structured result) with no biosketch-specific steps. There are no validation checkpoints for the generated DOCX output (e.g., checking page count ≤5, font compliance, section completeness). The 'Example run plan' is similarly generic, and there are no feedback loops for verifying NIH compliance of the output document.

Progressive Disclosure: 2 / 3
References a references/ directory and scripts/main.py for deeper content, which is appropriate. However, the SKILL.md itself is a monolithic wall of text with many sections that should be condensed or removed (risk assessment, security checklist, lifecycle status, evaluation criteria, response template). The document structure exists but is poorly organized, with redundant and misplaced sections.

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.
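As a hypothetical illustration of the warning's suggested fix (the offending key is not named in this report), any non-standard top-level frontmatter key can be moved under metadata:

```yaml
---
name: nih-biosketch-builder
description: Generate NIH Biosketch documents ...
# before: a non-standard top-level key such as the one below triggers
# frontmatter_unknown_keys
# version: "1.0.0"
metadata:
  version: "1.0.0"   # moved under metadata to clear the warning
---
```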

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.