
nih-biosketch-builder

Generate NIH Biosketch documents compliant with the 2022 OMB-approved format

Install with Tessl CLI:

```shell
npx tessl i github:aipoch/medical-research-skills --skill nih-biosketch-builder
```

Does it follow best practices?


Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear, specialized niche (NIH Biosketch generation) which provides excellent distinctiveness, but suffers from incomplete guidance. It lacks explicit trigger conditions and could benefit from listing more specific actions beyond just 'generate'. The technical terminology may miss users who describe their need in more casual terms.

Suggestions:

- Add a 'Use when...' clause with trigger terms like 'grant application', 'NIH grant', 'biosketch', 'research CV', 'funding proposal'.
- Expand the list of specific actions: 'Generate NIH Biosketch documents including education/training sections, personal statements, and contributions to science'.
- Include natural-language variations users might say: 'CV for NIH grant', 'academic biography', 'researcher profile for funding'.
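Taken together, the suggestions above might produce frontmatter like the following sketch (the wording and trigger phrases are illustrative, not the skill's actual metadata):

```yaml
---
name: nih-biosketch-builder
description: >
  Generate NIH Biosketch documents compliant with the 2022 OMB-approved
  format, including education/training sections, personal statements, and
  contributions to science. Use when the user mentions an NIH grant,
  grant application, funding proposal, biosketch, research CV, academic
  biography, or researcher profile for funding.
---
```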

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (NIH Biosketch) and one action (generate), but doesn't list multiple concrete actions such as 'format citations, organize contributions, structure education history'. | 2 / 3 |
| Completeness | Describes what the skill does (generate NIH Biosketch documents) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select it. | 1 / 3 |
| Trigger Term Quality | Includes 'NIH Biosketch' and '2022 OMB-approved format', which are relevant but technical. Missing natural variations users might say, such as 'CV for grant', 'grant application CV', 'NSF biosketch', or 'research biography'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: NIH Biosketch with the specific 2022 OMB format is highly distinctive and unlikely to conflict with other document-generation skills. | 3 / 3 |
| **Total** | | **8 / 12** |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides solid actionable guidance with executable commands and a complete JSON schema, making it practically useful. However, it's bloated with boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that add no instructional value and waste tokens. The workflow lacks validation steps for verifying generated documents meet NIH requirements.

Suggestions:

- Remove boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status, Prerequisites) that don't provide actionable guidance for document generation.
- Add a validation step after document generation, e.g. 'Verify output: check page count ≤ 5, confirm all 4 sections are present, validate font/margin compliance'.
- Add error-handling guidance for PubMed API failures (timeouts, invalid PMIDs, rate limiting).
- Reconcile the contradictory Prerequisites section ('No additional Python packages required') with the Dependencies section, which lists python-docx and requests.
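The suggested post-generation validation step could be sketched as a plain-text check on the extracted document body. The section headings below follow the NIH fellowship biosketch layout but are assumptions here, as is the `missing_sections` helper; page-count and font checks would need a rendering-aware tool, since python-docx does not paginate:

```python
# Hypothetical post-generation check: verify the expected biosketch
# sections appear in text extracted from the generated DOCX.
REQUIRED_SECTIONS = [
    "A. Personal Statement",
    "B. Positions, Scientific Appointments, and Honors",
    "C. Contributions to Science",
    "D. Scholastic Performance",  # fellowship applications only
]

def missing_sections(document_text: str) -> list[str]:
    """Return the required section headings not found in the document text."""
    return [s for s in REQUIRED_SECTIONS if s not in document_text]

# Example with an obviously incomplete document:
text = "A. Personal Statement\nC. Contributions to Science\n"
print(missing_sections(text))
# → ['B. Positions, Scientific Appointments, and Honors', 'D. Scholastic Performance']
```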

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Contains significant boilerplate that adds no value (Risk Assessment table, Security Checklist, Evaluation Criteria, Lifecycle Status). The core content is reasonably efficient, but nearly half the document is template filler that Claude doesn't need. | 2 / 3 |
| Actionability | Provides fully executable command-line examples, a complete JSON input schema with realistic example data, and specific pip install commands. The code examples are copy-paste ready and cover both basic and advanced usage (PubMed auto-import). | 3 / 3 |
| Workflow Clarity | Commands are listed but lack validation checkpoints. For a document-generation workflow involving external API calls and file creation, there is no guidance on verifying output correctness, handling API failures, or confirming the generated DOCX meets NIH requirements. | 2 / 3 |
| Progressive Disclosure | References external NIH documentation appropriately, but the skill itself is monolithic, with boilerplate sections that should be removed rather than split out. The core structure (Format Requirements → Usage → Dependencies) is logical but buried among unnecessary sections. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed
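The PubMed error-handling suggestion from the review could look like the following sketch. It uses only the standard library for self-containment (the skill itself reportedly depends on requests), and assumes citations are fetched from NCBI's E-utilities efetch endpoint; the function names are hypothetical:

```python
import time
import urllib.error
import urllib.parse
import urllib.request

EUTILS_FETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def should_retry(status: int) -> bool:
    """Retry on rate limiting (429) and transient server errors (5xx)."""
    return status == 429 or 500 <= status < 600

def fetch_pubmed_record(pmid: str, retries: int = 3, timeout: float = 10.0) -> str:
    """Fetch one PubMed record as text, with timeouts, retries, and backoff."""
    query = urllib.parse.urlencode(
        {"db": "pubmed", "id": pmid, "rettype": "abstract", "retmode": "text"}
    )
    url = f"{EUTILS_FETCH}?{query}"
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError as err:
            if should_retry(err.code):
                time.sleep(2 ** attempt)  # back off before retrying
                continue
            raise  # e.g. a 400 for a malformed PMID is not retryable
        except (urllib.error.URLError, TimeoutError):
            time.sleep(2 ** attempt)  # network failure or timeout: retry
            continue
        if not body.strip():
            raise ValueError(f"No PubMed record returned for PMID {pmid!r}")
        return body
    raise RuntimeError(f"PubMed request for PMID {pmid!r} failed after {retries} attempts")
```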

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| **Total** | | **10 / 11** |

Passed
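The lone frontmatter warning can usually be cleared by nesting nonstandard keys under metadata, as the check itself suggests. The report doesn't show which keys triggered it, so the key names below are purely illustrative:

```yaml
# Before: 'version' and 'author' are unknown top-level keys
name: nih-biosketch-builder
version: 1.0.0
author: aipoch

# After: nonstandard keys moved under metadata
name: nih-biosketch-builder
metadata:
  version: 1.0.0
  author: aipoch
```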
