Skill description under review: "Generate NIH Biosketch documents compliant with the 2022 OMB-approved."
Overall quality score: 31%
Does it follow best practices?

- Impact: Pending — no eval scenarios have been run
- Passed — no known issues
Optimize this skill with Tessl (the path contains a space, so it must be quoted):

```shell
npx tessl skill review --optimize "./scientific-skills/Academic Writing/nih-biosketch-builder/SKILL.md"
```
Discovery — 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinctive niche (NIH Biosketch generation) but is too terse. It lacks a 'Use when...' clause, lists only one action, and misses common trigger terms users might employ when requesting this type of document. The sentence also appears grammatically incomplete ('2022 OMB-approved' what?).
Suggestions
- Add a 'Use when...' clause with trigger terms like 'biosketch', 'NIH grant application', 'biographical sketch', 'eRA Commons', 'grant submission'.
- List specific concrete actions such as 'format personal statement, list positions and honors, compile contributions to science, populate training and mentoring sections'.
- Fix the incomplete phrase '2022 OMB-approved' to specify what it refers to (e.g., '2022 OMB-approved format') and add common user-facing synonyms.
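Taken together, these suggestions might yield a description like the following SKILL.md frontmatter sketch (the key names and wording here are assumptions for illustration, not the skill's actual metadata):

```yaml
---
name: nih-biosketch-builder
description: >-
  Generate NIH Biosketch (biographical sketch) documents in the 2022
  OMB-approved format. Formats the personal statement, positions and
  honors, and contributions to science sections, and can import
  publications from PubMed. Use when a user mentions a biosketch,
  biographical sketch, NIH grant application, eRA Commons, or grant
  submission.
---
```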
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (NIH Biosketch) and one action (generate), and mentions compliance with the 2022 OMB-approved format, but does not list multiple concrete actions like filling sections, formatting contributions, or managing publications. | 2 / 3 |
| Completeness | Describes what it does (generate NIH Biosketch documents) but has no explicit 'Use when...' clause or trigger guidance, which per the rubric caps completeness at 2, and the 'what' itself is also thin, warranting a score of 1. | 1 / 3 |
| Trigger Term Quality | Includes 'NIH Biosketch' and '2022 OMB-approved', which are relevant domain terms, but misses common variations users might say like 'CV', 'grant application', 'biographical sketch', 'eRA Commons', or 'NIH grant'. | 2 / 3 |
| Distinctiveness / Conflict Risk | NIH Biosketch is a very specific document type with a clear niche; it is unlikely to conflict with other skills since the domain is narrow and well-defined. | 3 / 3 |
| Total | | 8 / 12 — Passed |
Implementation — 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that are not specific to NIH biosketch generation and waste significant token budget. The NIH-specific content (format requirements, JSON schema, PubMed import commands) is genuinely useful but buried among redundant and generic material. The workflow lacks any biosketch-specific validation steps, which is critical for a compliance-focused document generation task.
Suggestions
- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Prerequisites, Response Template, Output Requirements) that don't contain NIH-biosketch-specific information — this would cut the file by ~60%.
- Replace the generic workflow with NIH-biosketch-specific steps including validation checkpoints: verify page count ≤ 5, check that all 4 required sections are present, and validate font/margin compliance in the generated DOCX.
- Remove redundant cross-references ('See ## Usage above', 'See ## Workflow above') and consolidate into a single logical flow: format requirements → input schema → CLI commands → validation.
- Add a concrete example showing a minimal complete input JSON and the expected output document structure, rather than the generic 'Standard input → Expected output' test case.
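For the last point, a minimal complete input might look like this (field names are illustrative guesses based on the review, not the skill's actual schema):

```json
{
  "name": "Jane Q. Researcher",
  "era_commons_username": "JQRESEARCHER",
  "position_title": "Associate Professor",
  "personal_statement": "My research focuses on ...",
  "positions_and_honors": [
    {"years": "2018–present", "title": "Associate Professor", "institution": "Example University"}
  ],
  "contributions_to_science": [
    {"summary": "Developed methods for ...", "pmids": ["12345678"]}
  ]
}
```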
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with massive redundancy. Multiple sections repeat the same information (e.g., 'See ## Usage above' and 'See ## Workflow above' cross-references to sections that appear later). Includes boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, and Prerequisites that add no actionable value. Generic workflow steps and output requirements cover things Claude already knows how to do. The 'Prerequisites' section contradicts the 'Dependencies' section. | 1 / 3 |
| Actionability | Provides concrete CLI commands, a complete JSON input schema, and specific format requirements (fonts, margins, page limits). However, the actual script implementation is not shown — we're told to run scripts/main.py but never see what it does. Many sections are generic boilerplate ('Standard input → Expected output') rather than specific executable guidance. The PubMed auto-import example is concrete and useful. | 2 / 3 |
| Workflow Clarity | The workflow section is entirely generic ('Confirm the user objective, required inputs...') with no NIH-biosketch-specific steps. There are no validation checkpoints for the generated document (e.g., checking page count ≤ 5, font compliance, section completeness). The 'Example run plan' is also generic. For a document generation task involving specific format compliance, the absence of validation/verification steps for the output is a significant gap. | 1 / 3 |
| Progressive Disclosure | References to external files exist (references/audit-reference.md, scripts/main.py) and the content is organized with headers. However, the skill itself is monolithic, with too much inline content that could be separated (Risk Assessment, Security Checklist, and Evaluation Criteria are all inline boilerplate). The cross-references ('See ## Usage above') point to sections that appear later in the document, creating confusion. | 2 / 3 |
| Total | | 6 / 12 — Passed |
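The validation gap flagged in the Workflow Clarity row could be closed with a small post-generation check. A minimal sketch, assuming the generator can report basic document properties; the function, property names, and section list are hypothetical, not part of the skill, and the section names should be confirmed against the current NIH instructions:

```python
# Hypothetical post-generation compliance check for a generated biosketch.
# All names here are illustrative; the real skill would extract these
# values from the generated DOCX.

MAX_PAGES = 5
MIN_MARGIN_IN = 0.5
MIN_FONT_PT = 11
APPROVED_FONTS = {"Arial", "Georgia", "Helvetica", "Palatino Linotype"}
# Illustrative section list; confirm against the current NIH format pages.
REQUIRED_SECTIONS = [
    "Personal Statement",
    "Positions, Scientific Appointments, and Honors",
    "Contributions to Science",
]

def validate_biosketch(props: dict) -> list[str]:
    """Return a list of compliance issues for extracted document properties."""
    issues = []
    if props.get("page_count", 0) > MAX_PAGES:
        issues.append(f"exceeds {MAX_PAGES}-page limit")
    for section in REQUIRED_SECTIONS:
        if section not in props.get("sections", []):
            issues.append(f"missing section: {section}")
    if props.get("font") not in APPROVED_FONTS:
        issues.append("font not on NIH-approved list")
    if props.get("font_size_pt", 0) < MIN_FONT_PT:
        issues.append(f"font smaller than {MIN_FONT_PT} pt")
    if props.get("margin_in", 0) < MIN_MARGIN_IN:
        issues.append(f"margins below {MIN_MARGIN_IN} inch")
    return issues
```

Running such a check after document generation, and reporting the issue list back to the user, would turn the generic workflow into a compliance-verifying one.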
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |