
residency-interview-prep

Mock interview preparation tool for residency Match interviews. Generates.

34

Quality

18%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/residency-interview-prep/SKILL.md"

Quality

Discovery

22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is critically incomplete—the sentence is truncated at 'Generates.' leaving the reader without knowledge of what is actually generated. It lacks a 'Use when...' clause, has minimal trigger terms, and fails to articulate the full scope of capabilities. The residency Match interview domain provides some distinctiveness, but the description is too broken to be functional for skill selection.

Suggestions

- Complete the truncated sentence to specify what is generated (e.g., 'Generates practice interview questions, sample answers, and feedback for residency Match interviews').
- Add an explicit 'Use when...' clause with trigger terms like 'mock interview', 'residency interview prep', 'Match Day', 'practice questions', 'medical residency', 'behavioral interview'.
- List specific concrete actions the skill performs (e.g., 'Simulates interviewer questions, provides feedback on responses, covers common residency interview topics like MMI scenarios and behavioral questions').
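
Taken together, these suggestions might produce frontmatter like the sketch below. The field layout follows the usual SKILL.md frontmatter conventions, and the description text is assembled from the review's own examples; the exact wording is illustrative, not a prescribed fix:

```yaml
---
name: residency-interview-prep
description: >
  Mock interview preparation tool for residency Match interviews. Generates
  practice questions (behavioral, MMI, ethical), sample STAR-format answers,
  and feedback on user responses. Use when the user mentions mock interviews,
  residency interview prep, Match Day, practice questions, or medical
  residency behavioral interviews.
---
```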

Dimension scores

- Specificity (1 / 3): The description mentions 'mock interview preparation' and 'generates' but the sentence is incomplete ('Generates.' with a period and no object). It names a domain but fails to describe concrete actions comprehensively.
- Completeness (1 / 3): The 'what' is incomplete (the sentence cuts off at 'Generates.') and there is no 'when' clause or explicit trigger guidance whatsoever. The missing 'Use when...' clause alone would cap this at 2, but the truncated capability description drops it to 1.
- Trigger Term Quality (2 / 3): Contains some relevant keywords like 'mock interview', 'residency', and 'Match interviews' that users in this domain might naturally use, but is missing common variations like 'practice questions', 'interview prep', 'ERAS', 'medical residency', or 'behavioral questions'.
- Distinctiveness / Conflict Risk (2 / 3): The combination of 'residency Match interviews' provides some niche specificity that distinguishes it from generic interview prep skills, but the incomplete description and lack of clear triggers could still cause overlap with other interview or medical education skills.

Total: 6 / 12

Passed

Implementation

14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overwhelmingly boilerplate with very little domain-specific content about residency interview preparation. The actual useful content (Features, Input Parameters, Output Format) comprises perhaps 15% of the document, while the rest is generic scaffolding about error handling, security checklists, lifecycle status, and risk assessment that Claude doesn't need. The self-referential 'See above' links and duplicated workflow descriptions make the document confusing and wasteful.

Suggestions

- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Output Requirements, Response Template) and focus on the domain-specific interview prep content.
- Add concrete examples of generated interview questions with sample STAR-format responses for at least 2-3 question types (behavioral, clinical, ethical).
- Provide a specific workflow for conducting a mock interview session: e.g., 1) Select question type and specialty, 2) Generate question, 3) User responds, 4) Evaluate response against STAR criteria, 5) Provide specific feedback.
- Eliminate self-referential loops ('See ## Features above') and consolidate duplicated content into a single, well-organized flow.
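
The suggested session workflow could be sketched as a small loop like the one below. This is a hypothetical illustration only: the question bank, the `evaluate_response` and `mock_interview` names, and the substring-based STAR check are stand-ins, not the skill's actual `scripts/main.py` logic.

```python
# Hypothetical sketch of the suggested mock-interview workflow:
# select question type -> generate question -> evaluate response -> give feedback.
QUESTIONS = {
    "behavioral": "Tell me about a time you handled conflict on a care team.",
    "ethical": "A patient refuses a transfusion on religious grounds. What do you do?",
}

STAR_CRITERIA = ["situation", "task", "action", "result"]


def evaluate_response(response: str) -> dict:
    """Report which STAR components the response explicitly names (naive substring check)."""
    text = response.lower()
    covered = [c for c in STAR_CRITERIA if c in text]
    missing = [c for c in STAR_CRITERIA if c not in covered]
    feedback = (
        f"Address the missing STAR components: {', '.join(missing)}"
        if missing
        else "All STAR components covered."
    )
    return {"covered": covered, "missing": missing, "feedback": feedback}


def mock_interview(question_type: str, response: str) -> dict:
    """Run one round: pick a question for the type and score the candidate's answer."""
    question = QUESTIONS[question_type]
    return {"question": question, "evaluation": evaluate_response(response)}
```

A real implementation would replace the substring check with a rubric-based evaluation, but even this minimal shape gives the agent the concrete checkpoints the review says are missing.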

Dimension scores

- Conciseness (1 / 3): Extremely verbose and repetitive. Contains numerous sections that add no value (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria with generic test cases). Multiple self-referential loops ('See ## Features above', 'See ## Prerequisites above', 'See ## Workflow above') that waste tokens. Generic boilerplate dominates over domain-specific content. The actual residency interview prep content (Features, Input Parameters, Output Format) is buried under layers of unnecessary scaffolding.
- Actionability (2 / 3): The Input Parameters table, Output Format JSON schema, and Features list provide some concrete guidance. However, the actual interview question generation logic is never shown: there is no example of a generated question, no sample STAR-format response, and 'scripts/main.py' is referenced but never demonstrated with real domain-specific usage. The bash commands are concrete but generic (py_compile, --help).
- Workflow Clarity (1 / 3): The workflow section is entirely generic ('Confirm the user objective', 'Validate that the request matches the documented scope') with no residency-interview-specific steps. There are no validation checkpoints specific to interview prep quality. The 'Example run plan' is also generic boilerplate. No feedback loop for evaluating response quality or iterating on interview answers.
- Progressive Disclosure (1 / 3): The document is a monolithic wall of text with 15+ sections, many of which are redundant or self-referential. References like 'See ## Features above' and 'See ## Prerequisites above' point to sections within the same document, creating confusion rather than clarity. The references/ directory is mentioned but never described. No clear hierarchy between essential and supplementary content.

Total: 5 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Checks

- frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11

Passed
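
To illustrate the one warning, a hypothetical restructuring is sketched below. The `author` and `version` keys are invented for this example (the actual offending keys are not listed in the report), and the move under a `metadata` block follows the validator's own suggestion:

```yaml
# Before: unrecognized top-level keys trigger frontmatter_unknown_keys
---
name: residency-interview-prep
description: Mock interview preparation tool for residency Match interviews.
author: aipoch      # hypothetical unknown key
version: 1.0        # hypothetical unknown key
---

# After: extra data nested under metadata, as the warning suggests
---
name: residency-interview-prep
description: Mock interview preparation tool for residency Match interviews.
metadata:
  author: aipoch
  version: 1.0
---
```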

Repository: aipoch/medical-research-skills (Reviewed)

