
interview-mock-partner

Simulates behavioral interview questions for medical professionals.

Quality: 16%. Does it follow best practices?

Impact: 98% (1.58x). Average score across 3 eval scenarios.

Security by Snyk: Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/interview-mock-partner/SKILL.md"

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear niche (behavioral interviews for medical professionals) but is too terse to be effective for skill selection. It lacks a 'Use when...' clause, specific concrete actions beyond 'simulates,' and natural trigger term variations that users would employ when seeking this type of help.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks for mock interviews, interview practice, or behavioral question prep for healthcare, nursing, physician, or medical residency roles.'

List specific concrete actions such as 'Generates STAR-format behavioral interview questions, provides sample answers, evaluates user responses, and offers feedback tailored to medical professional roles.'

Include natural keyword variations users might say: 'mock interview,' 'interview prep,' 'healthcare interview,' 'nursing interview,' 'residency interview,' 'STAR method.'
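Taken together, these suggestions might yield frontmatter along the following lines. This is an illustrative sketch, not the skill's actual metadata; the wording and field values are invented to show the pattern:

```yaml
---
name: interview-mock-partner
description: >
  Simulates behavioral interviews for medical professionals: generates
  STAR-format questions, evaluates user responses, and gives structured
  feedback. Use when the user asks for a mock interview, interview practice,
  or behavioral question prep for healthcare, nursing, physician, or medical
  residency roles.
---
```

Note how the description packs in the concrete actions, the 'Use when...' clause, and the natural keyword variations in one place, which is what the discovery rubric rewards.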

Dimension | Reasoning | Score

Specificity

Names the domain (behavioral interview questions for medical professionals) and one action (simulates), but doesn't list specific concrete actions like generating questions, providing feedback, scoring responses, or offering sample answers.

2 / 3

Completeness

Describes what it does (simulates behavioral interview questions) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also thin, warranting a 1.

1 / 3

Trigger Term Quality

Includes some relevant keywords like 'behavioral interview questions' and 'medical professionals,' but misses common variations users might say such as 'mock interview,' 'practice interview,' 'healthcare,' 'nursing,' 'doctor interview prep,' 'STAR method,' or 'residency interview.'

2 / 3

Distinctiveness & Conflict Risk

The combination of 'behavioral interview' and 'medical professionals' provides some distinctiveness, but it could overlap with general interview prep skills or medical education skills without clearer scoping or trigger terms.

2 / 3

Total: 7 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is essentially an empty template with no actionable content. It describes a tool for simulating medical interview questions but provides zero concrete guidance—no example questions, no interview frameworks, no sample interactions, and no workflow. The majority of the content is boilerplate (risk assessment, security checklist, lifecycle status) that wastes tokens without adding value.

Suggestions

Replace boilerplate sections with actual interview content: example behavioral questions for medical positions, sample STAR-method responses, and specific feedback criteria.

Add a clear workflow: e.g., 1) Select position/specialty, 2) Generate questions using provided question bank, 3) Evaluate responses against criteria, 4) Provide structured feedback.

Include concrete examples of questions and model answers for at least one medical specialty to make the skill actionable.

Remove the Risk Assessment, Security Checklist, Evaluation Criteria, and Lifecycle Status sections—these are template noise that consume tokens without helping Claude perform the task.
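As a sketch of what such a workflow section could look like inside SKILL.md (the example question and criteria below are invented for illustration, not content from the skill):

```markdown
## Workflow
1. Ask the user for their target role and specialty (e.g., nursing, residency).
2. Generate one behavioral question at a time from the question bank below.
3. After each answer, evaluate it against the STAR criteria and give feedback.
4. Repeat until the user ends the session, then summarize strengths and gaps.

## Example question (nursing)
"Tell me about a time you disagreed with a physician's order. What did you do?"

## Feedback criteria
- Situation/Task: Is the clinical context specific?
- Action: Did the candidate describe their own steps, not the team's?
- Result: Is there a patient-safety or outcome takeaway?
```

Even a single concrete question bank and rubric like this would move the Actionability and Workflow Clarity scores, since the agent then has a process to follow rather than a bare capability statement.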

Dimension | Reasoning | Score

Conciseness

The skill is heavily padded with boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that add no actionable value. The features list is vague filler. Much of the content is template noise rather than useful instruction.

1 / 3

Actionability

There is no concrete guidance on how to actually simulate interview questions. No example questions, no sample interactions, no executable code, no specific behavioral frameworks (like STAR method). The skill describes what it does but never instructs how to do it.

1 / 3

Workflow Clarity

There is no workflow whatsoever—no steps for conducting the mock interview, no sequence for generating questions, providing feedback, or iterating. The skill is purely declarative with no process guidance.

1 / 3

Progressive Disclosure

The content is a monolithic block of boilerplate sections with no meaningful structure for discovery. There are no references to supplementary files, and the sections that exist (Risk Assessment, Security Checklist, Lifecycle Status) are irrelevant template filler rather than organized content.

1 / 3

Total: 4 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
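A typical fix for the `frontmatter_unknown_keys` warning is to move non-standard keys under a `metadata` block. Assuming the spec recognizes only fields like `name` and `description` at the top level (the offending keys below are hypothetical, since the report does not name them), the change would look something like:

```yaml
---
name: interview-mock-partner
description: Simulates behavioral interview questions for medical professionals.
metadata:
  author: aipoch            # previously a top-level key (hypothetical example)
  category: Academic Writing
---
```

The exact set of recognized top-level keys depends on the skill spec, so check the validator's output before relocating fields.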

Repository: aipoch/medical-research-skills (Reviewed)
