When the user needs to design an interview process, create interview questions, build scorecards, calibrate interviewers, or evaluate candidates for a role.
A complete interview kit document containing:
Each score level must include 1-2 concrete behavioral anchors specific to the role being evaluated.
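As a sketch, a scorecard entry with anchored levels can be kept as plain data so the "1-2 concrete anchors per level" rule is checkable; the competency and anchor wording below are hypothetical examples, not prescribed text:

```python
# Hypothetical scorecard entry: a 1-4 scale where each level carries
# role-specific behavioral anchors rather than bare labels like "good".
scorecard_item = {
    "competency": "System design & architecture",
    "levels": {
        1: ["Proposed a design with no consideration of scale or failure modes"],
        2: ["Produced a workable design but could not justify key trade-offs"],
        3: ["Designed for the stated scale and explained trade-offs clearly"],
        4: ["Anticipated failure modes unprompted and compared viable alternatives"],
    },
}

def has_valid_anchors(item):
    """Check the rule above: every score level has 1-2 concrete anchors."""
    return all(1 <= len(anchors) <= 2 for anchors in item["levels"].values())
```

Keeping anchors as data (rather than prose buried in a doc) also makes it easy to hand every interviewer an identical scorecard.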
Structure behavioral questions to elicit complete, pattern-revealing answers:
Example: "Tell me about a time you had to ship something with significant technical debt. What was the situation, what did you decide, and how did it play out? Would you make the same call again?"
Use three inputs to triangulate:
Always present comp as a range with a target midpoint, not a single number.
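A minimal sketch of that rule, assuming a symmetric 10% band around the target midpoint (the spread is illustrative, not a recommendation):

```python
def comp_range(target_midpoint, spread=0.10):
    """Return (low, target, high) so comp is always presented as a range
    with a target midpoint, never as a single number."""
    low = round(target_midpoint * (1 - spread))
    high = round(target_midpoint * (1 + spread))
    return low, target_midpoint, high

# comp_range(180_000) -> (162000, 180000, 198000)
```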
- job-description — Use the JD's competency requirements as input for the interview loop
- sourcing-outreach — Align outreach messaging with the interview process so candidates know what to expect

Prompt: "Design an interview loop for a senior backend engineer. 15-person startup."
Good output snippet:
## Interview Loop — Senior Backend Engineer
### Competencies to Evaluate
1. System design & architecture (technical depth)
2. Code quality & testing practices (craft)
3. Debugging & production thinking (operational maturity)
4. Technical communication (collaboration)
5. Ownership & initiative (startup fit)
### Stage 1: Founder Screen (30 min)
- Evaluate: Motivation, communication, logistics
- Questions:
- "What's drawing you to an early-stage company right now?"
- "Walk me through the most impactful project you led in the last year."
- Scorecard: 1-4 on communication, motivation, startup-fit
### Stage 2: Technical Deep-Dive (60 min)
- Evaluate: System design, code quality
- Format: Live system design discussion + code review exercise
- Scorecard: 1-4 on architecture thinking, code craft, trade-off reasoning

Prompt: "Our interviewers keep disagreeing on candidates."
Good output snippet:
This usually means you lack structured evaluation criteria. Three-step fix:
1. Define 4-5 competencies per role with written behavioral descriptions
2. Give each interviewer a scorecard to fill out independently BEFORE debrief
3. In the debrief, discuss only scores that diverge by 2+ points — focus on evidence, not impressions
The goal is calibrated, evidence-based evaluation — not consensus.
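The step-3 divergence rule can be sketched as a check over independently filled-out scorecards; the interviewer names, competencies, and scores below are hypothetical:

```python
def diverging_competencies(scorecards, threshold=2):
    """Return competencies whose independent scores differ by >= threshold points.

    `scorecards` maps interviewer -> {competency: score on the 1-4 scale}.
    Only flagged competencies go on the debrief agenda; the rest are
    treated as already calibrated.
    """
    competencies = set().union(*(s.keys() for s in scorecards.values()))
    flagged = {}
    for comp in competencies:
        scores = [s[comp] for s in scorecards.values() if comp in s]
        if max(scores) - min(scores) >= threshold:
            flagged[comp] = scores
    return flagged

scores = {
    "interviewer_a": {"system design": 4, "communication": 3},
    "interviewer_b": {"system design": 2, "communication": 3},
}
# "system design" diverges by 2 points -> discuss the evidence;
# "communication" agrees -> no debrief time needed.
```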