
deep-interview

Socratic deep interview with mathematical ambiguity gating before autonomous execution


Quality: 35% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/deep-interview/SKILL.md

Quality

Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is highly abstract and jargon-laden, failing to communicate concrete capabilities or use cases. It reads like internal technical terminology rather than a functional description that would help Claude select the right skill. Without understanding what actions this skill performs or when to use it, Claude cannot reliably choose this skill.

Suggestions

- Replace abstract jargon with concrete actions (e.g., 'Asks clarifying questions to understand requirements before executing complex tasks').
- Add an explicit 'Use when...' clause with natural trigger terms users would actually say (e.g., 'Use when the user's request is ambiguous or requires clarification before proceeding').
- Specify the domain or task type this skill applies to (e.g., code generation, data analysis, research) to help distinguish it from other skills.
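Taken together, these suggestions might yield frontmatter along these lines (an illustrative sketch; the field layout follows common SKILL.md conventions and the exact wording is an assumption, not part of this review):

```yaml
---
name: deep-interview
description: >
  Asks structured clarifying questions to pin down requirements before
  executing complex coding tasks, and only proceeds to autonomous
  execution once ambiguity is low. Use when the user's request is vague,
  underspecified, or needs clarification before work begins.
---
```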

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses abstract, jargon-heavy language ('Socratic deep interview', 'mathematical ambiguity gating', 'autonomous execution') without explaining concrete actions. No specific capabilities are listed. | 1 / 3 |
| Completeness | Missing both a clear 'what' and a clear 'when'. There is no 'Use when...' clause, and the description fails to explain what the skill actually does or when Claude should select it. | 1 / 3 |
| Trigger Term Quality | Contains technical jargon that users would never naturally say. Terms like 'mathematical ambiguity gating' and 'Socratic deep interview' are not natural user language for any common task. | 1 / 3 |
| Distinctiveness / Conflict Risk | The unusual terminology ('Socratic deep interview', 'mathematical ambiguity gating') is distinctive enough to avoid conflicts with common skills, but it's unclear what domain this serves, making proper selection difficult. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides exceptionally detailed and actionable guidance for conducting Socratic interviews with mathematical ambiguity scoring. The workflow is well structured, with clear validation gates and feedback loops. However, the skill is severely bloated: it explains concepts Claude already knows, repeats the 3-stage pipeline explanation multiple times, and includes extensive rationale sections that add no operational value. The content would be equally effective at 30-40% of its current length.
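As a rough illustration of the gating mechanic this review describes, the sketch below computes a weighted ambiguity score and gates execution on it. The dimension names and weights are invented for the example; only the "ambiguity threshold ≤ 0.2" gate comes from this review, and the skill's actual formula is not reproduced here.

```python
# Illustrative sketch of weighted ambiguity gating.
# WEIGHTS and dimension names are assumptions; the 0.2 threshold is the
# gate this review mentions for the skill under review.

WEIGHTS = {"scope": 0.4, "constraints": 0.35, "success_criteria": 0.25}
THRESHOLD = 0.2

def ambiguity_score(dimensions: dict[str, float]) -> float:
    """Weighted sum of per-dimension ambiguity values, each in [0, 1]."""
    return sum(WEIGHTS[k] * dimensions[k] for k in WEIGHTS)

def may_execute(dimensions: dict[str, float]) -> bool:
    """Proceed to autonomous execution only when ambiguity <= THRESHOLD."""
    return ambiguity_score(dimensions) <= THRESHOLD

# After several clarifying rounds, scores should fall below the gate:
state = {"scope": 0.1, "constraints": 0.2, "success_criteria": 0.1}
print(may_execute(state))  # 0.4*0.1 + 0.35*0.2 + 0.25*0.1 = 0.135 -> True
```

In this scheme the interview loop would re-score after every answer and exit only once `may_execute` returns true, matching the entry/exit conditions the review credits under Workflow Clarity.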

Suggestions

- Remove the <Why_This_Exists> section entirely; Claude doesn't need motivation for following instructions.
- Consolidate the 3-stage pipeline explanation into a single location instead of repeating it in the Steps, Examples, and Advanced sections.
- Move the <Advanced> configuration, integration details, and interpretation tables to a separate DEEP-INTERVIEW-REFERENCE.md file.
- Cut explanatory prose like 'The context window is a public good' and 'AI can build anything. The hard part is knowing what to build'; these lines waste tokens without adding actionable guidance.
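The progressive-disclosure suggestion amounts to replacing the inline <Advanced> content with a link to a reference file. A minimal sketch of what the relevant SKILL.md section might contain instead (the section heading and filename follow the review's own suggestion; the exact wording is an assumption):

```markdown
## Advanced

Configuration options, integration details, and interpretation tables are
documented in [DEEP-INTERVIEW-REFERENCE.md](./DEEP-INTERVIEW-REFERENCE.md).
```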

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~500+ lines, with extensive explanations of concepts Claude already understands (what Socratic questioning is, why clarity matters, detailed rationale sections). The <Why_This_Exists> section, extensive pipeline diagrams, and repeated explanations of the same concepts (the 3-stage pipeline is explained multiple times) waste significant tokens. | 1 / 3 |
| Actionability | Provides highly concrete, executable guidance: specific JSON state structures, exact scoring formulas with weights, complete spec file templates, precise prompt injections for challenge modes, and detailed question-targeting strategies with examples. The scoring calculations and file output formats are copy-paste ready. | 3 / 3 |
| Workflow Clarity | Excellent multi-phase workflow with explicit validation gates (ambiguity threshold ≤ 0.2), clear sequencing (Phases 1-5), feedback loops (scoring after every answer, challenge modes at specific rounds), and explicit checkpoints (a soft warning at round 10, a hard cap at 20). The interview loop has clear entry and exit conditions. | 3 / 3 |
| Progressive Disclosure | Content is structured with phases and sections, but everything is inline in one massive file. The <Advanced> section contains configuration and integration details that could be separate files. No external references are provided despite the complexity warranting them (e.g., separate files for spec templates, scoring rubrics, or integration guides). | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (627 lines); consider splitting content into references/ and linking to it. | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under metadata. | Warning |
| Total | | 9 / 11 |

Passed
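The frontmatter_unknown_keys warning is typically resolved by moving nonstandard keys under a metadata block. A sketch of that fix (the `version` key is hypothetical; this report does not list which keys are actually offending):

```yaml
---
name: deep-interview
description: Socratic deep interview with mathematical ambiguity gating before autonomous execution
metadata:
  version: 1.0  # hypothetical example: nonstandard top-level keys move here
---
```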

Repository: Yeachan-Heo/oh-my-claudecode (Reviewed)

