
clarify

Adaptive thinking partner that helps clarify, challenge, and refine ideas through persistent questioning. Auto-detects domain (product, architecture, debugging, process, general) and user mode (exploring, deciding, refining) to adapt question style. Actively pushes back on weak reasoning — flags contradictions, challenges assumptions, stress-tests claims. Produces context-appropriate artifacts when done (design doc, hypothesis list, decision matrix, or key insights). Use this skill when: (1) brainstorming or exploring an idea before implementation, (2) requirements are vague and need clarification, (3) making architectural or product decisions, (4) debugging and need to form hypotheses, (5) refining an approach that's mostly decided. Triggers on: 'brainstorm', 'clarify', 'think through', 'explore', 'help me figure out', 'what should I consider', 'let's think about', 'what could go wrong', 'help me decide'.

Overall score: 87

Quality

84%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted description that excels in specificity, trigger term coverage, and completeness. It clearly articulates what the skill does, when to use it, and includes natural trigger phrases. The main weakness is its broad, cross-domain scope, which could cause it to conflict with more specialized skills in areas like architecture, debugging, or product decisions.

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions: clarify/challenge/refine ideas through questioning, auto-detects domain and user mode, pushes back on weak reasoning, flags contradictions, challenges assumptions, stress-tests claims, and produces specific artifact types (design doc, hypothesis list, decision matrix, key insights).

3 / 3

Completeness

Clearly answers both 'what does this do' (adaptive thinking partner that clarifies, challenges, refines ideas, produces artifacts) AND 'when should Claude use it' with an explicit 'Use this skill when:' clause listing five scenarios plus a 'Triggers on:' section with nine trigger phrases.

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms users would actually say: 'brainstorm', 'clarify', 'think through', 'explore', 'help me figure out', 'what should I consider', 'let's think about', 'what could go wrong', 'help me decide'. These are highly natural phrases.

3 / 3

Distinctiveness / Conflict Risk

While the description is detailed, terms like 'brainstorm', 'explore', 'think through', and 'help me decide' are quite broad and could overlap with many other skills (e.g., general coding assistance, product management skills, architecture skills). The skill's scope spans multiple domains (product, architecture, debugging, process) which increases conflict risk.

2 / 3

Total: 11 / 12

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, well-crafted skill that provides genuinely actionable guidance for an adaptive thinking partner. Its main weakness is length — the extensive question arsenals and example phrasings, while useful, make it token-heavy for every conversation where it's loaded. The workflow is exceptionally clear with good phase transitions and behavioral guardrails.

Suggestions

Extract the Question Arsenal section into a separate QUESTIONS.md reference file and link to it, keeping only 1-2 examples per category inline

Condense the artifact templates into a separate ARTIFACTS.md with full details, keeping only the selection table in the main skill
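Applied together, these suggestions would produce a leaner SKILL.md that keeps only inline essentials and links out for detail. A rough sketch of what that structure might look like — QUESTIONS.md and ARTIFACTS.md are the hypothetical reference files named in the suggestions above, and the sample questions are illustrative, not taken from the skill:

```markdown
## Question Arsenal

One or two examples per category inline, for instance:

- Architecture: "What breaks first if load grows 10x?"
- Debugging: "What changed most recently?"

Full question list per domain: see [QUESTIONS.md](QUESTIONS.md).

## Artifacts

| Domain    | Artifact        |
| --------- | --------------- |
| Product   | Design doc      |
| Debugging | Hypothesis list |
| Decisions | Decision matrix |
| General   | Key insights    |

Full templates: see [ARTIFACTS.md](ARTIFACTS.md).
```

This keeps the selection table and a couple of representative prompts in the always-loaded file, while the token-heavy detail loads only on demand.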

Dimension | Reasoning | Score

Conciseness

The skill is well-written and avoids explaining concepts Claude already knows, but it's quite long (~200 lines) with extensive question arsenals and examples that could be tightened. The question examples are useful but verbose — many could be condensed into patterns rather than enumerated. The anti-patterns table adds value but some entries are obvious.

2 / 3

Actionability

Highly actionable throughout — provides specific opening questions per domain, concrete examples of good vs bad challenge phrasing, explicit artifact templates per domain, and clear behavioral rules. The question arsenal gives copy-paste-ready prompts, and the anti-patterns table provides concrete fixes.

3 / 3

Workflow Clarity

The four-phase workflow (Detect → Establish → Deepen/Challenge → Artifact) is clearly sequenced with explicit transitions. Cadence guidance (1 question early, 2-3 later) and mode re-detection create natural checkpoints. Stop signals are explicitly defined, and the anti-patterns table serves as a validation checklist against common failure modes.

3 / 3

Progressive Disclosure

The content is well-structured with clear headers and tables, but it's entirely monolithic — all content lives in one file. The question arsenal, artifact templates, and anti-patterns could each be separate reference files linked from a leaner overview. For a skill this long, splitting would improve token efficiency.

2 / 3

Total: 10 / 12

Passed
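The four-phase workflow praised above (Detect → Establish → Deepen/Challenge → Artifact) can be modeled as a small state machine. This is an illustrative sketch of the review's description, not code from the skill itself; the phase names, stop-signal phrases, and cadence numbers are assumptions:

```python
# Sketch of the reviewed workflow: advance Detect -> Establish -> Deepen -> Artifact,
# looping in Deepen until the user emits a stop signal. All names are hypothetical.

PHASES = ["detect", "establish", "deepen", "artifact"]

STOP_SIGNALS = {"that covers it", "let's wrap up", "write it up"}


def next_phase(current: str, user_message: str) -> str:
    """Advance through the workflow; stay in 'deepen' until a stop signal."""
    if current == "deepen" and user_message.lower() not in STOP_SIGNALS:
        return "deepen"  # keep questioning until the user signals done
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]


def questions_this_turn(phase: str) -> int:
    """Cadence guidance from the review: 1 question early, 2-3 once deep."""
    return 1 if phase in ("detect", "establish") else 3
```

The explicit stop signals and per-phase cadence are what give the workflow the "natural checkpoints" the reviewer credits it with.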

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: mayank-arora/agent-skills (Reviewed)

