
# consult

Simulates expert perspectives grounded in documented positions. Use when asking "what would [expert] say", "best practice", "panel", "debate", or needing domain guidance. Triggers on expert names, style requests, tradeoff questions, or "stuck on".

**Overall score: 84 (1.54x)**

- **Quality: 83%** — Does it follow best practices?
- **Impact: 79% (1.54x)** — Average score across 3 eval scenarios
- **Security (by Snyk): Passed** — No known issues


## Quality

### Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description has strong trigger term coverage and good completeness with an explicit 'Use when' clause. Its main weaknesses are moderate specificity (it describes a general approach rather than listing concrete actions) and some conflict risk from broad trigger terms like 'best practice' and 'stuck on' that could match many other skills.

**Suggestions**

- Add more specific concrete actions, e.g., 'Generates simulated expert panels, compares documented positions across thought leaders, synthesizes multi-perspective recommendations'
- Narrow broad triggers like 'best practice' and 'stuck on' to reduce conflict risk, e.g., qualify them as 'best practice according to named experts' or 'stuck on a design tradeoff between competing approaches'
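Applying both suggestions, a revised SKILL.md description might look like the following. This is a hypothetical sketch, not the maintainer's actual wording, and the frontmatter field names are assumed from common skill-file conventions:

```markdown
---
name: consult
description: >
  Simulates expert panels grounded in documented positions: generates
  debate transcripts, compares positions across named thought leaders,
  and synthesizes multi-perspective recommendations. Use when asking
  "what would [expert] say", requesting a panel or debate, asking for
  best practice according to named experts, or stuck on a design
  tradeoff between competing approaches. Triggers on expert names and
  style requests.
---
```

Note how the broad triggers 'best practice' and 'stuck on' are qualified in place, which should reduce the conflict risk flagged above while keeping the natural trigger phrasing.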

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain ('expert perspectives grounded in documented positions') and a general action ('simulates'), but doesn't list multiple concrete actions. It's more about the approach than specific capabilities like 'generates debate transcripts, compares expert opinions, synthesizes recommendations.' | 2 / 3 |
| Completeness | Clearly answers both 'what' (simulates expert perspectives grounded in documented positions) and 'when' (explicit 'Use when' clause with multiple trigger scenarios and terms). The when clause is detailed with specific trigger phrases. | 3 / 3 |
| Trigger Term Quality | Includes a strong set of natural trigger terms users would actually say: 'what would [expert] say', 'best practice', 'panel', 'debate', 'stuck on', 'tradeoff questions', 'style requests', and 'expert names'. These cover a good range of how users would naturally phrase such requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'expert perspectives' and 'panel/debate' are somewhat distinctive, terms like 'best practice', 'domain guidance', and 'stuck on' are quite broad and could easily overlap with general advice, coding help, or decision-making skills. The niche is moderately clear but not sharply bounded. | 2 / 3 |

**Total: 10 / 12 — Passed**

### Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill that provides clear, actionable instructions for simulating expert perspectives. Its strengths are the precise 4-step workflow with branching logic, concrete formatting specifications, and excellent progressive disclosure via external profile files. Minor verbosity in repeated constraints (especially around expert invisibility) prevents a perfect conciseness score, but overall the skill is highly effective.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient and avoids explaining concepts Claude already knows, but some sections are slightly verbose — the Presentation rules repeat the 'experts are invisible' constraint multiple times, and the detail panel format instructions could be tighter. The domain map table is appropriately dense. | 2 / 3 |
| Actionability | The skill provides highly concrete, specific guidance: exact formatting rules for detail panels (~40 chars, ALL CAPS headers, dashes for bullets), explicit forbidden patterns, a clear mode-detection table with expert counts and depth descriptions, and a precise detail panel template. Claude knows exactly what to produce. | 3 / 3 |
| Workflow Clarity | The 4-step workflow (Route → Reason → Present → Land) is clearly sequenced with explicit instructions at each step, including what NOT to output between steps. Step 4 provides clear branching logic for different user responses, creating effective feedback loops. The mode-specific reasoning in Step 2 adds appropriate checkpoints (e.g., swap experts if everyone agrees). | 3 / 3 |
| Progressive Disclosure | The skill references 74 external profile files in `profiles/` without inlining them, uses a compact domain map for routing, and mentions a blocklist config file. The SKILL.md itself serves as a clear overview with well-organized sections, keeping the main file focused on workflow and presentation rules while deferring expert-specific content to separate files. | 3 / 3 |

**Total: 11 / 12 — Passed**
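The progressive-disclosure structure scored above can be pictured as a repository layout. This tree is inferred from the review, not taken from the repository itself; the file names under `profiles/` and the blocklist path are assumptions:

```
moo.md/
├── SKILL.md        # overview: 4-step workflow, presentation rules, domain map
├── profiles/       # 74 expert profile files, referenced but not inlined
│   └── ...
└── (blocklist)     # config file mentioned by the skill; path not shown here
```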

### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation: 11 / 11 passed** (validation for skill structure). No warnings or errors.

**Repository:** saadshahd/moo.md — Reviewed

