
create-command

Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration


Quality

30%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/customaize-agent/skills/create-command/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies its domain (Claude command creation) and mentions some relevant technical concepts, but lacks concrete action verbs, explicit trigger guidance ('Use when...'), and natural user-facing keywords. It reads more like a tagline than a functional description that would help Claude reliably select this skill from a large pool.

Suggestions

Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user wants to create a new slash command, build a custom skill, or scaffold a .md command file.'

List specific concrete actions such as 'generates YAML frontmatter, creates markdown command files, configures MCP tool declarations, and validates command structure.'

Include natural keyword variations users might say: 'slash command', 'custom command', 'new skill', 'command template', '.md file'.
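Taken together, these suggestions might produce frontmatter like the following. This is a hypothetical sketch of an improved description, not the skill's actual metadata:

```yaml
# Hypothetical SKILL.md frontmatter applying the suggestions above
name: create-command
description: >
  Creates new Claude slash commands: generates YAML frontmatter, creates
  markdown command files, configures MCP tool declarations, and validates
  command structure. Use when the user wants to create a new slash command,
  build a custom command, scaffold a .md command file, or set up a
  command template.
```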

Dimension | Reasoning | Score

Specificity

Names the domain ('Claude commands') and mentions some aspects ('proper structure, patterns, MCP tool integration'), but doesn't list specific concrete actions like 'generate YAML frontmatter, create markdown templates, configure tool permissions'.

2 / 3

Completeness

Describes what it does ('creating new Claude commands with proper structure...') but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2; since the 'what' itself is also somewhat vague, this lands at 1.

1 / 3

Trigger Term Quality

Includes relevant terms like 'Claude commands', 'MCP tool integration', and 'patterns', but misses common user phrasings like 'slash command', 'custom command', '.md skill file', or 'new skill'. Users might say 'create a command' or 'write a skill' which aren't well covered.

2 / 3

Distinctiveness / Conflict Risk

The mention of 'Claude commands' and 'MCP tool integration' provides some distinctiveness, but 'interactive assistant' is generic and could overlap with other helper/scaffolding skills. The niche is somewhat clear but not sharply delineated.

2 / 3

Total

7 / 12

Passed

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive but overly verbose guide to creating Claude commands. Its main strength is the detailed example session showing end-to-end command creation, but it suffers from significant redundancy (frontmatter rules are explained 3+ times), excessive explanation of concepts Claude already understands, and a monolithic structure that puts all reference material inline rather than using progressive disclosure. The workflow lacks concrete validation steps for the commands it creates.

Suggestions

Reduce content by at least 50% — eliminate redundant frontmatter explanations (currently repeated in <command_frontmatter>, <command_features>, and <generation_patterns>), remove the command categories descriptions that Claude can infer, and trim the interview process to key decision points only.

Extract the command features reference (arguments, bash execution, file references, frontmatter options table) into a separate COMMAND-FEATURES.md file and reference it, rather than inlining ~80 lines of reference documentation.

Add concrete validation steps: after command creation, include a specific check like 'Read the created file back and verify frontmatter parses correctly' or 'Run the command with test arguments to verify $ARGUMENTS substitution works'.

Consolidate the example session to be more concise — the current version narrates the thought process extensively when a before/after showing input requirements → generated command would be more token-efficient.
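The validation suggestion above (read the created file back and verify the frontmatter parses, then confirm $ARGUMENTS substitution) could be sketched roughly as follows. This is a stdlib-only illustration, not the skill's actual check; the function name and the line-level heuristics are assumptions:

```python
# Hypothetical post-creation check: verify the generated command file has a
# well-delimited frontmatter block whose lines look like 'key: value' pairs,
# and that the command body actually references $ARGUMENTS.
def validate_command_file(text: str) -> list[str]:
    problems = []
    if not text.startswith("---\n"):
        return ["missing opening frontmatter delimiter"]
    try:
        # Split off the frontmatter between the first two '---' lines
        front, body = text[4:].split("\n---\n", 1)
    except ValueError:
        return ["unterminated frontmatter block"]
    for line in front.splitlines():
        # Skip blanks, indented continuations, and YAML list items
        if line.strip() and ":" not in line and not line.startswith((" ", "\t", "-")):
            problems.append(f"frontmatter line does not parse: {line!r}")
    if "$ARGUMENTS" not in body:
        # Only relevant when the command is expected to take arguments
        problems.append("body never uses $ARGUMENTS substitution")
    return problems
```

A real implementation would use a YAML parser rather than line heuristics, but even a check this small gives the workflow the concrete feedback loop the review says is missing.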

Dimension | Reasoning | Score

Conciseness

Extremely verbose at ~300+ lines. Extensively explains concepts Claude already knows (what categories of commands are, how to ask interview questions, what frontmatter is). The frontmatter section alone repeats the same information multiple times with redundant examples. Much of this could be condensed to 1/3 the length without losing actionable content.

1 / 3

Actionability

Provides concrete examples of command frontmatter, file references, and a full example session with generated output. However, much of the guidance is procedural/conversational rather than executable — the 'interview process' phases are vague prompts rather than concrete steps, and the MCP tool references lack actual parameter schemas or usage details.

2 / 3

Workflow Clarity

The multi-phase interview process (Phases 1-5) provides a clear sequence, and the creation checklist is useful. However, there are no explicit validation checkpoints — the 'Test the Command' step at the end is vague ('Create example usage scenarios, Verify argument handling') with no concrete validation commands or feedback loops for error recovery.

2 / 3

Progressive Disclosure

Monolithic wall of text with everything inline. References external files like @/docs/claude-commands-guide.md and @/docs/organizational-structure-guide.md but no bundle files are provided. The command categories, features documentation, pattern research, interview process, and generation patterns are all crammed into a single file when much of this content (especially the command features reference and category descriptions) could be split into supporting files.

1 / 3

Total

6 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count

SKILL.md is long (563 lines); consider splitting into references/ and linking

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total

9 / 11

Passed

Repository
NeoLabHQ/context-engineering-kit
Reviewed

