
# tdg-personal/prompt-optimizer

Overall score: 84

- Quality: 84% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Passed (no known issues)


## Quality

### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an exceptionally well-crafted skill description that excels across all dimensions. It provides specific concrete actions, comprehensive trigger terms in both English and Chinese, explicit positive and negative trigger conditions, and clear boundaries that distinguish it from related skills like code optimization. The inclusion of DO NOT TRIGGER clauses is a best practice that minimizes false activation.
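The value of pairing positive trigger phrases with explicit DO NOT TRIGGER phrases can be sketched with a toy matcher. The phrase lists below are illustrative examples drawn from the review, not the skill's actual lists:

```python
# Toy discovery matcher: a request activates the skill only if it contains
# a positive trigger phrase and no negative (DO NOT TRIGGER) phrase.
# Phrase lists here are illustrative, not the skill's actual lists.
POSITIVE_TRIGGERS = ["optimize prompt", "improve my prompt", "rewrite this prompt"]
NEGATIVE_TRIGGERS = ["optimize code", "optimize performance", "just do it"]

def should_activate(request: str) -> bool:
    text = request.lower()
    # Negative triggers win: they veto activation even when a positive
    # phrase is also present, which is what minimizes false activation.
    if any(phrase in text for phrase in NEGATIVE_TRIGGERS):
        return False
    return any(phrase in text for phrase in POSITIVE_TRIGGERS)
```

Under this shape, a request like "optimize performance of this loop" is rejected even though it contains "optimize", which is exactly the false-activation risk the DO NOT TRIGGER clauses guard against.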

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Analyze raw prompts, identify intent and gaps, match ECC components (skills/commands/agents/hooks), and output a ready-to-paste optimized prompt.' Also specifies the advisory-only constraint. | 3 / 3 |
| Completeness | Clearly answers both 'what' (analyze prompts, identify intent/gaps, match ECC components, output optimized prompt) and 'when' (explicit TRIGGER and DO NOT TRIGGER clauses with specific phrases). The explicit trigger guidance is thorough and well-structured. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'optimize prompt', 'improve my prompt', 'how to write a prompt for', 'help me prompt', 'rewrite this prompt', plus Chinese equivalents. Also includes explicit negative triggers to avoid false matches like 'optimize code' or 'optimize performance'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with explicit DO NOT TRIGGER clauses that disambiguate from code optimization, performance tuning, and direct task execution. The negative triggers ('optimize code' in Chinese, 'optimize performance', 'just do it') significantly reduce conflict risk with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Result: Passed

### Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is highly actionable with excellent workflow clarity: the 6-phase pipeline is well structured, with clear decision points and validation gates. However, it is severely over-engineered for a file that is loaded into the agent's context window: extensive lookup tables, three full examples, and tech-stack catalogs that Claude could reason about without explicit enumeration consume excessive tokens. The content would benefit significantly from externalizing the reference tables and examples into separate files.

Suggestions:

- Extract the large lookup tables (Phase 1 intent classification, Phase 3 ECC component matching by intent and tech stack, Phase 5 model recommendations) into a separate REFERENCE.md file and link to it, reducing the main skill by ~100 lines.
- Move the three full examples into an EXAMPLES.md file; keep at most one brief inline example and reference the rest.
- Remove model version numbers (Sonnet 4.6, Opus 4.6), which are time-sensitive; replace them with generic tiers like 'fast model' and 'reasoning model', or place them in a clearly marked configuration section.
- Trim the Phase 0 tech-stack detection list; Claude already knows how to detect project types from manifest files, so a single sentence like 'Detect the tech stack from project manifest files' suffices.
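The externalization the suggestions describe can be sketched as a small refactoring helper. The section headings and target filenames below are assumptions for illustration, not the skill's actual structure:

```python
# Hypothetical sketch: split a monolithic skill body into a slim SKILL.md
# plus external reference files, leaving a link where each moved section was.
# Heading names and target filenames are illustrative assumptions.
SECTION_TARGETS = {
    "## Reference Tables": "REFERENCE.md",
    "## Examples": "EXAMPLES.md",
}

def externalize(skill_text: str) -> dict:
    """Return {filename: content}; moved sections become links in SKILL.md."""
    files = {"SKILL.md": ""}
    current = "SKILL.md"
    for line in skill_text.splitlines(keepends=True):
        heading = line.strip()
        if heading in SECTION_TARGETS:
            # Replace the section with a link and redirect following lines.
            target = SECTION_TARGETS[heading]
            files["SKILL.md"] += f"{heading}: see [{target}]({target})\n"
            files.setdefault(target, "")
            current = target
            continue
        if heading.startswith("## "):
            # Any other top-level section stays in the main skill file.
            current = "SKILL.md"
        files[current] += line
    return files
```

The point of the sketch is the shape of the result: SKILL.md keeps the workflow and one link per externalized section, so the bulk of the tokens are only read when the agent actually needs them.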

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~350+ lines. Contains extensive tables that Claude already knows how to reason about (intent classification, signal words), explains basic concepts like what REST endpoints are, and includes large lookup tables for tech stack detection that could be summarized or externalized. The model recommendation table with specific version numbers (Sonnet 4.6, Opus 4.6) adds time-sensitive bloat. | 1 / 3 |
| Actionability | Highly actionable with a concrete 6-phase pipeline, specific output format with exact section structure, complete examples showing input→output transformations, and clear decision tables mapping intents to specific commands/skills/agents. The examples are copy-paste ready and demonstrate real usage. | 3 / 3 |
| Workflow Clarity | The 6-phase pipeline (Phase 0-5) is clearly sequenced with explicit checkpoints. Phase 4 includes a validation gate ('if 3+ critical items are missing, ask clarification questions before proceeding'). The workflow includes feedback loops and clear decision points at each phase, with scope-based branching for different complexity levels. | 3 / 3 |
| Progressive Disclosure | The skill is monolithic: all content is inline in a single file, with no references to external files for the extensive lookup tables (tech stack matching, intent classification, ECC component mapping). The Related Components table at the end provides some navigation, but the massive inline tables for Phases 1-3 could be externalized. The examples section alone is ~100 lines that could live in a separate file. | 2 / 3 |
| Total | | 9 / 12 |

Result: Passed

### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Result: Passed
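The `frontmatter_unknown_keys` check can be approximated as below. The allowed-key set is an illustrative assumption, not the actual skill spec:

```python
# Sketch of a check like `frontmatter_unknown_keys`.
# ALLOWED_KEYS is an illustrative guess at the spec'd keys, not the real list.
ALLOWED_KEYS = {"name", "description", "license", "metadata"}

def unknown_frontmatter_keys(frontmatter: dict) -> list:
    """Return top-level frontmatter keys that are not part of the assumed spec."""
    return sorted(k for k in frontmatter if k not in ALLOWED_KEYS)
```

Under this model, the warning clears either by deleting the unknown key or by nesting it under `metadata`, which is the fix the validator's message suggests.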

Reviewed
