Execute this skill optimizes prompts for large language models (llms) to reduce token usage, lower costs, and improve performance. it analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
Overall score: 13%
Evals: Pending (no eval scenarios have been run)
Validation: Passed
Known issues: none
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/packages/ai-ml-engineering-pack/skills/optimizing-prompts/SKILL.md

Quality
Discovery — 27%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific domain (LLM prompt optimization) but is undermined by truncation, overly generic trigger terms, and a 'Use when' clause that doesn't accurately scope the skill. The trigger terms 'optimize', 'performance', and 'speed up' would cause frequent false matches with unrelated skills. The description also begins awkwardly with 'Execute this skill' which is unnecessary filler.
Suggestions
Replace generic trigger terms with domain-specific ones like 'prompt optimization', 'reduce tokens', 'token usage', 'prompt engineering', 'LLM prompt', 'prompt compression', 'shorten prompt'.
Rewrite the 'Use when' clause to be specific: 'Use when the user wants to optimize, shorten, or reduce token usage in LLM prompts, or mentions prompt engineering and cost reduction.'
Remove the 'Execute this skill' prefix and fix the truncation to ensure the full list of concrete actions is visible.
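Taken together, the suggestions above could yield frontmatter along these lines (the field names assume the standard SKILL.md layout; the exact wording is illustrative, not a definitive fix):

```yaml
---
name: optimizing-prompts
description: >
  Optimizes LLM prompts to reduce token usage and cost: analyzes a prompt,
  removes redundancy, condenses instructions, and rewrites it more concisely.
  Use when the user wants to optimize, shorten, or reduce token usage in LLM
  prompts, or mentions prompt engineering, prompt compression, or cost reduction.
---
```

Note that every trigger term here is scoped to the prompt-optimization domain, so it no longer collides with code or database performance skills.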
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (prompt optimization for LLMs) and lists some actions (analyzes the prompt, identifies areas for simplification and redundancy removal, rewrites the prompt), but the truncation ('more conci...') undermines completeness of the action list. It's more specific than vague but not fully comprehensive. | 2 / 3 |
| Completeness | It answers 'what' (optimizes prompts for LLMs to reduce token usage) and has an explicit 'Use when' clause, but the 'when' is too generic ('optimizing performance') and doesn't accurately scope to the actual use case. The truncated description also hurts completeness. | 2 / 3 |
| Trigger Term Quality | The trigger terms 'optimize', 'performance', and 'speed up' are overly generic and would match many unrelated tasks (code optimization, database performance, etc.). Key natural terms like 'prompt engineering', 'reduce tokens', 'token usage', 'LLM prompt', 'prompt compression' are missing. The triggers are misleading for the actual skill. | 1 / 3 |
| Distinctiveness / Conflict Risk | The trigger terms 'optimize', 'performance', and 'speed up' are extremely generic and would conflict with code optimization, database tuning, image optimization, and many other skills. The description fails to carve out a distinct niche despite the actual skill being quite specific (LLM prompt optimization). | 1 / 3 |
| Total | | 6 / 12 Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely boilerplate and filler content with no actionable substance. It explains what prompt optimization is (which Claude already knows) without providing any concrete techniques, patterns, heuristics, or executable examples. The examples merely describe what would happen rather than demonstrating how to do it, and multiple sections (Prerequisites, Instructions, Output, Error Handling, Resources) contain only generic placeholder text.
Suggestions
Replace the abstract descriptions with concrete, actionable optimization techniques (e.g., specific patterns for removing redundancy, rules for condensing instructions, token-counting approaches, before/after examples with actual token counts).
Remove all boilerplate sections that add no value: 'Prerequisites', 'Instructions', 'Output', 'Error Handling', 'Resources', and the redundant 'Overview' section. Cut the content to only what Claude doesn't already know.
Provide executable examples showing the actual transformation process with specific rules applied, rather than describing what 'the skill will' do in third person.
Add concrete validation criteria (e.g., 'verify the optimized prompt produces equivalent output quality by testing against the original') instead of vague guidance like 'iterate on the optimized prompt'.
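To illustrate the kind of concrete, executable content the skill should contain, here is a minimal sketch of mechanical prompt compression. The filler-phrase list and the rough 4-characters-per-token estimate are assumptions for illustration, standing in for a real tokenizer and a curated rule set:

```python
import re

# Illustrative filler phrases; a real skill would curate and extend this list.
FILLER_PHRASES = [
    r"\bplease\b", r"\bkindly\b", r"\bin order to\b",
    r"\bit is important to note that\b", r"\bmake sure to\b",
]

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def compress_prompt(prompt: str) -> str:
    """Remove filler phrases and collapse redundant whitespace."""
    out = prompt
    for pattern in FILLER_PHRASES:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    out = re.sub(r"[ \t]+", " ", out)     # collapse runs of spaces/tabs
    out = re.sub(r"\n{3,}", "\n\n", out)  # cap consecutive blank lines
    return out.strip()

before = "Please make sure to  summarize the text.  In order to do so, be brief."
after = compress_prompt(before)
print(estimate_tokens(before), "->", estimate_tokens(after))
```

Pairing each such transformation with before/after token counts, as suggested above, gives the agent a measurable target rather than vague guidance.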
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and padded with unnecessary sections. Explains concepts Claude already knows (what prompt optimization is, how LLMs work). Sections like 'Overview', 'How It Works', 'When to Use', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all filler that add no actionable value. The content repeats itself multiple times (the overview restates the title, 'How It Works' restates the overview). | 1 / 3 |
| Actionability | No concrete, executable guidance whatsoever. The examples describe what 'the skill will' do in abstract terms rather than providing actual techniques, code, or specific rewriting rules. There are no concrete patterns, heuristics, token-counting methods, or executable steps Claude could follow to actually optimize a prompt. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists three vague steps (analyze, rewrite, suggest) with no specifics on how to perform any of them. No validation checkpoints, no concrete criteria for what constitutes redundancy, no measurable targets. The 'Instructions' section is completely generic boilerplate ('invoke this skill when trigger conditions are met'). | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with many sections that contain no real content. References to 'prompt-architect' agent and 'llm-integration-expert' are vague with no links or paths. 'Resources' section just says 'Project documentation' and 'Related skills and commands' with no actual references. No meaningful structure or navigation. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
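Both warnings point at the frontmatter. A sketch of how they might be resolved (the tool names and metadata keys here are assumptions; use whatever the target agent actually supports):

```yaml
---
name: optimizing-prompts
description: Optimizes LLM prompts to reduce token usage and cost.
allowed-tools: Read, Write, Edit  # stick to the agent's standard tool names
metadata:
  # unknown top-level keys moved here, or removed if unused
  category: ai-ml-engineering
---
```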