
optimizing-prompts

Execute this skill optimizes prompts for large language models (llms) to reduce token usage, lower costs, and improve performance. it analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.

36

Quality

22%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/packages/ai-ml-engineering-pack/skills/optimizing-prompts/SKILL.md

Quality

Discovery

44%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a specific domain (LLM prompt optimization) and includes both 'what' and 'when' clauses, which is good structurally. However, it is severely undermined by overly generic trigger terms that would cause frequent false matches with unrelated skills, a truncated description that cuts off mid-word, and the awkward 'Execute this skill' phrasing. The mismatch between the specific capability and the generic triggers is the primary weakness.

Suggestions

Replace generic trigger terms with domain-specific ones like 'prompt optimization', 'reduce tokens', 'token usage', 'prompt engineering', 'shorten prompt', 'LLM cost reduction', '.prompt files'.

Fix the truncated description ('more conci...') to include the complete text, and remove the 'Execute this skill' prefix which is unnecessary and awkward.

Narrow the 'Use when' clause to specifically reference prompt-related optimization rather than general 'optimizing performance', e.g., 'Use when the user wants to optimize, shorten, or reduce token usage in LLM prompts.'
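Putting the three suggestions together, a revised SKILL.md frontmatter might look like the sketch below. This is purely illustrative: the field names follow common SKILL.md frontmatter conventions, and the description text is a hypothetical rewrite, not the completed original (whose full text is not visible in this review).

```markdown
---
name: optimizing-prompts
description: >
  Optimizes prompts for large language models (LLMs) to reduce token usage,
  lower costs, and improve performance. Analyzes a prompt, removes redundancy,
  and rewrites it to be more concise. Use when the user wants to optimize,
  shorten, or reduce token usage in LLM prompts. Trigger with phrases like
  "prompt optimization", "reduce tokens", "shorten prompt", or "token usage".
---
```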

Dimension / Reasoning / Score

Specificity

The description names the domain (prompt optimization for LLMs) and some actions (analyzes the prompt, identifies areas for simplification and redundancy removal, rewrites the prompt), but the description is truncated ('more conci...') and the actions are somewhat generic rather than listing multiple distinct concrete capabilities.

2 / 3

Completeness

The description explicitly answers both 'what does this do' (optimizes prompts for LLMs to reduce token usage, lower costs, improve performance) and 'when should Claude use it' with a 'Use when' clause and trigger phrases, even though the triggers themselves are poorly chosen.

3 / 3

Trigger Term Quality

The trigger terms 'optimize', 'performance', and 'speed up' are overly generic and would match many unrelated tasks (code optimization, database performance, etc.). It lacks natural terms users would say like 'reduce tokens', 'prompt engineering', 'shorten prompt', 'token usage', or 'LLM cost'.

1 / 3

Distinctiveness / Conflict Risk

The trigger terms 'optimize', 'performance', and 'speed up' are extremely generic and would conflict with code optimization, database tuning, web performance, and many other skills. The description fails to carve out a distinct niche despite the actual capability being fairly specific.

1 / 3

Total

7 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a low-quality, boilerplate-heavy document that provides no actionable guidance for prompt optimization. It explains obvious concepts, uses generic placeholder sections, and lacks any concrete techniques, executable examples, or measurable criteria. The content reads like an auto-generated template rather than a genuine skill that would help Claude perform prompt optimization.

Suggestions

Replace the narrative examples with concrete before/after prompt pairs showing specific optimization techniques (e.g., removing filler words, consolidating instructions, using structured output formats) with token counts.

Remove all generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no skill-specific information.

Add a concrete, actionable checklist of optimization techniques (e.g., 'Remove hedging language', 'Combine redundant instructions', 'Use bullet points instead of prose', 'Specify output format explicitly') with examples for each.

Define measurable success criteria such as target token reduction percentages and include a validation step to compare original vs. optimized prompt token counts.
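The validation step in the last suggestion can be sketched in a few lines. Note the caveats: a real implementation would count tokens with the target model's actual tokenizer (e.g. tiktoken for OpenAI models); the whitespace split below is a crude stand-in so the sketch stays dependency-free, and the sample prompts are invented for illustration.

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: whitespace-delimited words.
    Swap in a real tokenizer (e.g. tiktoken) for accurate counts."""
    return len(text.split())


def reduction_pct(original: str, optimized: str) -> float:
    """Percentage reduction in (approximate) token count."""
    before = approx_tokens(original)
    after = approx_tokens(optimized)
    return 100.0 * (before - after) / before if before else 0.0


# Hypothetical before/after pair illustrating the kind of comparison
# the skill should perform and report.
original = ("Please make sure that you always respond in a way that is "
            "concise and to the point, and avoid unnecessary filler.")
optimized = "Respond concisely; avoid filler."

print(f"{reduction_pct(original, optimized):.0f}% fewer tokens")  # → 81% fewer tokens
```

A success criterion could then be expressed as a simple threshold, e.g. "fail the optimization pass if the reduction is below 20% and no clarity improvement is documented".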

DimensionReasoningScore

Conciseness

Extremely verbose and padded with content Claude already knows. The 'Overview' restates the title, 'How It Works' describes obvious steps, 'When to Use' repeats the description, and sections like 'Error Handling', 'Prerequisites', 'Instructions', and 'Output' are generic boilerplate with no actionable content. Nearly every section could be eliminated or drastically shortened.

1 / 3

Actionability

No executable code, no concrete commands, no specific techniques or algorithms for prompt optimization. The examples merely describe what 'the skill will' do in vague narrative form rather than providing actual transformation rules, patterns, or executable steps Claude could follow. The 'Instructions' section is entirely generic ('Invoke this skill when trigger conditions are met').

1 / 3

Workflow Clarity

The three-step 'How It Works' is vague and lacks any concrete validation or feedback mechanism. There are no checkpoints, no criteria for evaluating whether optimization was successful (e.g., token count comparison), and no error recovery steps. The examples describe outcomes narratively rather than defining a clear, repeatable process.

1 / 3

Progressive Disclosure

Monolithic wall of text with no bundle files and no meaningful references. References to 'prompt-architect' agent and 'llm-integration-expert' are unlinked and unverifiable. Content that could be split (examples, best practices) is all inline, yet the inline content itself is shallow and unhelpful. Multiple sections ('Resources', 'Prerequisites', 'Output') contain placeholder-quality content.

1 / 3

Total

4 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total

9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

