
optimizing-prompts

Execute this skill optimizes prompts for large language models (llms) to reduce token usage, lower costs, and improve performance. it analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill optimizing-prompts


Discovery: 27%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description attempts to explain prompt optimization for LLMs but suffers from truncation and severely misaligned trigger terms. The generic triggers ('optimize', 'performance', 'speed up') would cause frequent conflicts with code/system optimization skills, while missing natural terms users would actually use when seeking prompt optimization help.

Suggestions

Replace generic triggers with domain-specific terms like 'prompt optimization', 'reduce tokens', 'token usage', 'prompt engineering', 'LLM prompt', 'shorten prompt'

Complete the truncated description to fully list all concrete actions the skill performs

Add distinctive qualifiers to prevent conflicts, e.g., 'Use when optimizing prompts or instructions for AI/LLM systems, NOT for code or application performance'
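Putting these suggestions together, the frontmatter description might be revised along these lines (an illustrative sketch assembled from the suggestions above, not the skill's actual fields; exact key names depend on the SKILL.md spec):

```yaml
---
name: optimizing-prompts
description: >
  Optimizes prompts and instructions for LLM systems: analyzes a prompt,
  identifies redundancy, and rewrites it to reduce token usage and cost.
  Use when optimizing prompts or instructions for AI/LLM systems, NOT for
  code or application performance. Trigger with phrases like
  'prompt optimization', 'reduce tokens', 'shorten prompt',
  'prompt engineering', or 'LLM prompt'.
---
```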

Dimension scores

Specificity (2 / 3): Names the domain (prompt optimization for LLMs) and some actions (analyzes, identifies areas for simplification, rewrites), but the description is truncated ('conci...') and doesn't provide a complete list of concrete actions.

Completeness (2 / 3): Has a 'what' (optimizes prompts for LLMs) and includes a 'Use when' clause, but the triggers are misaligned with the actual capability. 'Performance' and 'speed up' suggest code optimization, not prompt optimization, creating confusion.

Trigger Term Quality (1 / 3): The trigger terms 'optimize', 'performance', and 'speed up' are overly generic and don't match what users would naturally say when wanting prompt optimization. Users would more likely say 'reduce tokens', 'shorten prompt', 'prompt engineering', or 'LLM prompt'.

Distinctiveness / Conflict Risk (1 / 3): The trigger terms 'optimize', 'performance', and 'speed up' would heavily conflict with code optimization, database optimization, or general performance tuning skills. The generic triggers make this highly likely to be incorrectly selected.

Total: 6 / 12 (Passed)

Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate and explains concepts Claude already understands. It lacks any concrete, actionable techniques for prompt optimization—no specific patterns, no token counting methods, no executable examples. The content describes what the skill does rather than teaching Claude how to do it.

Suggestions

Replace abstract descriptions with concrete optimization techniques (e.g., specific patterns for removing redundancy, token counting code, before/after examples with actual token counts)

Remove generic sections like 'Prerequisites', 'Instructions', 'Error Handling', and 'Resources' that provide no skill-specific value

Add executable code examples for measuring token reduction (e.g., using tiktoken library) and validating optimization quality

Consolidate the redundant Overview sections and eliminate explanations of what prompt optimization is—Claude already knows this
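As a sketch of the kind of executable guidance suggested above, the snippet below measures an approximate before/after token reduction. A whitespace split is a crude stand-in for real tokenization; an actual skill would use a tokenizer such as tiktoken for exact counts, and both example prompts here are invented:

```python
def approx_tokens(text: str) -> int:
    """Approximate token count (whitespace words, not real tokens)."""
    return len(text.split())

def report_reduction(before: str, after: str) -> float:
    """Percentage reduction in approximate tokens."""
    b, a = approx_tokens(before), approx_tokens(after)
    return 100.0 * (b - a) / b if b else 0.0

before = ("Please make sure that you always, in every single case, "
          "respond to the user in a way that is helpful and useful.")
after = "Respond helpfully."

print(f"{report_reduction(before, after):.0f}% fewer tokens")  # → 91% fewer tokens
```

Swapping `approx_tokens` for a real tokenizer changes only one function, which is why the measurement is worth isolating behind it.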

Dimension scores

Conciseness (1 / 3): Extremely verbose with redundant sections (Overview repeated twice), explains obvious concepts Claude already knows (what prompt optimization is, basic error handling patterns), and includes generic boilerplate that adds no value.

Actionability (1 / 3): No concrete code, commands, or executable guidance. The 'How It Works' section describes what the skill does abstractly rather than providing specific techniques, algorithms, or copy-paste ready examples for prompt optimization.

Workflow Clarity (2 / 3): The examples show a 3-step process (analyze, rewrite, explain), but there are no validation checkpoints, no concrete metrics for measuring token reduction, and no feedback loops for iterating on optimization quality.

Progressive Disclosure (1 / 3): Monolithic wall of text with no references to external files. Content that could be split (examples, integration details, error handling) is all inline. References to 'prompt-architect' and 'llm-integration-expert' are mentioned but not linked.

Total: 5 / 12 (Passed)
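The validation checkpoints and feedback loop that the Workflow Clarity score flags as missing could take roughly this shape (all names and checks are illustrative; `optimize_once` merely stands in for the model's actual rewrite step):

```python
def optimize_once(prompt: str) -> str:
    """Stand-in for the rewrite step: here, just collapse whitespace."""
    return " ".join(prompt.split())

def validate(original: str, optimized: str, required: list[str]) -> bool:
    """Checkpoint: rewrite must be shorter AND keep every required term."""
    shorter = len(optimized) < len(original)
    intact = all(term in optimized for term in required)
    return shorter and intact

prompt = "Summarize   the  report.   Always   cite  sources."
for _ in range(3):  # bounded feedback loop: stop when no valid improvement
    candidate = optimize_once(prompt)
    if validate(prompt, candidate, required=["cite sources"]):
        prompt = candidate
    else:
        break
print(prompt)  # → Summarize the report. Always cite sources.
```

The point is the loop structure, not the toy rewrite: each iteration must pass an explicit acceptance check before replacing the working prompt.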

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 13 / 16 Passed

Validation for skill structure

Criteria results

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s)

metadata_version (Warning): 'metadata' field is not a dictionary

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 13 / 16 (Passed)
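For the three warnings above, a corrected frontmatter might look like the following (a sketch only; the tool names and metadata keys are assumptions, since the skill's actual frontmatter is not shown here):

```yaml
---
name: optimizing-prompts
allowed-tools: [Read, Write]  # keep to recognized tool names
metadata:                     # 'metadata' must be a dictionary
  version: "1.0.0"            # move unknown top-level keys under metadata
---
```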


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.