
optimizing-prompts

tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill optimizing-prompts

Optimizes prompts for large language models (LLMs) to reduce token usage, lower costs, and improve performance. It analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.

Overall: 33%


Validation: 81%
| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md line count is 88 (<= 500) | Pass |
| frontmatter_valid | YAML frontmatter is valid | Pass |
| name_field | 'name' field is valid: 'optimizing-prompts' | Pass |
| description_field | 'description' field is valid (360 chars) | Pass |
| description_voice | 'description' uses third person voice | Pass |
| description_trigger_hint | Description includes an explicit trigger hint | Pass |
| compatibility_field | 'compatibility' field not present (optional) | Pass |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| metadata_field | 'metadata' field not present (optional) | Pass |
| license_field | 'license' field is present: MIT | Pass |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_present | SKILL.md body is present | Pass |
| body_examples | Examples detected (code fence or 'Example' wording) | Pass |
| body_output_format | Output/return/format terms detected | Pass |
| body_steps | Step-by-step structure detected (ordered list) | Pass |

Total: 13 / 16 passed

Implementation: 13%

This skill is heavily padded with generic boilerplate and explains concepts Claude already understands. It lacks any concrete, actionable techniques for prompt optimization—no specific patterns, no token counting methods, no executable examples. The content describes what the skill does rather than teaching Claude how to do it.

Suggestions

  • Replace abstract descriptions with concrete optimization techniques (e.g., specific patterns for removing redundancy, token counting code, before/after examples with actual token counts)
  • Remove generic sections like 'Prerequisites', 'Instructions', 'Error Handling', and 'Resources' that provide no skill-specific value
  • Add executable code examples for measuring token reduction (e.g., using tiktoken library) and validating optimization quality (a sketch follows this list)
  • Consolidate the redundant Overview sections and eliminate explanations of what prompt optimization is—Claude already knows this
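To ground the tiktoken suggestion above, a minimal sketch along these lines could be embedded in the skill. The before/after prompts below are invented placeholders, and cl100k_base is an OpenAI encoding, so for Claude or other models the counts are only a rough proxy rather than exact figures.

```python
# Minimal sketch: measure token reduction for a before/after prompt pair.
# Requires `pip install tiktoken`; counts are approximate for non-OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Token count for `text` under the chosen encoding."""
    return len(enc.encode(text))

original = (
    "You are a helpful assistant. Please make sure that you always, in every "
    "single case, carefully read the user's question and then provide a clear, "
    "concise, and accurate answer to the user's question."
)
optimized = "Answer the user's question clearly, concisely, and accurately."

before, after = count_tokens(original), count_tokens(optimized)
print(f"before={before} tokens, after={after} tokens, "
      f"reduction={1 - after / before:.0%}")
```

Reporting these counts next to each rewrite would also cover the 'before/after examples with actual token counts' point in the first suggestion.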
| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with redundant sections (Overview repeated twice), explains obvious concepts Claude already knows (what prompt optimization is, basic error handling patterns), and includes generic boilerplate that adds no value. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance. The 'How It Works' section describes what the skill does abstractly rather than providing specific techniques, algorithms, or copy-paste ready examples for prompt optimization. | 1 / 3 |
| Workflow Clarity | The examples show a 3-step process (analyze, rewrite, explain), but there are no validation checkpoints, no concrete metrics for measuring token reduction, and no feedback loops for iterating on optimization quality. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Content that could be split (examples, integration details, error handling) is all inline. References to 'prompt-architect' and 'llm-integration-expert' are mentioned but not linked. | 1 / 3 |

Total: 5 / 12 passed
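The Workflow Clarity note above flags missing validation checkpoints and feedback loops. One way to address it is a small acceptance check like the sketch below; `optimize_prompt` is a hypothetical stand-in for the skill's model-driven rewrite step, and the 20% reduction target is an arbitrary example threshold, not something the skill defines.

```python
# Sketch of a validation checkpoint: accept a rewrite only if it shrinks the
# prompt by a target amount without dropping required instructions.
# `optimize_prompt` is a placeholder; the 20% target is an example value.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def optimize_prompt(prompt: str) -> str:
    # Stand-in redundancy pass: drop exact repeated sentences. The real skill
    # would perform a model-driven rewrite here.
    seen, kept = set(), []
    for sentence in prompt.split("."):
        key = sentence.strip().lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(sentence.strip())
    return ". ".join(kept) + "."

def validate(original: str, rewritten: str, must_keep: list[str]) -> bool:
    reduced_enough = count_tokens(rewritten) <= 0.8 * count_tokens(original)
    nothing_lost = all(term.lower() in rewritten.lower() for term in must_keep)
    return reduced_enough and nothing_lost

prompt = "Respond only in JSON. Always cite your sources. Respond only in JSON."
rewritten = optimize_prompt(prompt)
if not validate(prompt, rewritten, must_keep=["JSON", "cite"]):
    rewritten = prompt  # fall back instead of shipping a lossy rewrite
```

A fuller version would iterate, re-running the rewrite when validation fails, which supplies the feedback loop the review found missing.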

Activation: 27%

This description attempts to explain prompt optimization for LLMs but suffers from truncation and severely misaligned trigger terms. The generic triggers ('optimize', 'performance', 'speed up') would cause frequent conflicts with code/system optimization skills, while missing natural terms users would actually use when seeking prompt optimization help.

Suggestions

  • Replace generic triggers with domain-specific terms like 'prompt optimization', 'reduce tokens', 'token usage', 'prompt engineering', 'LLM prompt', 'shorten prompt' (an illustrative rewrite follows this list)
  • Complete the truncated description to fully list all concrete actions the skill performs
  • Add distinctive qualifiers to prevent conflicts, e.g., 'Use when optimizing prompts or instructions for AI/LLM systems, NOT for code or application performance'
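As one illustration of the first and third suggestions (not a reconstruction of the skill's actual, truncated description), a candidate rewrite could look like the snippet below, with a trivial self-check that the recommended trigger phrases appear. The wording is hypothetical.

```python
# Illustrative candidate description with domain-specific triggers and an
# explicit "not for code performance" qualifier; the wording is hypothetical.
candidate = (
    "Optimizes prompts for large language models (LLMs) to reduce token usage "
    "and cost: analyzes a prompt, removes redundancy, and rewrites it to be "
    "more concise while preserving intent. Use when optimizing prompts or "
    "instructions for AI/LLM systems, not for code or application "
    "performance. Trigger with phrases like 'prompt optimization', "
    "'reduce tokens', or 'shorten prompt'."
)

suggested_triggers = ["prompt optimization", "reduce tokens", "shorten prompt"]
assert all(t in candidate.lower() for t in suggested_triggers)
print(f"{len(candidate)} characters")
```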
| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (prompt optimization for LLMs) and some actions (analyzes, identifies areas for simplification, rewrites), but the description is truncated ('conci...') and doesn't provide a complete list of concrete actions. | 2 / 3 |
| Completeness | Has a 'what' (optimizes prompts for LLMs) and includes a 'Use when' clause, but the triggers are misaligned with the actual capability. 'Performance' and 'speed up' suggest code optimization, not prompt optimization, creating confusion. | 2 / 3 |
| Trigger Term Quality | The trigger terms 'optimize', 'performance', and 'speed up' are overly generic and don't match what users would naturally say when wanting prompt optimization. Users would more likely say 'reduce tokens', 'shorten prompt', 'prompt engineering', or 'LLM prompt'. | 1 / 3 |
| Distinctiveness Conflict Risk | The trigger terms 'optimize', 'performance', and 'speed up' would heavily conflict with code optimization, database optimization, or general performance tuning skills. The generic triggers make this highly likely to be incorrectly selected. | 1 / 3 |

Total: 6 / 12 passed

