Applies prompt repetition to improve accuracy for non-reasoning LLMs
Install with Tessl CLI
npx tessl i github:asklokesh/loki-mode --skill prompt-optimization49
Quality: 37% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Optimize this skill with Tessl
npx tessl skill review --optimize ./agent-skills/prompt-optimization/SKILL.md

Quality
Discovery
17%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too terse and technical, lacking both natural trigger terms users would say and explicit guidance on when to use the skill. While it identifies a specific technique (prompt repetition), it fails to explain concrete actions or provide selection criteria for Claude.
Suggestions
- Add a 'Use when...' clause specifying triggers like 'when user asks about improving LLM accuracy', 'prompt engineering for simpler models', or 'repeating instructions'
- Include natural language terms users might say, such as 'repeat instructions', 'improve model output', 'prompt engineering', or specific model names
- Expand the capability description with concrete actions like 'Repeats key instructions in prompts, structures prompts for consistency, optimizes prompt format for non-CoT models'
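Taken together, these suggestions point toward a description with an explicit trigger clause. A hypothetical revision of the skill's frontmatter (field names follow the common SKILL.md convention; the wording is illustrative, not the skill's actual text) might look like:

```yaml
# Hypothetical SKILL.md frontmatter revision
name: prompt-optimization
description: >
  Repeats key instructions in prompts to improve accuracy for
  non-reasoning LLMs. Use when the user asks about improving LLM
  accuracy, prompt engineering for simpler models, or repeating
  instructions in a prompt.
```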
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (prompt repetition for LLMs) and one action (improve accuracy), but lacks concrete details about what specific actions are performed or how the technique is applied. | 2 / 3 |
| Completeness | Only partially addresses 'what' (applies prompt repetition) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('prompt repetition', 'non-reasoning LLMs') that users are unlikely to naturally say. Missing common variations like 'repeat prompt', 'accuracy improvement', or model-specific terms users might mention. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'prompt repetition' and 'non-reasoning LLMs' provides some specificity, but could overlap with other prompt engineering or LLM optimization skills without clearer boundaries. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation
57%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill describes an automatic system behavior rather than providing actionable instructions for Claude. While well-structured with good progressive disclosure, it lacks concrete steps Claude should take and includes unverifiable performance claims. The skill would benefit from clarifying whether Claude needs to do anything or if this is purely informational about system behavior.
Suggestions
- Clarify what action Claude should take; if this is automatic, consider whether it belongs as a skill or as system documentation
- Add a verification step so Claude can confirm prompt repetition is active (e.g., check logs or a specific indicator)
- Remove or move the performance metrics table to the referenced documentation, as specific percentages add tokens without actionable value
- If Claude should manually apply repetition in some cases, provide explicit instructions for when and how to do so
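If the skill does end up giving explicit manual instructions, the core technique is simple enough to sketch. The following is a minimal illustration of prompt repetition, assuming the convention of restating the key instruction after the task content; the function name and prompt text are hypothetical, not taken from the skill:

```python
def repeat_instruction(instruction: str, content: str) -> str:
    """Build a prompt that states the key instruction before and
    after the task content, so a non-reasoning model sees it twice."""
    return (
        f"{instruction}\n\n"
        f"{content}\n\n"
        f"Reminder: {instruction}"
    )

# Example: the instruction appears at the top and again at the bottom.
prompt = repeat_instruction(
    "Answer with a single integer only.",
    "What is 12 * 12?",
)
print(prompt)
```

A verification step could then be as simple as asserting that the instruction occurs twice in the assembled prompt before it is sent.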
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but includes some unnecessary elements, like a performance table with specific percentages that may not be verifiable, and a 'How It Works' section explaining concepts Claude likely understands. The metrics section adds little actionable value. | 2 / 3 |
| Actionability | Provides concrete configuration commands and environment variables, but the core functionality is described as 'automatic' with 'no action needed', making it unclear what Claude should actually do. The skill describes a system behavior rather than providing executable guidance for Claude to follow. | 2 / 3 |
| Workflow Clarity | The 'When to Activate' section lists triggers clearly, but there is no validation or verification step to confirm the optimization is working. For a skill that claims a 4-5x accuracy improvement, there should be a way to verify it is active or measure its effect. | 2 / 3 |
| Progressive Disclosure | Good structure with clear sections, a single reference to external documentation (references/prompt-repetition.md), and appropriate use of headers. Content is well-organized and not monolithic. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |