
jbvc/prompt-engineer

Transforms user prompts into optimized prompts using frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW)

Quality: 50%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues


Quality

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on specificity and distinctiveness, clearly naming the task and listing numerous frameworks. However, it critically lacks a 'Use when...' clause, which would help Claude know when to select this skill. It also misses natural trigger terms that users would commonly use when requesting prompt improvement.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to improve, optimize, rewrite, or refine a prompt, or mentions prompt engineering.'

Include natural language trigger terms users would say, such as 'improve my prompt', 'make this prompt better', 'prompt engineering', 'rewrite this prompt', or 'prompt optimization'. Both suggestions are combined in the sketch below.
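As an illustration only, the two suggestions above could be folded into the SKILL.md frontmatter description along these lines (the exact wording is a sketch, not the author's text):

```markdown
---
name: prompt-engineer
description: >
  Transforms user prompts into optimized prompts using frameworks (RTF, RISEN,
  Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR,
  GROW). Use when the user asks to improve, optimize, rewrite, or refine a
  prompt, or mentions prompt engineering, prompt optimization, or making a
  prompt better.
---
```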

Dimension scores

Specificity: 3 / 3
The description lists a concrete action ('Transforms user prompts into optimized prompts') and enumerates multiple specific frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW), providing clear detail about what the skill does.

Completeness: 1 / 3
The description answers 'what does this do' (transforms prompts using frameworks) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1.

Trigger Term Quality: 2 / 3
It includes some relevant keywords like 'prompts', 'optimized prompts', and specific framework names that power users might reference. However, it misses common natural language terms users would say like 'improve my prompt', 'rewrite prompt', 'prompt engineering', 'better prompt', or 'prompt optimization'.

Distinctiveness / Conflict Risk: 3 / 3
The skill occupies a clear niche (prompt optimization using specific named frameworks), which is highly distinctive and unlikely to conflict with other skills. The enumeration of specific framework acronyms further narrows its scope.

Total: 9 / 12 (Passed)

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is excessively verbose, explaining many concepts Claude already understands (prompting frameworks, task classification, detection patterns) while suffering from structural issues like missing Step 2 in the workflow and inconsistent section numbering. The framework mapping table and concrete examples provide some value, but the content could be reduced by 60-70% without losing actionable information. The monolithic structure with no external references makes it a poor fit for the SKILL.md format.

Suggestions

Reduce content by at least 60%: remove explanations of what frameworks are, eliminate detection patterns Claude already knows, and condense the NEVER/ALWAYS lists to only non-obvious rules.

Fix the broken workflow structure: add the missing Step 2 (clarification), properly number Steps 4-4.6, and ensure the sequence flows logically without gaps.

Split the framework mapping table and examples into a separate reference file (e.g., FRAMEWORKS.md) and link to it from the main skill, keeping only a concise decision heuristic inline (sketched below, after these suggestions).

Clean up the malformed blended framework example in Step 3, which currently mixes inline bracket comments with prompt text in a confusing way.
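A minimal sketch of what the trimmed SKILL.md might keep inline, assuming the reference file is named FRAMEWORKS.md as in the suggestion above; the heuristic wording and mappings shown are placeholders, not the skill's actual table:

```markdown
## Selecting a framework

Pick the simplest framework that fits the request:
- Simple, single-step task: RTF
- Multi-step reasoning or analysis: Chain of Thought
- Goal-oriented or coaching prompts: GROW

For the full framework mapping table and worked examples, see [FRAMEWORKS.md](FRAMEWORKS.md).
```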

Dimension scores

Conciseness: 1 / 3
Extremely verbose at ~200+ lines. Explains concepts Claude already knows (what prompting frameworks are, what task types exist, detection patterns for simple vs complex tasks). The framework mapping table, blending strategy explanations, and extensive NEVER/ALWAYS lists are padded with obvious guidance. Much of this could be condensed to a fraction of its size.

Actionability: 2 / 3
The framework mapping table and examples provide some concrete guidance, and the example outputs show specific prompt structures. However, the workflow skips from Step 1 to Step 3 (Step 2 is missing), the blended framework example in Step 3 is incomplete and malformed (mixing inline comments with prompt text), and much of the guidance is descriptive rather than executable.

Workflow Clarity: 2 / 3
Steps are listed (Analyze → Select → Generate) with a quality checklist at the end, but Step 2 is missing from the workflow, step numbering jumps to 4.5 and 4.6 without a clear Step 4 header, and the overall sequence is disjointed. The quality checklist is a good validation step, but the workflow structure itself is confusing.

Progressive Disclosure: 1 / 3
Monolithic wall of text with everything inline: the framework mapping table, all examples, all rules, and all notes are in a single file with no references to external documents. The lengthy examples (Examples 2 and 4) and the full framework table could easily be split into separate reference files. Content is poorly organized with inconsistent heading levels.

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure:

frontmatter_unknown_keys (Warning)
Unknown frontmatter key(s) found; consider removing or moving to metadata

Total: 10 / 11 (Passed)
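As a hypothetical illustration of the kind of fix the frontmatter_unknown_keys warning above points at (the report does not name the offending key; the `category` key and the `metadata` block are assumptions based on the validator's message, not the skill's actual frontmatter):

```markdown
---
name: prompt-engineer
description: Transforms user prompts into optimized prompts using frameworks ...
metadata:
  category: prompt-engineering  # previously an unrecognized top-level key
---
```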

Reviewed
