```
tessl i github:giuseppe-trisciuoglio/developer-kit --skill prompt-engineering
```

This skill should be used when creating, optimizing, or implementing advanced prompt patterns including few-shot learning, chain-of-thought reasoning, prompt optimization workflows, template systems, and system prompt design. It provides comprehensive frameworks for building production-ready prompts with measurable performance improvements.
Validation
75%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 12 / 16 Passed |
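For reference, a frontmatter sketch that would clear these four warnings might look like the one below. The field names follow common skill conventions; the exact schema the validator expects is an assumption here, and the license value is a placeholder:

```yaml
---
name: prompt-engineering
description: >
  Use when designing system prompts, structuring few-shot examples, or adding
  chain-of-thought reasoning steps to a prompt.
license: MIT       # placeholder; use the project's actual license
metadata:          # a dictionary, as the validator expects
  version: 1.0.0
---
```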
Implementation
35%

This skill is overly verbose and explains many concepts Claude already understands (prompt engineering fundamentals, what few-shot learning is, etc.). While it provides structural frameworks, the examples are placeholder-based rather than executable, and the workflows lack concrete validation checkpoints. The document would benefit significantly from being condensed to essential, novel information with real working examples.
Suggestions
- Remove explanations of basic concepts Claude already knows (what few-shot learning is, what CoT means, etc.) and focus only on project-specific patterns or novel techniques
- Replace placeholder templates like `{representative_input}` with actual working prompt examples that demonstrate the patterns in action
- Add explicit validation checkpoints to workflows, such as 'Run test suite: `python test_prompts.py` - proceed only if >90% pass rate' (see the sketch after this list)
- Move detailed content (template structures, optimization frameworks) to reference files and keep SKILL.md as a concise overview under 100 lines
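To illustrate the second and third suggestions together, here is a minimal sketch of a concrete few-shot prompt plus a pass-rate checkpoint. The `test_prompts.py` name, the example cases, and the 90% threshold are illustrative assumptions, not part of the skill itself, and the model call is stubbed:

```python
"""test_prompts.py -- illustrative validation gate for prompt templates."""

# A few-shot prompt with real content instead of {representative_input} placeholders.
FEW_SHOT_PROMPT = """Classify the sentiment of each review as positive or negative.

Review: The battery lasts two full days and the screen is gorgeous.
Sentiment: positive

Review: Stopped working after a week and support never replied.
Sentiment: negative

Review: {review}
Sentiment:"""

# Hypothetical test cases; a real suite would call the model and compare outputs.
TEST_CASES = [
    ("Arrived broken and the refund took a month.", "negative"),
    ("Best purchase I've made all year.", "positive"),
]

def run_case(review: str, expected: str) -> bool:
    prompt = FEW_SHOT_PROMPT.format(review=review)
    # Placeholder for a model call, e.g. response = client.complete(prompt)
    response = expected  # stubbed so the sketch runs standalone
    return response.strip().lower() == expected

if __name__ == "__main__":
    passed = sum(run_case(r, e) for r, e in TEST_CASES)
    rate = passed / len(TEST_CASES)
    print(f"{passed}/{len(TEST_CASES)} passed ({rate:.0%})")
    # Explicit checkpoint: proceed only if the pass rate clears the threshold.
    if rate < 0.9:
        raise SystemExit("Pass rate below 90% -- iterate on the prompt before proceeding.")
```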
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanations of concepts Claude already knows (what few-shot learning is, what chain-of-thought means, basic prompt engineering concepts). The document is padded with unnecessary context like 'This skill provides comprehensive frameworks' and explains obvious things like 'Elicit step-by-step reasoning for complex problem-solving.' | 1 / 3 |
| Actionability | Provides template structures and frameworks but they are largely pseudocode/placeholders (e.g., `{representative_input}`, `{expected_output}`) rather than executable examples. The 'Usage Examples' section describes what to do rather than showing actual working prompts with real content. | 2 / 3 |
| Workflow Clarity | Workflows are listed with numbered steps but lack explicit validation checkpoints and feedback loops. For example, 'Validate and Test' is mentioned but no concrete validation commands or criteria for when to proceed vs. iterate are provided. Missing specific verification steps for prompt optimization. | 2 / 3 |
| Progressive Disclosure | References external files appropriately (references/few-shot-patterns.md, etc.) but the main document is a monolithic wall of text with excessive inline content that should be in reference files. The core document tries to cover everything rather than being a concise overview pointing to detailed materials. | 2 / 3 |
| Total | | 7 / 12 |
Activation
67%

The description adequately covers when to use the skill and names relevant prompt engineering techniques, but relies on abstract language ('comprehensive frameworks', 'production-ready') rather than concrete actions. It would benefit from more specific action verbs and additional natural trigger terms users might actually say.
Suggestions
- Replace abstract phrases like 'comprehensive frameworks' and 'measurable performance improvements' with concrete actions (e.g., 'design system prompts, structure few-shot examples, add reasoning steps'), as in the sketch after this list
- Add more natural trigger terms users would say, such as 'write a prompt', 'improve my prompt', 'prompting techniques', 'LLM instructions', or 'prompt template'
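As a concrete illustration, a rewritten `description` field could fold both suggestions in. The wording below is an assumption for demonstration, not the maintainer's actual text:

```yaml
description: >
  Use when asked to write a prompt, improve my prompt, or design LLM
  instructions: covers structuring few-shot examples, adding chain-of-thought
  reasoning steps, building prompt templates, and designing system prompts.
```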
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (prompt engineering) and lists several techniques (few-shot learning, chain-of-thought reasoning, template systems), but uses abstract language like 'comprehensive frameworks' and 'measurable performance improvements' without concrete actions. | 2 / 3 |
| Completeness | Explicitly answers both what ('creating, optimizing, or implementing advanced prompt patterns') and when ('should be used when creating, optimizing, or implementing...') with clear trigger guidance at the start. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'prompt', 'few-shot learning', 'chain-of-thought', 'system prompt', but misses common user variations like 'write a prompt', 'improve my prompt', 'prompting', or 'LLM instructions'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Reasonably specific to the prompt engineering domain, but could overlap with general coding/writing skills. Terms like 'template systems' and 'optimization workflows' are somewhat generic and could trigger conflicts. | 2 / 3 |
| Total | | 9 / 12 |