Expert skill for prompt engineering and task routing/orchestration. Covers secure prompt construction, injection prevention, multi-step task orchestration, and LLM output validation for JARVIS AI assistant.
68
58%
Does it follow best practices?
Impact
Pending
No eval scenarios have been run
Advisory
Suggest reviewing before use
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/prompt-engineering/SKILL.mdQuality
Discovery
32%Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specialized domain (prompt engineering for JARVIS) and lists relevant capability areas, but suffers from abstract language and completely lacks explicit trigger guidance. The absence of a 'Use when...' clause makes it difficult for Claude to know when to select this skill over others, and the capabilities listed are more like category labels than concrete actions.
Suggestions
Add an explicit 'Use when...' clause with trigger terms like 'prompt injection', 'routing logic', 'JARVIS prompts', 'LLM security', or 'task orchestration'.
Replace abstract capability labels with concrete actions, e.g., 'Constructs secure system prompts, validates LLM outputs for safety, routes multi-step tasks between agents' instead of 'Covers secure prompt construction'.
Include natural user phrases that would trigger this skill, such as 'help me write a prompt', 'prevent prompt injection', or 'orchestrate agent tasks'.
| Dimension | Reasoning | Score |
|---|---|---|
Specificity | Names the domain (prompt engineering, task routing/orchestration) and lists some actions (secure prompt construction, injection prevention, multi-step task orchestration, LLM output validation), but these are somewhat abstract rather than concrete actionable tasks. | 2 / 3 |
Completeness | Describes what the skill covers but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'what' is also weak, warranting a 1. | 1 / 3 |
Trigger Term Quality | Includes some relevant terms like 'prompt engineering', 'injection prevention', 'orchestration', and 'JARVIS AI assistant', but missing common user variations like 'prompts', 'chain of thought', 'agent routing', or 'prompt security'. | 2 / 3 |
Distinctiveness Conflict Risk | The mention of 'JARVIS AI assistant' provides some specificity, but terms like 'prompt engineering' and 'task orchestration' are broad enough to potentially overlap with general coding or AI-related skills. | 2 / 3 |
Total | 7 / 12 Passed |
Implementation
85%Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, actionable skill with excellent code examples covering security-critical prompt engineering patterns. The workflow clarity is particularly good with TDD approach and explicit validation checkpoints. Main weakness is some verbosity in the overview and persona sections that could be trimmed to respect token budget.
Suggestions
Remove the 'You are an expert' framing and 'You excel at' list in the overview - Claude doesn't need this self-description
Condense Section 2 'Core Responsibilities' into the overview or remove entirely - the principles are demonstrated in the code patterns
| Dimension | Reasoning | Score |
|---|---|---|
Conciseness | The skill contains some unnecessary verbosity, particularly in the overview section explaining what the reader 'excels at' and listing capabilities Claude already understands. The 'You are an expert' framing and some explanatory text could be trimmed, though the code examples themselves are reasonably efficient. | 2 / 3 |
Actionability | Provides fully executable Python code patterns for secure prompt building, injection detection, task routing, output validation, and orchestration. Code is copy-paste ready with clear class structures, method signatures, and concrete implementations including regex patterns and validation logic. | 3 / 3 |
Workflow Clarity | Excellent multi-step TDD workflow with explicit steps (write failing test → implement → refactor → verify). The secure_prompt_pipeline shows clear defense-in-depth sequence. Pre-deployment checklist provides validation checkpoints. Multi-step orchestration includes step limits as safety validation. | 3 / 3 |
Progressive Disclosure | Well-structured with clear overview, then detailed patterns, with consistent references to external files (references/secure-prompt-builder.md, references/injection-patterns.md, etc.) for full implementations. References are one level deep and clearly signaled throughout. | 3 / 3 |
Total | 11 / 12 Passed |
Validation
68%Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
skill_md_line_count | SKILL.md is long (578 lines); consider splitting into references/ and linking | Warning |
description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
metadata_version | 'metadata' field is not a dictionary | Warning |
license_field | 'license' field is missing | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
Total | 11 / 16 Passed | |
1086ef2
Table of Contents
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.