# Skill Review: recipe-task

**Description:** Execute tasks following appropriate rules with rule-advisor metacognition

- **Score:** 28
- **Does it follow best practices?** 21%
- **Impact:** — (no eval scenarios have been run)
- **Validation:** Passed; no known issues

To optimize this skill with Tessl, run `npx tessl skill review --optimize ./skills/recipe-task/SKILL.md`.

## Quality
## Discovery: 0%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
This description is extremely vague and abstract, providing no concrete information about what the skill does, when it should be used, or what domain it applies to. The phrase 'rule-advisor metacognition' is opaque jargon that would not help Claude select this skill appropriately. This description would be nearly useless in a collection of 10+ skills.
### Suggestions

- Replace abstract language with concrete actions: specify exactly what tasks this skill executes and what 'rules' it follows (e.g., 'Validates code changes against project linting rules and style guidelines').
- Add an explicit 'Use when...' clause with natural trigger terms that users would actually say in their requests.
- Define the specific domain or niche this skill covers to distinguish it from other skills (e.g., is it about code review, compliance checking, workflow automation?).
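As an illustration of these suggestions, a rewritten frontmatter description might look like the following. The wording is hypothetical: the skill name is taken from the review command's path, and the concrete behaviors are assumed from the Implementation notes in this report.

```yaml
---
name: recipe-task
description: >
  Executes a multi-step task by first invoking the rule-advisor subagent to
  select the project rules that apply, then creating and tracking subtasks
  with TaskCreate/TaskUpdate. Use when the user asks to "run this task",
  "execute with project rules", or "follow the guidelines" for a change
  that spans several steps.
---
```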
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses entirely abstract language ('execute tasks', 'appropriate rules', 'rule-advisor metacognition') with no concrete actions specified. There is no indication of what domain or specific capabilities this skill provides. | 1 / 3 |
| Completeness | Both 'what' and 'when' are extremely weak. The description does not explain what the skill concretely does, nor does it provide any explicit trigger guidance or 'Use when...' clause. | 1 / 3 |
| Trigger Term Quality | The terms used ('rule-advisor metacognition', 'appropriate rules') are technical jargon that no user would naturally say. There are no natural keywords that would help Claude match this skill to a user request. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Execute tasks following appropriate rules' is maximally generic and could apply to virtually any skill. There is nothing distinctive that would prevent conflicts with other skills. | 1 / 3 |
| **Total** | | **4 / 12** (Passed) |
## Implementation: 42%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This skill defines a meta-workflow for task execution using a rule-advisor subagent, but suffers from insufficient concreteness—it references JSON fields and tools without showing examples of inputs/outputs. The workflow sequence is reasonable but lacks validation checkpoints and error recovery steps. The absence of any supporting bundle files or references to external documentation makes it difficult for Claude to understand the rule-advisor's response schema or available rules.
### Suggestions

- Add a concrete example of the rule-advisor JSON response schema so Claude knows exactly what fields like `taskAnalysis.essence`, `selectedRules`, and `metaCognitiveGuidance` contain.
- Include explicit validation/verification steps—e.g., after Step 2, verify that selectedRules are non-empty; after Step 3, confirm the task list covers all rule-advisor guidance before proceeding.
- Create supporting reference files (e.g., RULE_ADVISOR_SCHEMA.md, AVAILABLE_RULES.md) and link to them from the skill to improve progressive disclosure.
- Provide a complete worked example showing the full flow from $ARGUMENTS input through rule-advisor invocation to TaskCreate output.
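To make the first suggestion concrete, a documented example response might look like the following. The top-level field names mirror those the skill already references; every value, and the `warningPatterns` entry, is invented for illustration only.

```json
{
  "taskAnalysis": {
    "essence": "Refactor the payment module without changing its public API"
  },
  "selectedRules": ["typescript-style-guide", "test-first-development"],
  "metaCognitiveGuidance": "Re-check the selected rules before each subtask; stop and re-consult rule-advisor if the task scope changes.",
  "warningPatterns": ["edits outside src/payment/", "skipped test runs"]
}
```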
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content has some redundancy—repeating 'rule-advisor' and 'from rule-advisor' excessively, and explaining steps that could be more tightly expressed. However, it's not egregiously verbose and mostly stays on-topic. | 2 / 3 |
| Actionability | It provides a structured process with specific tool invocations (Agent tool with subagent_type, TaskCreate, TaskUpdate) and references to JSON response fields, but lacks concrete executable examples—no actual code, no sample JSON schema for rule-advisor output, and relies on abstract field references like `taskAnalysis.essence` without showing what these look like. | 2 / 3 |
| Workflow Clarity | The four-step sequence is clearly laid out with numbered steps and sub-steps, but there are no explicit validation checkpoints or feedback loops. Step 4 mentions 'monitor warningPatterns' but provides no concrete verification or error-recovery mechanism. For a multi-step orchestration workflow, this lack of validation caps the score. | 2 / 3 |
| Progressive Disclosure | Everything is in a single monolithic file with no references to supporting documentation. There's no explanation of what rule-advisor returns (no schema reference), no link to rule definitions, no external files for the JSON response format or available rules. For a skill that depends heavily on an external subagent's output structure, this is a significant gap. | 1 / 3 |
| **Total** | | **7 / 12** (Passed) |
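The checkpoint missing from the Workflow Clarity row could be as small as a single gate between steps. A minimal sketch in Python (the skill itself uses agent tool calls rather than Python, and the field names here are assumptions about the rule-advisor schema):

```python
def check_rule_advisor_response(response: dict) -> list[str]:
    """Return a list of problems; an empty list means the response is usable.

    The field names (taskAnalysis.essence, selectedRules) mirror the skill's
    own references and are assumptions about the rule-advisor output schema.
    """
    problems = []
    if not response.get("taskAnalysis", {}).get("essence"):
        problems.append("taskAnalysis.essence is missing or empty")
    if not response.get("selectedRules"):
        problems.append("selectedRules is empty; re-invoke rule-advisor or stop")
    return problems


# A response with no selected rules fails the gate:
bad = {"taskAnalysis": {"essence": "refactor payment module"},
       "selectedRules": []}
print(check_rule_advisor_response(bad))

# A complete response passes:
good = {"taskAnalysis": {"essence": "refactor payment module"},
        "selectedRules": ["typescript-style-guide"]}
print(check_rule_advisor_response(good))  # []
```

The same pattern extends to Step 3 (confirm every item of `metaCognitiveGuidance` maps to a created task) and Step 4 (treat a matched `warningPatterns` entry as a stop condition rather than a log line).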
## Validation: 90%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

### Validation for skill structure: 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11 Passed** |