Skill description under review: "Agent skill for goal-planner - invoke with $agent-goal-planner"
Does it follow best practices? 6%
Impact: 87% — 1.42x average score across 3 eval scenarios. Passed; no known issues.
Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/agent-goal-planner/SKILL.md

Quality
Discovery — 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that fails on every dimension. It provides no information about what the skill does, when it should be used, or what distinguishes it from other skills. It reads more like an internal label or invocation instruction than a functional description.
Suggestions
- Add concrete actions describing what the skill does, e.g., 'Breaks down high-level goals into actionable steps, creates milestone timelines, and tracks progress toward objectives.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about setting goals, creating plans, breaking down objectives, or tracking milestones.'
- Remove the invocation instruction ('invoke with $agent-goal-planner') from the description and replace it with functional content that helps Claude decide when to select this skill.
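Taken together, these suggestions imply frontmatter along the following lines. This is an illustrative sketch only: the field names follow the common SKILL.md frontmatter layout, and the wording is drawn from the suggestions above, not from the actual file.

```yaml
---
name: agent-goal-planner
description: >
  Breaks down high-level goals into actionable steps, maps available tools
  to preconditions and effects, and generates ordered plans with milestones.
  Use when the user asks about setting goals, creating plans, breaking down
  objectives, or tracking progress toward milestones.
---
```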
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for goal-planner' is entirely vague and does not describe what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states it's an 'agent skill' and how to invoke it, providing no functional or contextual information. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant term is 'goal-planner', which is a technical/internal label rather than a natural keyword a user would say. There are no natural trigger terms like 'goals', 'planning', 'milestones', 'objectives', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic that it provides no distinguishing characteristics. 'Agent skill for goal-planner' could overlap with any planning, task management, or goal-setting skill. | 1 / 3 |
| Total | | 4 / 12 — Passed |
Implementation — 12%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads as a high-level description of GOAP concepts rather than actionable instructions for Claude. It extensively explains AI planning concepts Claude already understands, provides no executable code or concrete workflows, and the MCP examples appear to be illustrative pseudocode rather than real tool invocations. The skill would benefit from being completely rewritten to focus on specific, concrete actions Claude should take when invoked.
Suggestions
- Replace the abstract capability descriptions with concrete, step-by-step instructions Claude should follow when this agent is invoked (e.g., 'When given a goal, first list all available tools, then map each tool to preconditions and effects').
- Provide real, executable MCP tool calls with actual parameters and expected responses, not illustrative pseudocode.
- Add validation checkpoints with specific criteria (e.g., 'Before executing the plan, verify each action's preconditions are satisfiable given the current state').
- Remove the explanations of well-known concepts (A* search, OODA loop, GOAP) and instead provide a concrete template or schema for how plans should be structured and executed.
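The plan schema and validation checkpoint suggested above could be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the action names, state keys, and helper functions are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical GOAP action schema: a name plus precondition and effect facts."""
    name: str
    preconditions: dict  # facts that must hold before the action runs
    effects: dict        # facts the action makes true after it runs

def preconditions_met(action: Action, state: dict) -> bool:
    """Validation checkpoint: every precondition must hold in the current state."""
    return all(state.get(k) == v for k, v in action.preconditions.items())

def validate_plan(plan: list[Action], state: dict) -> bool:
    """Simulate the plan, checking each action's preconditions before applying its effects."""
    sim = dict(state)
    for action in plan:
        if not preconditions_met(action, sim):
            return False
        sim.update(action.effects)
    return True

# Illustrative actions: fetching data must happen before writing the report.
fetch = Action("fetch_data", {"connected": True}, {"data_loaded": True})
report = Action("write_report", {"data_loaded": True}, {"report_done": True})

print(validate_plan([fetch, report], {"connected": True}))  # True
print(validate_plan([report, fetch], {"connected": True}))  # False: preconditions out of order
```

A checkpoint like `validate_plan` gives the agent a concrete pass/fail criterion before execution, which is the kind of specific guidance the review finds missing.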
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanation of concepts Claude already knows (A* search, OODA loops, precondition analysis). The bullet-point lists of 'core capabilities' and methodology steps read like a resume rather than actionable instructions. Most of the content describes what GOAP is rather than providing specific guidance Claude can use. | 1 / 3 |
| Actionability | The skill provides no executable or concrete guidance. The 'MCP Integration Examples' are pseudocode-like snippets with made-up function signatures that aren't clearly tied to real tools. The planning methodology is entirely abstract ('Use A* pathfinding to search through possible action sequences') with no concrete implementation or steps Claude can actually follow. | 1 / 3 |
| Workflow Clarity | There is a numbered sequence (State Assessment → Action Analysis → Plan Generation → Execution Monitoring → Dynamic Replanning) which provides some structure, but it lacks validation checkpoints, concrete decision criteria, and feedback loops for error recovery. The OODA loop is mentioned but not operationalized with specific checks or conditions. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files and no clear separation between overview and detailed content. Everything is dumped into a single document with no navigation structure or pointers to supplementary materials. | 1 / 3 |
| Total | | 5 / 12 — Passed |
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure: no warnings or errors.
Revision: 398f7c2