
agent-goal-planner

Agent skill for goal-planner - invoke with $agent-goal-planner

Quality: 6%
Does it follow best practices?

Impact: 87% (1.42x)
Average score across 3 eval scenarios

Security (by Snyk): Passed
No known issues
Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-goal-planner/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that fails on every dimension. It provides no information about what the skill does, when it should be used, or what distinguishes it from other skills. It reads more like an internal label or invocation instruction than a functional description.

Suggestions

Add concrete actions describing what the skill does, e.g., 'Breaks down high-level goals into actionable steps, creates milestone timelines, and tracks progress toward objectives.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about setting goals, creating plans, breaking down objectives, or tracking milestones.'

Remove the invocation instruction ('invoke with $agent-goal-planner') from the description and replace it with functional content that helps Claude decide when to select this skill.
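Taken together, the suggestions above might yield SKILL.md frontmatter along these lines. This is a hypothetical sketch assembled from the review's own example phrasing; the exact field names and wording are illustrative, not the skill's actual metadata:

```yaml
# Hypothetical SKILL.md frontmatter incorporating the suggestions above
name: agent-goal-planner
description: >-
  Breaks down high-level goals into actionable steps, creates milestone
  timelines, and tracks progress toward objectives. Use when the user asks
  about setting goals, creating plans, breaking down objectives, or
  tracking milestones.
```

Note that the invocation instruction is gone: the description now answers "what does this do" and "when should Claude use it", with natural trigger terms a user would actually say.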

Dimension scores:

Specificity: 1/3
The description contains no concrete actions whatsoever. 'Agent skill for goal-planner' is entirely vague and does not describe what the skill actually does.

Completeness: 1/3
Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states it's an 'agent skill' and how to invoke it, providing no functional or contextual information.

Trigger Term Quality: 1/3
The only potentially relevant term is 'goal-planner', which is a technical/internal label rather than a natural keyword a user would say. There are no natural trigger terms like 'goals', 'planning', 'milestones', 'objectives', etc.

Distinctiveness / Conflict Risk: 1/3
The description is so generic that it provides no distinguishing characteristics. 'Agent skill for goal-planner' could overlap with any planning, task management, or goal-setting skill.

Total: 4 / 12

Passed

Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a conceptual description of GOAP planning rather than an actionable skill file. It spends most of its token budget explaining AI planning concepts Claude already understands, while providing almost no concrete, executable guidance for actually performing goal-oriented planning. The MCP integration examples use invalid syntax and lack sufficient context to be useful.

Suggestions

Replace the abstract capability descriptions with concrete examples: show an actual state definition, action schema with preconditions/effects, and a sample plan output that Claude should produce.

Make the MCP integration examples syntactically correct and show them in context of a real workflow (e.g., 'When the user asks to deploy, first assess state by calling X, then plan by doing Y').

Remove the explanations of well-known concepts (A* search, OODA loops, GOAP fundamentals) and replace with project-specific action inventories, state schemas, or domain constraints that Claude wouldn't otherwise know.

Add validation checkpoints to the workflow—e.g., 'Before executing a plan, verify all preconditions are met by checking X' and 'After each action, confirm expected state changes occurred before proceeding.'
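As a sketch of what the first and last suggestions could look like in practice, here is a minimal GOAP-style action schema with explicit preconditions and effects, plus a validation checkpoint that verifies preconditions against the current state before executing. All type, action, and fact names are hypothetical illustrations, not taken from the skill under review:

```typescript
// Hypothetical GOAP-style schema; names and shapes are illustrative only.
type State = Record<string, boolean>;

interface Action {
  name: string;
  preconditions: State; // facts that must hold before the action runs
  effects: State;       // facts the action asserts after it completes
}

const buildArtifact: Action = {
  name: "build-artifact",
  preconditions: { depsInstalled: true, testsPassing: true },
  effects: { artifactBuilt: true },
};

// Validation checkpoint: every precondition must match the current state.
function preconditionsMet(action: Action, state: State): boolean {
  return Object.entries(action.preconditions).every(
    ([fact, required]) => (state[fact] ?? false) === required,
  );
}

// Execute only when preconditions hold, then apply the declared effects.
function applyAction(action: Action, state: State): State {
  if (!preconditionsMet(action, state)) {
    throw new Error(`Preconditions not met for ${action.name}`);
  }
  return { ...state, ...action.effects };
}
```

Concrete artifacts like this give Claude something to imitate directly, in contrast to the abstract capability lists the review flags.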

Dimension scores:

Conciseness: 1/3
Extremely verbose with extensive explanation of concepts Claude already knows (A* search, OODA loops, precondition analysis). The bullet-point lists of 'core capabilities' and methodology steps read like a resume rather than actionable instructions. Most of the content describes what GOAP is rather than providing specific guidance Claude needs.

Actionability: 1/3
The content is almost entirely abstract description rather than concrete instruction. The JavaScript MCP examples are not executable: they use invalid syntax (bare object notation without proper function calls) and lack context for when and how to use them. There are no real examples of state definitions, action schemas, or actual planning workflows.

Workflow Clarity: 2/3
The 5-step methodology (State Assessment through Dynamic Replanning) provides a reasonable sequence, and the OODA loop adds structure. However, there are no validation checkpoints, no error recovery specifics, and no concrete examples of what each step actually produces. The steps describe concepts rather than executable procedures.

Progressive Disclosure: 1/3
The content is a monolithic wall of text with no references to external files, no clear separation between quick-start and advanced content, and no navigation structure. Everything is dumped into a single document with no indication of where to find domain-specific schemas, action libraries, or detailed examples.

Total: 5 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/claude-flow (reviewed)
