
agent-code-goal-planner

Agent skill for code-goal-planner - invoke with $agent-code-goal-planner

42

Quality: 13%
Does it follow best practices?

Impact: 89% (2.28x)
Average score across 3 eval scenarios

Security (by Snyk): Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-code-goal-planner/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a label and invocation command with no substantive content. It fails on every dimension: it does not describe what the skill does, when to use it, or provide any natural trigger terms. It would be nearly impossible for Claude to correctly select this skill from a pool of available options.

Suggestions

Add concrete actions describing what the skill does, e.g., 'Breaks down coding goals into step-by-step implementation plans, identifies dependencies, and generates task lists for complex programming projects.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to plan a coding project, break down a programming task, create an implementation roadmap, or organize development goals.'

Remove the invocation syntax ('invoke with $agent-code-goal-planner') from the description, as it wastes space that should be used for capability and trigger information.
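Taken together, these suggestions might produce frontmatter like the following sketch. The description text is drawn from the suggestions above and is illustrative, not the skill's actual metadata:

```yaml
---
name: agent-code-goal-planner
description: >
  Breaks down coding goals into step-by-step implementation plans,
  identifies dependencies, and generates task lists for complex
  programming projects. Use when the user asks to plan a coding
  project, break down a programming task, create an implementation
  roadmap, or organize development goals.
---
```

A description in this shape answers both "what does it do" and "when should it be used", and carries natural trigger terms ("plan a coding project", "implementation roadmap") instead of invocation syntax.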

Specificity: 1 / 3
The description contains no concrete actions whatsoever. 'Agent skill for code-goal-planner' is entirely abstract and does not describe what the skill actually does.

Completeness: 1 / 3
Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states the skill's name and how to invoke it, providing no functional or contextual information.

Trigger Term Quality: 1 / 3
There are no natural keywords a user would say. 'code-goal-planner' is an internal tool name, not a term users would naturally use in requests. The description focuses on invocation syntax rather than searchable terms.

Distinctiveness / Conflict Risk: 1 / 3
The term 'code-goal-planner' is vague enough to overlap with any coding, planning, or goal-setting skill. Without specific capabilities or triggers, it cannot be reliably distinguished from other skills.

Total: 4 / 12

Passed

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is excessively verbose, containing extensive conceptual frameworks (SPARC, GOAP, test pyramids, DORA metrics) that Claude already understands, dressed up with illustrative but non-executable code examples. The content reads more like a methodology whitepaper than an actionable skill file. It lacks validation checkpoints, error recovery, and any progressive disclosure structure.

Suggestions

Reduce content by 70%+ by removing explanations of concepts Claude already knows (GOAP methodology, test pyramids, DORA metrics, risk assessment categories) and keeping only the specific tool commands and integration patterns unique to this skill.

Make code examples truly executable—replace conceptual classes like SPARCGoalPlanner with actual working commands or real MCP tool invocations with correct syntax.

Add explicit validation checkpoints and error recovery steps to the workflow (e.g., 'If spec phase produces incomplete requirements, iterate before proceeding to architecture').

Split detailed YAML plan templates and metrics frameworks into separate reference files, keeping SKILL.md as a concise overview with clear pointers to those files.
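One way to apply the suggested split is sketched below. The file names, headings, and phase wording here are hypothetical placeholders, not content from the actual skill:

```markdown
# Code Goal Planner

Break a coding goal into SPARC phases, validating each phase's output
before moving to the next.

1. Specification: draft requirements. If the spec is incomplete,
   iterate here before proceeding to architecture.
2. Architecture: design components and dependencies, then verify the
   design covers every requirement from step 1.
3. Continue through the remaining phases, with an explicit check and
   recovery step after each.

Detailed YAML plan template: see references/plan-template.md
Metrics and risk frameworks: see references/metrics.md
```

Keeping SKILL.md to a concise workflow with pointers like these gives the progressive disclosure the review finds missing, while the bulk of the templates moves into bundle files loaded only when needed.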

Conciseness: 1 / 3
Extremely verbose at ~350+ lines. Explains general software engineering concepts Claude already knows (GOAP, test pyramids, DORA metrics, risk assessment categories). Massive amounts of illustrative YAML/code that are template-like rather than actionable. The SPARC methodology explanation is redundant padding.

Actionability: 2 / 3
Contains concrete code examples and CLI commands (npx claude-flow sparc ..., MCP tool calls), but most are illustrative templates rather than truly executable guidance. The TypeScript interfaces and JavaScript classes are conceptual pseudocode dressed as real code: they reference functions like executeSPARC() and aStarSearch() that aren't defined or real.

Workflow Clarity: 2 / 3
There is a general sequence (SPARC phases 1-5) and the bash example at the end shows a numbered workflow, but there are no validation checkpoints or error recovery steps. No feedback loops for when a phase fails. The workflow is more of a conceptual framework than a clear operational sequence with explicit checks.

Progressive Disclosure: 1 / 3
Monolithic wall of content with no references to external files. Everything is inlined (detailed YAML plans, multiple code examples, metrics frameworks, risk assessment, CI/CD goals) in one massive document. No bundle files exist to offload content to, and no attempt at layered organization.

Total: 6 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)

