arn-spark-static-prototype-teams

This skill should be used when the user says "static prototype teams", "arn static prototype teams", "team static prototype", "debate static prototype", "collaborative visual review", "static prototype with debate", "team-based visual review", "visual debate", "review visuals as a team", or wants to create a static component showcase and validate it through iterative expert debate cycles where product strategist and UX specialist discuss their scores and findings before producing a combined review, with per-criterion scoring, an independent judge verdict, and versioned output. Supports Agent Teams for parallel debate or sequential simulation as fallback. For standard lower-of-two-scores visual review, use /arn-spark-static-prototype instead.

Overall score: 77

Quality: 73% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/arn-spark/skills/arn-spark-static-prototype-teams/SKILL.md`

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that excels across all dimensions. It provides extensive trigger phrases, clearly describes what the skill does (static prototype creation with iterative expert debate, scoring, and judge verdicts), explicitly states when to use it, and even differentiates from a related skill. The only minor weakness is that the description is quite dense and could benefit from slightly cleaner formatting for readability.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: create a static component showcase, validate through iterative expert debate cycles, per-criterion scoring, independent judge verdict, versioned output, parallel debate or sequential simulation. | 3 / 3 |
| Completeness | Clearly answers both "what" (create a static component showcase, validate through expert debate cycles with scoring and a judge verdict) and "when" (explicit trigger phrases listed at the start, plus differentiation from the simpler /arn-spark-static-prototype skill). | 3 / 3 |
| Trigger Term Quality | Provides extensive explicit trigger phrases users would say ("static prototype teams", "collaborative visual review", "visual debate", "review visuals as a team", etc.) covering many natural variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche (team-based debate review of static prototypes) and explicitly differentiates itself from the related /arn-spark-static-prototype skill, reducing conflict risk. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation: 47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill has excellent workflow clarity with well-defined phases, validation checkpoints, and error recovery paths for a genuinely complex multi-agent debate process. However, it is severely over-verbose — explaining environment variables, repeating error handling guidance across sections, and including conditional logic trees that could be compressed or extracted to reference files. The actionability suffers from being descriptive rather than providing concrete invocation syntax or executable templates.

Suggestions

- Reduce the prerequisite section by at least 60%: collapse the nested conditional checks into a compact lookup table or decision tree rather than prose paragraphs with quoted prompts (see the lookup-table sketch after this list).
- Extract the Agent Invocation Guide table and Error Handling section into separate reference files (e.g., references/agent-invocation-guide.md and references/error-handling.md) to reduce the main skill's token footprint.
- Remove explanatory text that Claude can infer (e.g., what Agent Teams is, how environment variables work, what static vs. clickable prototypes cover) and replace it with terse directives.
- Add concrete agent invocation syntax examples showing the actual tool call format rather than describing in prose what parameters to pass (see the invocation sketch after this list).
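As a sketch of how the prerequisite prose could collapse into a lookup table: the first two rows echo behaviors the skill already describes (Agent Teams with sequential-simulation fallback, environment variable checks); the third row and all actions are illustrative, not taken from the skill's actual text.

```markdown
## Prerequisites

| Check | If it passes | If it fails |
| --- | --- | --- |
| Agent Teams available? | Run the parallel debate | Fall back to sequential simulation |
| Required environment variables set? | Continue | Prompt the user once, then re-check |
| Versioned output directory resolvable? | Continue | Create it and record the version |
```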
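And a sketch of what a concrete invocation example could look like inside the skill, assuming a Task-style subagent tool; the parameter names, the agent name, and the prompt wording are hypothetical, not confirmed against the skill or its runtime:

```markdown
### Invoking the product strategist

Call the Task tool with:

- `subagent_type`: the expert reviewer agent (hypothetical name: `product-strategist`)
- `prompt`: "Review the showcase at <showcase-path> against
  static-prototype-criteria.md. Score each criterion and return your
  findings as JSON for the debate round."
```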

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~500+ lines. Extensively explains concepts Claude can infer (what Agent Teams is, how environment variables work, what PDF vs. static prototypes are). The prerequisite section alone is massive, with deeply nested conditional logic that could be compressed significantly. Many sections repeat information (e.g., error handling duplicates guidance already given in the workflow steps). | 1 / 3 |
| Actionability | The skill provides structured steps and agent invocation patterns, but contains no executable code beyond a single `echo` command and a JSON snippet. Most guidance is procedural description rather than concrete commands or templates. The actual agent invocation syntax is never shown: it describes what to pass but not how to invoke. | 2 / 3 |
| Workflow Clarity | The multi-step workflow is thoroughly sequenced, with clear phases (Phases 1-4 within Step 5c), explicit validation checkpoints (divergence check, file-existence verification after Agent Teams), error recovery loops (re-invoke a missing expert, retry the builder 3 times), and decision points with user confirmation gates. The resume detection in Step 2 handles interrupted states well. | 3 / 3 |
| Progressive Disclosure | The skill references external files (debate-protocol.md, expert-visual-review-template.md, debate-review-report-template.md, static-prototype-criteria.md, showcase-capture-guide.md), which is good progressive-disclosure structure, but no bundle files were provided to verify these exist. The SKILL.md itself is monolithic: the massive prerequisite section, error handling section, and agent invocation guide table could all live in separate reference files, keeping the main skill leaner (see the sketch below the table). | 2 / 3 |
| **Total** | | **8 / 12** |

Passed
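On progressive disclosure, a minimal sketch of how the monolithic sections could move behind links, using the reference files the skill already names plus the error-handling file suggested above (the section headings are illustrative):

```markdown
## Debate protocol

Read references/debate-protocol.md before starting Phase 1.

## Error handling

For recovery paths (missing expert, builder retries), read
references/error-handling.md.
```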

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (534 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata (see the frontmatter sketch below) | Warning |
| **Total** | | **9 / 11** |

Passed
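For the frontmatter warning, a minimal sketch of the suggested fix, assuming the spec supports a `metadata` block for non-standard keys; the custom key shown is hypothetical:

```markdown
---
name: arn-spark-static-prototype-teams
description: This skill should be used when the user says "static prototype teams"...
metadata:
  debate-rounds: 3   # hypothetical custom key, moved out of the top-level frontmatter
---
```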

Repository: AppsVortex/arness (Reviewed)

