Agent skill for collective-intelligence-coordinator - invoke with $agent-collective-intelligence-coordinator
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.agents/skills/agent-collective-intelligence-coordinator/SKILL.md

Quality
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a label and invocation command with zero functional content. It fails on every dimension: it does not describe what the skill does, when to use it, or provide any natural trigger terms. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.
Suggestions
Add concrete actions describing what the skill does (e.g., 'Coordinates multiple agents to synthesize diverse perspectives, aggregate responses, and resolve conflicting viewpoints').
Add an explicit 'Use when...' clause with natural trigger terms (e.g., 'Use when the user asks for multi-perspective analysis, consensus building, or aggregating ideas from multiple sources').
Remove the invocation syntax from the description and replace it with functional content that helps Claude distinguish this skill from other agent or collaboration skills.
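Putting the suggestions above together, a rewritten frontmatter might look like the following. This is a hypothetical sketch — the wording is illustrative and not taken from the actual SKILL.md:

```yaml
# Hypothetical rewrite of the skill's frontmatter; field values are illustrative.
name: collective-intelligence-coordinator
description: >
  Coordinates multiple agents to synthesize diverse perspectives, aggregate
  responses, and resolve conflicting viewpoints. Use when the user asks for
  multi-perspective analysis, consensus building, or aggregating ideas from
  multiple agents or sources.
```

Note that the invocation syntax is gone entirely: every word in the description now either says what the skill does or when to trigger it.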
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for collective-intelligence-coordinator' is entirely abstract with no indication of what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states the invocation command, providing no functional or contextual information. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'collective-intelligence-coordinator', which is a technical/internal name, not a natural term a user would say. No natural language trigger terms are present. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that it's impossible to distinguish it from any other agent skill. 'Collective intelligence coordinator' could overlap with collaboration, brainstorming, multi-agent, or aggregation skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like aspirational architecture documentation than actionable guidance. It is filled with buzzwords and abstract concepts (hive mind, neural nexus, Byzantine fault tolerance) without providing concrete, executable workflows. The code examples show some useful MCP tool call patterns, but the majority of the content describes what to do conceptually rather than how to do it with specific, verifiable steps.
Suggestions
Remove all conceptual explanations and buzzwords (hive mind, neural nexus, cognitive load balancing) and replace with concrete, executable tool call sequences showing exactly how to coordinate agents step-by-step.
Add explicit validation checkpoints: after writing to shared memory, show how to read it back and verify consistency; after building consensus, show how to check the consensus threshold before proceeding.
Split the coordination patterns, integration points, and quality standards into separate reference files, keeping SKILL.md as a concise quick-start with links to detailed guides.
Replace abstract directives like 'Apply weighted voting based on expertise' and 'Detect split-brain scenarios' with concrete code examples or specific decision logic that Claude can execute.
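As an illustration of the last suggestion, an abstract directive like 'Apply weighted voting based on expertise' can be replaced with decision logic an agent can actually execute. The following is a minimal sketch, not the skill's real implementation — the agent names, weights, and the 0.66 threshold are assumptions chosen for the example:

```javascript
// Hypothetical sketch of weighted voting with an explicit consensus
// threshold. Agents, weights, and the threshold are illustrative.
function weightedConsensus(votes, threshold = 0.66) {
  // votes: [{ agent, option, weight }], where weight reflects expertise
  const totals = {};
  let totalWeight = 0;
  for (const { option, weight } of votes) {
    totals[option] = (totals[option] || 0) + weight;
    totalWeight += weight;
  }

  // Find the option with the largest weighted share of the vote
  let best = null;
  for (const [option, weight] of Object.entries(totals)) {
    const share = weight / totalWeight;
    if (!best || share > best.share) best = { option, share };
  }

  // Explicit checkpoint: only proceed when the threshold is met,
  // otherwise report that no consensus was reached
  return best.share >= threshold
    ? { decided: true, option: best.option, share: best.share }
    : { decided: false, share: best.share };
}

const votes = [
  { agent: "researcher", option: "approach-a", weight: 3 },
  { agent: "coder", option: "approach-a", weight: 2 },
  { agent: "reviewer", option: "approach-b", weight: 2 },
];
console.log(weightedConsensus(votes));
// → { decided: true, option: "approach-a", share: 0.714… }
```

Spelling the logic out this way also makes the 'check the consensus threshold before proceeding' checkpoint from the earlier suggestion testable rather than aspirational.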
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive conceptual explanations Claude already understands (Byzantine fault tolerance, mesh topologies, cognitive load balancing). Heavy use of buzzwords ('neural nexus of the hive mind', 'distributed cognitive processes') that add no actionable value. The 'EVERY 30 SECONDS' requirement and much of the content reads like aspirational architecture documentation rather than a lean skill. | 1 / 3 |
| Actionability | The JavaScript code blocks show specific MCP tool calls with concrete key/namespace patterns, which is somewhat actionable. However, much of the content is abstract direction ('Apply weighted voting based on expertise', 'Detect split-brain scenarios', 'Implement quorum-based recovery') without concrete implementation details or executable steps. | 2 / 3 |
| Workflow Clarity | No clear sequential workflow with validation checkpoints. The 'Handoff Patterns' section lists abstract flows (Receive inputs → Build consensus → Distribute decisions) without concrete steps. The 'EVERY 30 SECONDS' memory requirement has no validation or error recovery mechanism. For a coordination skill involving distributed state, the lack of explicit validation and feedback loops is a significant gap. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline despite being over 100 lines. The coordination patterns, integration points, and quality standards could be split into separate reference documents. No navigation structure or clear hierarchy for discovery. | 1 / 3 |
| Total | | 5 / 12 Passed |
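The missing validation-and-recovery loop around the 'EVERY 30 SECONDS' memory requirement could be closed with a write-then-read-back checkpoint. The sketch below uses a local `Map` as a stand-in for whatever MCP memory tool the skill actually calls — the key pattern, retry count, and function names are assumptions, not the skill's real API:

```javascript
// Hypothetical sketch: verify a shared-memory write by reading it back.
// `memory` is a local stand-in for the skill's real MCP memory store;
// the key pattern and retry count are illustrative.
const memory = new Map();

async function memoryWrite(key, value) {
  memory.set(key, JSON.stringify(value));
}

async function memoryRead(key) {
  const raw = memory.get(key);
  return raw === undefined ? null : JSON.parse(raw);
}

// Validation checkpoint: after each write, read the value back and
// confirm it round-trips; retry on mismatch, fail loudly after that.
async function checkpointedWrite(key, value, retries = 3) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    await memoryWrite(key, value);
    const echoed = await memoryRead(key);
    if (JSON.stringify(echoed) === JSON.stringify(value)) return true;
  }
  throw new Error(`memory write for ${key} failed verification`);
}

checkpointedWrite("swarm/coordinator/status", { phase: "consensus" })
  .then((ok) => console.log("verified:", ok));
```

Even with a real distributed store swapped in, the shape stays the same: every periodic heartbeat write gets an immediate read-back check and an explicit error path instead of silent drift.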
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.