Agent skill for collective-intelligence-coordinator - invoke with $agent-collective-intelligence-coordinator
Quality: 39 (7%). Does it follow best practices?
Impact: 94% (1.74x average score across 3 eval scenarios)
Status: Passed, no known issues

Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/agent-collective-intelligence-coordinator/SKILL.md

Quality
Discovery: 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder that provides no useful information about the skill's capabilities, domain, or appropriate usage context. It only contains the invocation command, which is insufficient for Claude to make informed skill selection decisions. This is among the weakest possible descriptions.
Suggestions
- Add concrete actions describing what the skill does (e.g., 'Aggregates insights from multiple sources, synthesizes group feedback, coordinates multi-agent brainstorming sessions').
- Add an explicit 'Use when...' clause with natural trigger terms that describe scenarios where this skill should be selected (e.g., 'Use when the user asks for synthesizing multiple perspectives, coordinating group analysis, or aggregating diverse inputs').
- Remove the invocation command from the description and replace it with functional content; the description should help Claude decide *when* to use the skill, not *how* to invoke it.
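Taken together, these suggestions might produce frontmatter along the following lines. This is a sketch only: the wording is assembled from the examples above and is illustrative, not taken from the skill itself.

```markdown
---
name: collective-intelligence-coordinator
description: >
  Aggregates insights from multiple sources, synthesizes group feedback, and
  coordinates multi-agent brainstorming sessions. Use when the user asks for
  synthesizing multiple perspectives, coordinating group analysis, or
  aggregating diverse inputs into a shared decision.
---
```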
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for collective-intelligence-coordinator' is entirely abstract with no indication of what the skill actually does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only states the invocation command, providing no functional or contextual information. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'collective-intelligence-coordinator', which is technical jargon and not something a user would naturally say. There are no natural trigger terms like specific tasks or domains. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'collective-intelligence-coordinator' is vague enough that it's unclear what domain this covers, making it impossible to distinguish from other skills. It could overlap with collaboration, research, or aggregation skills. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation: 14%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like aspirational architecture documentation filled with buzzwords than a practical, actionable guide. While it includes some concrete MCP tool call examples, the majority of the content is abstract descriptions of complex distributed systems concepts without executable implementation details. The skill would benefit enormously from being stripped down to concrete, step-by-step workflows with real validation checkpoints.
Suggestions
- Replace abstract descriptions ('Apply weighted voting based on expertise', 'Detect split-brain scenarios') with concrete, executable code examples or step-by-step procedures showing exactly how to perform these operations.
- Add explicit validation checkpoints to workflows: e.g., after writing collective state, verify the write succeeded; after building consensus, validate the consensus threshold before proceeding.
- Remove conceptual explanations Claude already knows (Byzantine fault tolerance, mesh vs hierarchical topologies) and replace them with only the specific implementation choices and parameters relevant to this system.
- Extract detailed coordination patterns and knowledge integration procedures into separate referenced files, keeping SKILL.md as a concise overview with clear navigation to detailed guides.
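The first two suggestions can be sketched concretely. The snippet below is illustrative only: it uses a plain in-memory Map as a stand-in for the skill's real MCP memory tools (whose call signatures are not shown in this review), and the `weightedConsensus` helper is a hypothetical example of what 'weighted voting based on expertise' with an explicit threshold could look like.

```javascript
// Stand-in for the skill's collective memory store (an assumption, not the
// real MCP tool interface).
const store = new Map();

// Validation checkpoint: after writing collective state, read it back and
// confirm the write actually landed before the workflow proceeds.
function writeCollectiveState(key, value) {
  store.set(key, JSON.stringify(value));
  const readBack = store.get(key);
  if (readBack !== JSON.stringify(value)) {
    throw new Error(`State write verification failed for key: ${key}`);
  }
  return true;
}

// Concrete weighted voting: each agent's vote counts in proportion to its
// expertise weight, and consensus passes only above an explicit threshold.
function weightedConsensus(votes, threshold = 0.66) {
  const total = votes.reduce((sum, v) => sum + v.weight, 0);
  const inFavor = votes
    .filter((v) => v.approve)
    .reduce((sum, v) => sum + v.weight, 0);
  const score = total > 0 ? inFavor / total : 0;
  return { score, passed: score >= threshold };
}

// Usage: two expert agents approve, one generalist dissents.
writeCollectiveState("swarm/consensus/round-1", { topic: "design" });
const result = weightedConsensus([
  { agent: "researcher", weight: 3, approve: true },
  { agent: "architect", weight: 3, approve: true },
  { agent: "generalist", weight: 1, approve: false },
]);
// score = 6/7, so consensus passes at the 0.66 threshold
```

The point is not this particular helper, but that each abstract phrase in the skill ('build consensus', 'verify state') becomes a function with inputs, a threshold, and a failure path an agent can actually follow.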
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive conceptual explanations Claude already understands (Byzantine fault tolerance, mesh topologies, cognitive load balancing). Heavy use of buzzwords ('neural nexus of the hive mind', 'distributed cognitive processes') that add no actionable value. The 'EVERY 30 SECONDS' requirement and much of the content reads like aspirational architecture documentation rather than a lean skill. | 1 / 3 |
| Actionability | The JavaScript code blocks show specific MCP tool calls with concrete key/namespace patterns, which is somewhat actionable. However, much of the content is abstract direction ('Apply weighted voting based on expertise', 'Implement quorum-based recovery', 'Detect split-brain scenarios') without concrete implementation details or executable examples of how to actually do these things. | 2 / 3 |
| Workflow Clarity | Despite describing complex multi-step coordination processes, there are no clear sequential workflows with validation checkpoints. The 'Handoff Patterns' section lists abstract sequences without concrete steps. There is no error recovery workflow, no validation steps, and no feedback loops, despite the skill involving distributed state management where these are critical. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed topics. Complex subjects like consensus building, cognitive load balancing, and coordination patterns are all inlined at a shallow level rather than being properly structured with an overview plus detailed references. The document is long but lacks depth in any area. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
0f7c750