
hive-mind-advanced

Advanced Hive Mind collective intelligence system for queen-led multi-agent coordination with consensus mechanisms and persistent memory

Install with Tessl CLI

npx tessl i github:ruvnet/agentic-flow --skill hive-mind-advanced

Does it follow best practices?

Validation for skill structure


Discovery — 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description relies heavily on abstract buzzwords and technical jargon without explaining what concrete actions the skill performs or when Claude should use it. It fails to provide natural trigger terms users would say and lacks the explicit 'Use when...' guidance essential for skill selection.

Suggestions

Add a 'Use when...' clause specifying concrete scenarios, e.g., 'Use when coordinating multiple agents, managing distributed tasks, or when the user mentions multi-agent workflows'

Replace abstract terms with specific actions, e.g., 'Coordinates multiple AI agents to solve complex tasks, manages agent communication, tracks shared state across sessions'

Include natural trigger terms users might say, such as 'multiple agents', 'parallel tasks', 'agent coordination', 'distributed work'
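The three suggestions above could be combined into a revised frontmatter description. A hypothetical sketch, assembled from the review's own suggested wording (this is illustrative, not the skill's published metadata):

```yaml
# SKILL.md frontmatter — illustrative rewrite, not the actual published description
name: hive-mind-advanced
description: >
  Coordinates multiple AI agents to solve complex tasks: spawns worker
  agents, manages agent communication, and tracks shared state across
  sessions. Use when coordinating multiple agents, managing distributed
  tasks, or when the user mentions multi-agent workflows, parallel tasks,
  or agent coordination.
```

Note how the rewrite leads with concrete actions ("coordinates", "spawns", "tracks"), ends with an explicit 'Use when...' clause, and folds in the natural trigger terms the review lists.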

Dimension scores

Specificity — 1 / 3
The description uses abstract buzzwords like 'collective intelligence system', 'queen-led multi-agent coordination', and 'consensus mechanisms' without describing any concrete actions Claude would perform. No specific tasks like 'creates', 'analyzes', or 'generates' are mentioned.

Completeness — 1 / 3
The description only vaguely hints at 'what' (multi-agent coordination) but provides no 'when' clause or explicit triggers. There is no 'Use when...' guidance for Claude to know when to select this skill.

Trigger Term Quality — 1 / 3
The terms used are highly technical jargon ('Hive Mind', 'queen-led', 'consensus mechanisms') that users would not naturally say when requesting help. No common user-facing keywords are included.

Distinctiveness / Conflict Risk — 2 / 3
The specific terminology ('Hive Mind', 'queen-led') is unusual enough to reduce conflicts with generic skills, but the lack of concrete actions means it could still overlap with other coordination or memory-related skills.

Total — 5 / 12 — Passed

Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides excellent actionable content with executable CLI commands and code examples, but is severely bloated with unnecessary explanations and inline content that should be split into reference files. The lack of validation checkpoints in workflows and the verbose explanations of basic concepts (consensus mechanisms, memory types, architecture patterns) significantly reduce its effectiveness as a quick reference.

Suggestions

Remove explanatory content about concepts Claude already knows (what LRU caches are, what Byzantine consensus means, basic architecture descriptions); keep only the specific configuration and usage details

Move API Reference, Configuration examples, and Troubleshooting sections to separate linked files (API.md, CONFIG.md, TROUBLESHOOTING.md) to reduce main skill to under 150 lines

Add explicit validation steps to workflows, e.g., after 'npx claude-flow hive-mind spawn' add 'Verify: npx claude-flow hive-mind status should show active session'

Consolidate the worker type list into a simple table or remove entirely - Claude can infer appropriate worker types from context
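The validation-checkpoint suggestion above can be sketched as a workflow step that pairs each command with an explicit check. The `init`, `spawn`, and `status` subcommands are taken from the review itself; the argument and the exact output are assumptions, not confirmed CLI behavior:

```
# Illustrative workflow with a verification step after each action.
# The spawn argument is a made-up example; output format is not confirmed.
npx claude-flow hive-mind init
npx claude-flow hive-mind spawn "refactor the auth module"

# Verify: status should report an active session before proceeding.
npx claude-flow hive-mind status
```

Integrating the existing Troubleshooting content at these verification points (rather than in a separate section) would give agents the feedback loop the review says is missing.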

Dimension scores

Conciseness — 1 / 3
Extremely verbose at 600+ lines with extensive explanations of concepts Claude would understand (what LRU caches are, what consensus means, basic architecture patterns). The overview section alone explains obvious concepts like 'queen agents orchestrate objectives' and lists every worker type with redundant descriptions.

Actionability — 3 / 3
Provides fully executable CLI commands and JavaScript code examples throughout. Commands are copy-paste ready with clear flags and options, and code snippets show complete usage patterns with realistic parameters.

Workflow Clarity — 2 / 3
Steps are listed but validation checkpoints are largely missing. The 'Getting Started' section shows init/spawn/monitor but lacks explicit verification steps. Troubleshooting section exists but isn't integrated into workflows as feedback loops for error recovery.

Progressive Disclosure — 2 / 3
References to related skills and external docs exist at the end, but the main content is a monolithic wall of text with everything inline. API reference, configuration examples, and troubleshooting could be separate files. The 'Related Skills' section is good but comes after 500+ lines of inline content.

Total — 8 / 12 — Passed

Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

Criteria and results

skill_md_line_count — Warning
SKILL.md is long (713 lines); consider splitting into references/ and linking.

frontmatter_unknown_keys — Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total — 9 / 11 — Passed
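The frontmatter warning above is typically resolved by nesting nonstandard keys under a recognized field. A hypothetical before/after (the validator does not name the offending keys, so `queen_mode` here is invented purely for illustration):

```yaml
# Before: an unrecognized top-level key triggers the validation warning
name: hive-mind-advanced
queen_mode: strategic      # hypothetical unknown key

# After: nonstandard keys moved under metadata
name: hive-mind-advanced
metadata:
  queen_mode: strategic
```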
