Multi-repository coordination, synchronization, and architecture management with AI swarm orchestration
Does it follow best practices?
Impact: 88% (2.83x average score across 3 eval scenarios)

Advisory: Suggest reviewing before use
Optimize this skill with Tessl
`npx tessl skill review --optimize ./.claude/skills/github-multi-repo/SKILL.md`

Quality
Discovery
22%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description relies heavily on abstract buzzwords and lacks concrete actions or explicit trigger guidance. While 'multi-repository' provides some domain specificity, the overall description is too vague to reliably distinguish this skill from others or help Claude know when to select it. It reads more like a marketing tagline than a functional skill description.
Suggestions
- Add a 'Use when...' clause with specific trigger scenarios, e.g., 'Use when the user needs to coordinate changes across multiple repositories, sync dependencies between repos, or manage cross-repo workflows.'
- Replace abstract terms like 'AI swarm orchestration' with concrete actions, e.g., 'Spawns parallel sub-agents to work across multiple git repositories, merges cross-repo changes, and resolves inter-repo dependency conflicts.'
- Include natural user keywords like 'multiple repos', 'cross-repo', 'monorepo', 'repo dependencies', 'multi-project' to improve trigger term coverage.
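Taken together, the revised frontmatter might look like the sketch below. The description text is illustrative wording, not the skill's actual content:

```yaml
---
name: github-multi-repo
description: >
  Coordinates changes across multiple GitHub repositories: spawns parallel
  sub-agents to apply edits, syncs dependencies between repos, and opens
  cross-repo pull requests. Use when the user mentions multiple repos,
  cross-repo changes, monorepos, repo dependencies, or multi-project
  workflows.
---
```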
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses abstract, buzzword-heavy language like 'coordination', 'synchronization', 'architecture management', and 'AI swarm orchestration' without listing any concrete actions. No specific operations (e.g., 'merge branches across repos', 'sync dependencies') are mentioned. | 1 / 3 |
| Completeness | The description vaguely addresses 'what' but provides no 'when' clause or explicit trigger guidance. There is no 'Use when...' or equivalent, which per the rubric should cap completeness at 2, and the 'what' itself is too vague to even reach that level. | 1 / 3 |
| Trigger Term Quality | 'Multi-repository' and 'synchronization' are somewhat relevant keywords a user might use, but 'AI swarm orchestration' is jargon unlikely to appear in natural user requests. Missing common variations like 'monorepo', 'cross-repo', 'repo sync', or 'multi-project'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Multi-repository' provides some niche specificity, but terms like 'coordination', 'architecture management', and 'synchronization' are broad enough to overlap with general project management, CI/CD, or architecture skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation
0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an extremely verbose, largely aspirational document that reads more like a product marketing page than actionable instructions. The code examples mix invented pseudo-syntax with real shell commands, making them non-executable. Critical validation and error recovery steps are missing for destructive multi-repository operations, and the entire 500+ line document is inlined rather than being organized across referenced files.
Suggestions
- Reduce the document to under 100 lines focusing on the 2-3 most common workflows, moving detailed examples and configuration references to separate files (e.g., SYNC-PATTERNS.md, ARCHITECTURE.md)
- Replace pseudo-syntax code blocks with actually executable commands, or clearly mark which APIs/CLIs are real vs. conceptual; remove invented CLI subcommands that don't exist
- Add explicit validation checkpoints to all multi-repo workflows (e.g., 'verify the PR was created successfully before proceeding', 'check npm audit results before applying fixes', 'validate all tests pass before pushing')
- Remove sections that explain concepts Claude already knows (Kafka configuration, Redis setup, synchronization consistency models, webhook patterns) and focus only on the specific tool integration steps
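The checkpoint pattern can be sketched in plain shell. The `checkpoint` helper and its messages are invented for illustration; in the real skill the wrapped commands would be things like `npm test`, `npm audit`, or `gh pr create`:

```shell
#!/bin/sh
# Illustrative sketch: run each risky step through a checkpoint that
# stops the workflow instead of blindly continuing on failure.
checkpoint() {
  desc=$1; shift
  if "$@"; then
    echo "ok: $desc"
  else
    echo "FAILED: $desc, aborting before any push" >&2
    return 1
  fi
}

# In a real multi-repo loop the commands below would be `npm test`,
# `gh pr create`, etc.; the `true` placeholders stand in so the
# sketch runs anywhere.
checkpoint "tests pass" true || exit 1
checkpoint "PR created" true || exit 1
```

The point is that every destructive step (push, PR creation, org-wide edit) gets a verified result before the workflow moves on, rather than iterating over repositories unconditionally.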
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines with massive amounts of repetitive content. Many sections show CLI commands that appear to be aspirational/non-functional (e.g., `npx claude-flow skill run github-multi-repo dashboard`). Configuration examples for Kafka, Redis, webhook servers, and synchronization patterns explain infrastructure concepts Claude already knows. The same patterns (swarm init, task spawning, memory storage) are repeated across nearly every code block. | 1 / 3 |
| Actionability | Most code examples are pseudocode-like or use invented APIs/CLIs that don't appear to be real executable commands. The JavaScript blocks mix pseudo-syntax (e.g., `Task("Repository Coordinator", ...)`, `[Parallel Multi-Repo Operations]:`) with real shell commands in ways that aren't executable. Many CLI commands like `npx claude-flow skill run github-multi-repo dashboard --port 3000` appear to be aspirational rather than functional. Configuration blocks (Kafka, Redis, webhook) are illustrative rather than actionable. | 1 / 3 |
| Workflow Clarity | Despite showing multi-step processes involving destructive operations (pushing to repos, creating PRs, modifying files across organizations), there are no validation checkpoints or error recovery steps. The bash scripts blindly iterate over repositories without checking intermediate results. The `npm test` check in the dependency update is the only validation, but there's no feedback loop for failures beyond a comment on an issue. For operations this risky (org-wide changes), this is severely inadequate. | 1 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files for detailed content. Everything is inlined: configuration schemas, architecture patterns, communication strategies, troubleshooting, monitoring, examples, resulting in an overwhelming document. The 'Related Skills' and 'Related Commands' section at the bottom hints at structure but doesn't actually organize content across files. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (875 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
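For the `frontmatter_unknown_keys` warning, the fix the validator suggests is to move unrecognized top-level keys under `metadata`. A hypothetical sketch (the key names and values below are illustrative, not the skill's actual frontmatter):

```yaml
---
name: github-multi-repo
description: Coordinates changes across multiple GitHub repositories.
metadata:
  version: 1.0.0        # previously an unknown top-level key
  author: example-org   # previously an unknown top-level key
---
```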