Multi-person projects - shared state, todo claiming, handoffs
24%
Does it follow best practices?

Impact: — No eval scenarios have been run
Advisory: Suggest reviewing before use
Optimize this skill with Tessl
`npx tessl skill review --optimize ./skills/team-coordination/SKILL.md`

Quality
Discovery
22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too terse and abstract to effectively guide skill selection. It names a domain (multi-person projects) and lists a few concepts but fails to describe concrete actions or provide any trigger guidance for when Claude should select this skill. The lack of a 'Use when...' clause and the vague terminology significantly weaken its utility.
Suggestions
Add a 'Use when...' clause specifying explicit triggers, e.g., 'Use when multiple people are collaborating on a project, when a user needs to claim or assign tasks, or when coordinating handoffs between team members.'
Replace abstract nouns with concrete action verbs describing what the skill does, e.g., 'Manages shared project state across multiple contributors, enables claiming and assigning todo items, and facilitates structured handoffs between team members.'
Include natural language trigger terms users might say, such as 'team collaboration', 'assign task', 'delegate', 'project coordination', 'shared workspace', or 'multi-user workflow'.
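Taken together, the suggestions above might produce frontmatter along these lines (a sketch only; the `name` value is assumed, and the skill's actual frontmatter is not reproduced in this review):

```yaml
---
name: team-coordination
description: >-
  Manages shared project state across multiple contributors, enables claiming
  and assigning todo items, and facilitates structured handoffs between team
  members. Use when multiple people are collaborating on a project, when a
  user needs to claim or assign tasks, or when coordinating handoffs between
  team members. Trigger terms: team collaboration, assign task, delegate,
  project coordination, shared workspace, multi-user workflow.
---
```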
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language. 'Shared state', 'todo claiming', and 'handoffs' hint at concepts but don't describe concrete actions (e.g., 'assign tasks', 'track project status', 'transfer ownership of work items'). | 1 / 3 |
| Completeness | The description loosely addresses 'what' (shared state, todo claiming, handoffs) but provides no 'when' guidance at all. There is no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric should cap completeness at 2, but the 'what' is also very weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Terms like 'multi-person projects', 'handoffs', and 'todo claiming' are somewhat relevant but miss common natural language variations users might say, such as 'team collaboration', 'assign task', 'delegate work', 'shared workspace', or 'project coordination'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'multi-person projects' framing provides some niche specificity, but terms like 'shared state' and 'handoffs' are broad enough to potentially overlap with general project management, collaboration, or task tracking skills. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation
27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive conceptual framework for team coordination but suffers from extreme verbosity—most of the content is example templates and markdown formatting that Claude could generate from brief instructions. The lack of any validation mechanisms or tooling (just manual markdown file editing) limits its practical reliability for the coordination problem it aims to solve. The monolithic structure makes it expensive in context window tokens without proportional value.
Suggestions
Reduce to ~80-100 lines: keep the directory structure, the start/end session checklists, claim format, and handoff format as brief specs. Remove full example tables with fake data—Claude can generate these from a schema description.
Extract templates (handoff, standup, contributor, state) into separate bundle files referenced from the main skill, improving progressive disclosure and reducing the main file's token cost.
Add validation steps: a script or command to check for conflicting file claims in state.md, or at minimum explicit 'verify no conflicts' checkpoints with concrete grep/diff commands rather than just 'check state.md'.
Remove ASCII art boxes and decorative formatting—they consume significant tokens without adding actionable information.
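To illustrate the validation suggestion above: assuming each claim line in state.md ends with the claimed file path after a `->` separator (the skill's actual claim format is not reproduced in this review), a duplicate-claim check can be a one-liner:

```shell
#!/bin/sh
# Hypothetical conflict check: assumes claims in state.md look like
#   alice -> src/app.py
# Prints any file path that appears in more than one claim line.
grep -oE -e '-> .+$' state.md | sort | uniq -d
```

A checkpoint like this turns "check state.md" into a verifiable step: empty output means no conflicting claims.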
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~350+ lines. Massive ASCII art boxes, full example tables with fake data, contributor timezone diagrams, and extensive templates that Claude could generate on demand. Much of this is structural boilerplate (markdown table formats, status icons) that doesn't need to be spelled out in such detail. The core coordination protocol could be conveyed in under 100 lines. | 1 / 3 |
| Actionability | Provides concrete file structures, markdown templates, and a bash git hook, which are useful. However, most 'actions' are editing markdown files manually rather than executable commands. The git hook has a bug (md5 vs md5sum), and there are no actual scripts or tools provided—just conventions for manually maintaining state files. | 2 / 3 |
| Workflow Clarity | Clear checklists for starting/ending sessions and claiming todos, with a reasonable sequence. However, there are no validation checkpoints—no way to verify state.md is consistent, no conflict detection beyond manual reading, and no feedback loops for error recovery when conflicts do occur. For a coordination skill where race conditions on shared files are a real risk, this is a significant gap. | 2 / 3 |
| Progressive Disclosure | Everything is in one monolithic file with no references to external files or bundle resources. The handoff template, contributor template, decision format, standup template, git hooks, and quick reference could all be separate files referenced from a concise overview. The content is a wall of templates and examples that buries the actual workflow. | 1 / 3 |
| Total | | 6 / 12 Passed |
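The md5-vs-md5sum bug flagged under Actionability is a portability issue: Linux ships `md5sum` while macOS/BSD ship `md5`. The original hook is not reproduced in this review, but a generic fix is to wrap the two in a helper and call that from the hook:

```shell
#!/bin/sh
# Portable MD5 helper: prefer md5sum (Linux/coreutils), fall back to
# md5 -q (macOS/BSD, prints only the digest).
checksum() {
  if command -v md5sum >/dev/null 2>&1; then
    md5sum "$1" | awk '{print $1}'
  else
    md5 -q "$1"
  fi
}
```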
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |