Launch and navigate the ccboard TUI/Web dashboard for Claude Code. Use when monitoring token usage, tracking costs, browsing sessions, or checking MCP server status across projects.
Quality: 63%

Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Risk: Risky. Do not use without reviewing.

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./examples/skills/ccboard/SKILL.md
```
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly identifies the tool (ccboard), its purpose (a dashboard for Claude Code), and explicit trigger conditions covering monitoring, cost tracking, session browsing, and MCP server status. It uses third-person voice, is concise, and is strongly distinctive thanks to its specific tool and domain references.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Launch and navigate the ccboard TUI/Web dashboard', 'monitoring token usage', 'tracking costs', 'browsing sessions', 'checking MCP server status across projects'. | 3 / 3 |
| Completeness | Clearly answers both what ('Launch and navigate the ccboard TUI/Web dashboard for Claude Code') and when ('Use when monitoring token usage, tracking costs, browsing sessions, or checking MCP server status across projects') with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'token usage', 'tracking costs', 'sessions', 'MCP server status', 'dashboard', 'ccboard', 'TUI/Web'. These cover the natural ways a user would refer to monitoring and cost tracking. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — references a specific tool ('ccboard'), a specific context ('Claude Code'), and specific use cases (token usage, costs, MCP server status) that are unlikely to overlap with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a project README copy-pasted into a SKILL.md file. It contains extensive information Claude doesn't need (architecture, performance benchmarks, contributing guidelines, credits, license) while burying the actionable content. The core value — knowing which commands to run and which keyboard shortcuts to use — is present but diluted by excessive documentation.
Suggestions:

- Remove non-actionable sections entirely: Architecture, Performance, Contributing, Credits, License, and Future Roadmap. These waste tokens and don't help Claude use the tool.
- Condense the 8-tab feature descriptions into a compact reference table (tab name, key, primary use case) rather than detailed bullet lists for each tab.
- Add decision logic ('when the user asks about costs → /costs; when the user asks about MCP issues → /mcp-status') to make the skill more actionable for Claude.
- Move detailed keyboard shortcuts and troubleshooting into a separate reference file and link to it from a concise overview.
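Applied together, these suggestions might reduce the skill to something like the following sketch. This is illustrative only: the tab names, key bindings, and the reference-file path are assumptions, not taken from ccboard's documentation; only the /dashboard, /costs, and /mcp-status commands appear in the review itself.

```markdown
## Commands

| User request | Run |
|---|---|
| General monitoring / token usage | /dashboard |
| Cost questions | /costs |
| MCP server issues | /mcp-status |

## Tabs (illustrative names and keys)

| Tab | Key | Primary use case |
|---|---|---|
| Sessions | s | Browse and inspect past sessions |
| Costs | c | Track token spend per project |
| MCP | m | Check MCP server status |

Detailed keyboard shortcuts and troubleshooting: see reference/shortcuts.md.
```

A skeleton like this keeps the actionable content (commands, decision logic, tab reference) in a few dozen tokens and pushes everything else into a linked reference file, which addresses the Conciseness and Progressive Disclosure findings at once.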
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 300+ lines. Includes architecture details, performance stats, contributing guidelines, credits, license info, and a future roadmap — none of which help Claude use the tool. Much of this is README content, not skill content; Claude doesn't need to know binary size, memory usage, or which Rust crates were used. | 1 / 3 |
| Actionability | Provides concrete commands (/dashboard, /mcp-status, /costs) and keyboard shortcuts, which is useful. However, the usage examples are mostly comments describing what to do rather than showing expected outputs or decision logic. The skill tells Claude what tabs exist but doesn't clearly guide which command to use for a given user request. | 2 / 3 |
| Workflow Clarity | The usage examples section provides some sequenced workflows (daily monitoring, MCP troubleshooting, session analysis), but they lack validation checkpoints. For example, the troubleshooting section lists steps but doesn't create feedback loops — there's no 'if this fails, try that' structure beyond basic checks. | 2 / 3 |
| Progressive Disclosure | Everything is crammed into a single monolithic file with no references to external documents. The detailed tab descriptions, architecture, performance, contributing, credits, and license sections should either be removed or split into separate reference files. The content reads like a full project README rather than a focused skill document. | 1 / 3 |
| Total | | 6 / 12 Passed |
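A feedback-loop structure of the kind the Workflow Clarity dimension asks for could look like the sketch below. The specific checks are illustrative assumptions, not ccboard's documented behavior; only /mcp-status and the existence of a sessions view come from the review.

```markdown
## Troubleshooting an MCP server

1. Run /mcp-status and note which servers report errors.
2. If a server is down, verify its entry in the project's MCP configuration, then rerun /mcp-status.
3. If it still fails, open the sessions view and inspect recent logs for connection errors before retrying.
```

Each step ends with a check whose outcome selects the next step, which is exactly the 'if this fails, try that' structure the review found missing.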
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
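The frontmatter_unknown_keys warning can usually be cleared by deleting the offending keys or nesting them under metadata, as the check itself suggests. A hedged sketch, where author and version stand in for whatever unknown keys the validator actually flagged:

```markdown
---
name: ccboard
description: Launch and navigate the ccboard TUI/Web dashboard for Claude Code. Use when monitoring token usage, tracking costs, browsing sessions, or checking MCP server status across projects.
metadata:
  author: example-user   # was a top-level unknown key
  version: 1.0.0         # was a top-level unknown key
---
```

Rerunning the review after the change should confirm whether the warning clears and the validation score reaches 11 / 11.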