Launch and navigate the ccboard TUI/Web dashboard for Claude Code. Use when monitoring token usage, tracking costs, browsing sessions, or checking MCP server status across projects.
Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Risky: do not use without reviewing.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./examples/skills/ccboard/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly identifies the specific tool (ccboard), lists concrete capabilities, and provides explicit trigger conditions via a 'Use when...' clause. The natural keywords cover the main use cases comprehensively, and the description is concise without unnecessary padding.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Launch and navigate the ccboard TUI/Web dashboard', 'monitoring token usage', 'tracking costs', 'browsing sessions', 'checking MCP server status across projects'. | 3 / 3 |
| Completeness | Clearly answers both what ('Launch and navigate the ccboard TUI/Web dashboard for Claude Code') and when ('Use when monitoring token usage, tracking costs, browsing sessions, or checking MCP server status across projects'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'token usage', 'tracking costs', 'sessions', 'MCP server status', 'dashboard', 'ccboard', 'TUI/Web'. These cover the natural terms a user would use when wanting to monitor Claude Code activity. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — 'ccboard' is a specific tool name, and the combination of TUI/Web dashboard, token usage monitoring, cost tracking, and MCP server status creates a very clear niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a project README rather than a focused skill file for Claude. It is extremely verbose, including architecture details, performance benchmarks, contributing guidelines, credits, and license information that provide zero value for Claude's task execution. The actionable content (commands, keyboard shortcuts, troubleshooting) is buried under excessive feature documentation that could be drastically condensed or split into reference files.
Suggestions

- Cut the content by 60-70%: remove Architecture, Performance, Contributing, Credits, License, and Future Roadmap sections entirely — Claude doesn't need any of this to use the tool.
- Restructure as a concise overview (commands table + key shortcuts) with references to separate files like TABS.md for detailed tab descriptions and TROUBLESHOOTING.md for debugging.
- Condense the 8-tab feature descriptions into a brief summary table rather than exhaustive per-tab bullet lists — Claude can discover features interactively.
- Add explicit workflow sequences with validation steps, e.g., 'If MCP server shows ○ Stopped: 1. Press e to check config → 2. Verify command path exists → 3. Press r to refresh → 4. If still stopped, check logs with...'
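As a minimal sketch of the validation checkpoint the last suggestion calls for: before pressing `r` to refresh a stopped MCP server, verify that the server's configured command actually exists. `MCP_CMD` is a hypothetical placeholder for the command string read from the server's config entry; it is not part of ccboard.

```shell
# Hypothetical pre-refresh check for a stopped MCP server.
# MCP_CMD is an assumed placeholder for the command taken from the
# server's config entry (e.g. "node" or "npx" in a real setup);
# it defaults to "sh" here only so the sketch is runnable anywhere.
cmd="${MCP_CMD:-sh}"
if command -v "$cmd" >/dev/null 2>&1; then
  found=yes
  echo "command found: $cmd (safe to press r to refresh)"
else
  found=no
  echo "command missing: $cmd (fix the config entry before refreshing)"
fi
```

A skill file that pairs each symptom (○ Stopped) with a check like this gives the agent a recovery loop instead of a feature description.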
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~350+ lines. Includes architecture details, performance stats, contributing guidelines, credits, license info, and future roadmap — none of which help Claude use the tool. Much of this is README content, not skill content. Claude doesn't need to know binary size, memory usage, or which Rust crates were used. | 1 / 3 |
| Actionability | Provides concrete commands (/dashboard, /mcp-status, /costs) and keyboard shortcuts, which is useful. However, the usage examples are mostly comments inside bash blocks rather than truly executable workflows, and much of the content describes features rather than instructing Claude on what to do in specific scenarios. | 2 / 3 |
| Workflow Clarity | The usage examples section provides some sequenced steps (e.g., launch dashboard → press keys → check status), but they lack validation checkpoints or error recovery loops. The troubleshooting section helps but is separate from the workflows. For a monitoring tool, there's no clear guidance on what to do when anomalies are detected. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. Everything — from installation to architecture to contributing guidelines to credits — is crammed into a single document. Content like the full tab-by-tab feature descriptions, architecture details, and contributing info should be in separate files or omitted entirely. | 1 / 3 |
| Total | | 6 / 12 Failed |
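As an illustration of the progressive-disclosure restructuring the review recommends: the commands (/dashboard, /mcp-status, /costs) and the file names TABS.md and TROUBLESHOOTING.md come from the review itself, but the layout and wording below are assumptions, not the skill's actual content.

```markdown
<!-- Hypothetical condensed SKILL.md body (illustrative only) -->
## Commands

| Command | Purpose |
|---|---|
| /dashboard | Launch the TUI dashboard |
| /mcp-status | Check MCP server status |
| /costs | Show cost tracking |

For detailed tab descriptions, see TABS.md.
For debugging stopped servers, see TROUBLESHOOTING.md.
```

This keeps the always-loaded overview short while letting the agent pull in detail only when a task requires it.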
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.