Auto-generates an LLM usage monitoring page in a PM admin dashboard. Tokuin CLI-based token/cost/latency tracking + user ranking system + inactive user tracking + data-driven PM insights + Cmd+K global search + per-user drilldown navigation. Supports OpenAI/Anthropic/Gemini/OpenRouter.
- Score: 67
- Does it follow best practices? 52%
- Impact: 99%
- 1.86× average score across 3 eval scenarios
- Risk: Critical — do not install without reviewing
## Optimize this skill with Tessl

Run `npx tessl skill review --optimize ./.agent-skills/llm-monitoring-dashboard/SKILL.md`

## Privacy-safe metrics collection pipeline
| Check | Baseline | Optimized |
|---|---|---|
| No hardcoded API keys | 100% | 100% |
| dry-run default | 0% | 100% |
| JSONL output path | 100% | 100% |
| Prompt hashing | 0% | 100% |
| Prompt categorization | 100% | 100% |
| Keyword-based classification | 100% | 100% |
| User context fields | 0% | 100% |
| Latency fields | 0% | 100% |
| is_dry_run field | 0% | 62% |
| .env in .gitignore | 100% | 100% |
| Tokuin CLI usage | 0% | 100% |
| status_code field | 0% | 100% |
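The checks above describe a privacy-safe record shape: the prompt is stored only as a hash, a keyword classifier assigns a category, and each record carries user context, latency, status code, and an `is_dry_run` flag (dry-run being the default), appended as one JSON object per line. A minimal sketch of that pipeline follows — the category keywords, field names, and output path are illustrative assumptions, not the skill's actual values:

```python
import hashlib
import json
import time

# Hypothetical keyword map; the skill's real categories are not documented here.
CATEGORIES = {
    "code": ["function", "bug", "refactor"],
    "writing": ["summarize", "draft", "rewrite"],
}

def categorize_prompt(prompt: str) -> str:
    """Keyword-based classification: first category whose keyword appears."""
    lowered = prompt.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in lowered for k in keywords):
            return category
    return "other"

def build_record(prompt, user_id, team, model, latency_ms, status_code,
                 input_tokens, output_tokens, cost_usd, dry_run=True):
    """Privacy-safe record: keeps a SHA-256 hash of the prompt, never the text."""
    return {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "category": categorize_prompt(prompt),
        "user_id": user_id,        # user context fields (names assumed)
        "team": team,
        "model": model,
        "latency_ms": latency_ms,  # latency field
        "status_code": status_code,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "cost_usd": cost_usd,
        "is_dry_run": dry_run,     # dry-run is the default
    }

def append_jsonl(record, path="metrics/usage.jsonl"):
    """Append one record per line to the JSONL output path (path assumed)."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Hashing rather than storing the prompt is what makes the record safe to keep in a shared dashboard: cost, latency, and category remain queryable while the prompt text never leaves the client.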
## Dashboard design tokens and drilldown navigation

| Check | Baseline | Optimized |
|---|---|---|
| Background token --bg-base | 50% | 100% |
| CSS design token variables | 100% | 100% |
| 3-level traffic light system | 100% | 100% |
| Traffic light hex values | 0% | 100% |
| Badge elements for status | 100% | 100% |
| Monospace font for metrics | 0% | 100% |
| Tabular numeric alignment | 0% | 100% |
| User drilldown navigation | 100% | 100% |
| Auto-generated PM insights | 100% | 100% |
| Chart.js for charts | 100% | 100% |
| Colorblind-friendly series colors | 0% | 100% |
| Ranking medal colors | 0% | 100% |
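The 3-level traffic light and status badges amount to a small classifier that maps a metric to one of three levels and renders it with a CSS design-token variable. The sketch below assumes the 95% SLA figure from the next section's checks; the warning cutoff, hex colors, and `--status-*` token names are illustrative guesses, since the skill's actual values are not shown here:

```python
# Illustrative 3-level traffic light; hex colors and thresholds are assumptions.
TRAFFIC_LIGHT = {
    "ok":       "#22c55e",  # green
    "warning":  "#eab308",  # yellow
    "critical": "#ef4444",  # red
}

def sla_status(success_rate: float) -> str:
    """Map a request success rate to a traffic-light level (95% SLA assumed)."""
    if success_rate >= 0.95:
        return "ok"
    if success_rate >= 0.90:
        return "warning"
    return "critical"

def badge(success_rate: float) -> str:
    """Render an HTML status badge; the CSS variable name is hypothetical,
    with the hex value as a fallback."""
    level = sla_status(success_rate)
    color = TRAFFIC_LIGHT[level]
    return (f'<span class="badge" '
            f'style="background: var(--status-{level}, {color})">{level}</span>')
```

Routing the color through a `var(--status-…, fallback)` expression keeps the badge consistent with the dashboard's design tokens while still degrading gracefully if a token is missing.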
## PM report automation and cost alerting

| Check | Baseline | Optimized |
|---|---|---|
| safety-guard.sh exists | 87% | 100% |
| PM report Markdown format | 100% | 100% |
| User ranking by cost | 100% | 100% |
| Inactive user section | 100% | 100% |
| Adoption rate metric | 50% | 100% |
| Report filename pattern | 25% | 100% |
| 95% SLA success rate check | 12% | 100% |
| Configurable cost threshold | 70% | 100% |
| Alert exit code 1 | 100% | 100% |
| Slack webhook from env var | 100% | 100% |
| Weekly report cron Monday 9am | 0% | 100% |
| Cost alert cron hourly | 0% | 100% |
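The alerting checks above describe a concrete flow: sum spend from the JSONL metrics, compare against a configurable threshold, notify a Slack webhook taken from an environment variable, and exit with code 1 so cron or CI can react. A hedged sketch, assuming the field names from the collection pipeline and hypothetical env-var names (`COST_THRESHOLD_USD`, `SLACK_WEBHOOK_URL`):

```python
import json
import os
import urllib.request

# Cron schedules named in the checks (standard crontab syntax):
#   weekly PM report, Monday 9am:  0 9 * * 1
#   cost alert, hourly:            0 * * * *

def total_cost(jsonl_path: str) -> float:
    """Sum cost_usd across non-dry-run records in the JSONL metrics file."""
    total = 0.0
    with open(jsonl_path) as f:
        for line in f:
            rec = json.loads(line)
            if not rec.get("is_dry_run"):
                total += rec.get("cost_usd", 0.0)
    return total

def main() -> int:
    """Return 0 when under the threshold, 1 when an alert fired.
    A cron entry point would wrap this as sys.exit(main())."""
    # Threshold is configurable; the env-var name is an assumption.
    threshold = float(os.environ.get("COST_THRESHOLD_USD", "100"))
    cost = total_cost("metrics/usage.jsonl")
    if cost <= threshold:
        return 0
    webhook = os.environ.get("SLACK_WEBHOOK_URL")  # never hardcoded
    if webhook:
        body = json.dumps(
            {"text": f"LLM cost ${cost:.2f} exceeds ${threshold:.2f}"}
        ).encode()
        req = urllib.request.Request(
            webhook, data=body, headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
    return 1  # non-zero exit so cron/CI can catch the breach
```

Exiting non-zero (rather than only posting to Slack) is what makes the hourly cron job composable: any wrapper — including something like the `safety-guard.sh` named in the checks — can gate further spending on the exit code alone.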
c033769