Track and analyze Cursor usage metrics via admin dashboard: requests, model usage, team productivity, and cost optimization. Triggers on "cursor analytics", "cursor usage", "cursor metrics", "cursor reporting", "cursor dashboard", "cursor ROI".
Eval scenarios: Pending (no eval scenarios have been run).
Known issues: None.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/saas-packs/cursor-pack/skills/cursor-usage-analytics/SKILL.md`

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates specific capabilities (tracking requests, model usage, team productivity, cost optimization), identifies the tool context (Cursor admin dashboard), and provides explicit trigger terms. The description is concise, uses third person voice, and would be easily distinguishable from other skills in a large collection.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: track and analyze usage metrics, requests, model usage, team productivity, and cost optimization, all via the admin dashboard. | 3 / 3 |
| Completeness | Clearly answers both 'what' (track and analyze Cursor usage metrics including requests, model usage, team productivity, cost optimization) and 'when' (explicit trigger terms listed with 'Triggers on' clause). | 3 / 3 |
| Trigger Term Quality | Includes explicit, natural trigger terms that users would say: 'cursor analytics', 'cursor usage', 'cursor metrics', 'cursor reporting', 'cursor dashboard', 'cursor ROI'. These are specific and cover common variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Cursor-specific usage analytics and admin dashboard metrics. The trigger terms are all prefixed with 'cursor', making conflicts with generic analytics or other tool skills unlikely. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a business consulting document than an actionable skill for Claude. It's heavily padded with generic advice (ROI calculations, adoption strategies, stakeholder reporting templates) that Claude already knows how to produce. The ASCII dashboard mockup consumes significant tokens without adding actionable value, and the lack of executable code or API interactions limits what Claude can actually do with this information.
Suggestions
Remove the ASCII dashboard mockup and generic business content (ROI formulas, report templates) that Claude can generate on its own—focus only on Cursor-specific information Claude wouldn't know.
Add concrete, executable actions: specific API endpoints for pulling usage data (if available), exact dashboard navigation paths, or scripts for aggregating metrics.
Split the optimization playbooks and reporting template into separate referenced files to reduce the main skill's token footprint.
Add validation steps to workflows—e.g., 'After enabling Auto mode, check usage dashboard after 1 week to verify fast request consumption decreased by X%.'
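The second suggestion above could be made concrete with a small aggregation script. The sketch below is illustrative only: the endpoint URL, the basic-auth scheme, and the response field names (`data`, `totalRequests`, `activeUsers`) are assumptions and must be verified against Cursor's Admin API documentation before use.

```python
import base64
import json
import urllib.request

# ASSUMED endpoint; confirm the real path in Cursor's Admin API docs.
API_URL = "https://api.cursor.com/teams/daily-usage-data"


def fetch_usage(api_key: str, start_ms: int, end_ms: int) -> dict:
    """POST a millisecond date range and return the parsed JSON payload."""
    body = json.dumps({"startDate": start_ms, "endDate": end_ms}).encode()
    req = urllib.request.Request(API_URL, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    # ASSUMED auth scheme: basic auth with the admin API key as username.
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def summarize(payload: dict) -> dict:
    """Aggregate per-day records into simple totals (field names assumed)."""
    days = payload.get("data", [])
    return {
        "days": len(days),
        "total_requests": sum(d.get("totalRequests", 0) for d in days),
        "peak_active_users": max((d.get("activeUsers", 0) for d in days), default=0),
    }
```

Keeping the aggregation in a pure function like `summarize` also gives the skill the validation hook suggested above: run it before and after a change (e.g. enabling Auto mode) and compare the totals.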
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~150+ lines, with significant padding. The ASCII dashboard mockup adds no actionable value. ROI calculations, report templates, and adoption playbooks are generic business advice Claude already knows. Much of this content (what metrics mean, how to calculate ROI, stakeholder reporting templates) doesn't teach Claude anything new. | 1 / 3 |
| Actionability | The skill provides some concrete guidance (specific URLs, metric thresholds, quota numbers) but lacks executable code or commands. The 'strategies' and 'playbooks' are mostly general advice rather than specific instructions Claude can execute. There are no API calls, scripts, or tool invocations—just organizational recommendations. | 2 / 3 |
| Workflow Clarity | The optimization playbooks provide numbered steps for different scenarios (underutilized, overutilized, inconsistent), which is decent sequencing. However, there are no validation checkpoints or feedback loops—no way to verify whether actions taken actually improved metrics. The workflows are more like checklists of suggestions than validated processes. | 2 / 3 |
| Progressive Disclosure | The content is organized with clear headers and sections, and links to external resources at the bottom. However, the skill is monolithic—the report template, ROI calculation, and optimization playbooks could each be separate referenced files. Everything is inline in one large document rather than appropriately split. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
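Both warnings are fixable in the skill's frontmatter. A hedged sketch of what a cleaned-up version might look like, assuming a typical SKILL.md layout; the `custom-owner` key and the specific tool names are hypothetical placeholders, not taken from the actual skill:

```yaml
---
name: cursor-usage-analytics
description: Track and analyze Cursor usage metrics via admin dashboard...
# Keep only tool names the host platform recognizes, to clear
# the allowed_tools_field warning:
allowed-tools: [Read, WebFetch]
# Unknown top-level keys trigger frontmatter_unknown_keys; nest them
# under metadata instead of leaving them at the top level:
metadata:
  custom-owner: platform-team
---
```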