tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill cursor-usage-analytics

Track and analyze Cursor usage metrics. Triggers on "cursor analytics", "cursor usage", "cursor metrics", "cursor reporting", "cursor dashboard". Use when working with cursor usage analytics functionality. Trigger with phrases like "cursor usage analytics", "cursor analytics", "cursor".
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation
35%

This skill provides a high-level outline for Cursor analytics but lacks the concrete, actionable guidance needed for effective execution. The instructions read more like a checklist of concepts than executable steps, with no specific UI paths, API endpoints, or example queries. The structure is reasonable but the content is too abstract to be immediately useful.
Suggestions
Add specific UI navigation paths (e.g., 'Click Settings > Organization > Analytics tab') or API endpoints with example requests/responses
Include concrete examples of metrics interpretation, such as 'If completion rate < 60%, check X' or sample dashboard screenshots/data
Add validation steps like 'Verify data accuracy by cross-referencing with billing reports' or 'Confirm report delivery by checking recipient inbox'
Replace vague instructions like 'Identify trends and anomalies' with specific queries or filters to apply in the dashboard
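As a sketch of the kind of concrete metrics-interpretation rule suggested above ("If completion rate < 60%, check X"), the snippet below flags users whose suggestion-acceptance rate falls under a threshold. The record fields, the threshold value, and the data shape are all hypothetical; Cursor's actual analytics export may differ.

```python
# Hypothetical usage records; Cursor's real export format may differ.
records = [
    {"user": "alice", "suggestions_shown": 200, "suggestions_accepted": 150},
    {"user": "bob", "suggestions_shown": 180, "suggestions_accepted": 90},
]

COMPLETION_RATE_THRESHOLD = 0.60  # illustrative cutoff, not an official value


def completion_rate(record):
    """Fraction of shown suggestions that were accepted (0.0 if none shown)."""
    if record["suggestions_shown"] == 0:
        return 0.0
    return record["suggestions_accepted"] / record["suggestions_shown"]


for r in records:
    rate = completion_rate(r)
    if rate < COMPLETION_RATE_THRESHOLD:
        # Concrete next step instead of a vague "identify anomalies"
        print(f"{r['user']}: completion rate {rate:.0%} is below threshold; "
              "review model settings and cross-reference with billing reports")
```

A rule like this gives the skill a checkable success criterion rather than an open-ended instruction.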
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is relatively brief but includes some unnecessary padding like 'This skill helps you' and generic prerequisites that Claude would understand. The overview could be more direct. | 2 / 3 |
| Actionability | Instructions are vague and abstract ('Review key metrics', 'Identify trends') with no concrete commands, code, API calls, or specific examples of what to click or how to extract data. Describes rather than instructs. | 1 / 3 |
| Workflow Clarity | Steps are listed in sequence but lack validation checkpoints, specific success criteria, or feedback loops. No guidance on what to do if metrics look wrong or how to verify report accuracy. | 2 / 3 |
| Progressive Disclosure | References external files for errors and examples (good), but the main content is thin and the references use placeholder paths. The structure exists but the signaling could be clearer about what each reference contains. | 2 / 3 |
| Total | 7 / 12 Passed | |
Activation
40%

This description suffers from vague capability statements and circular 'when to use' guidance that doesn't help Claude distinguish this skill from others. The inclusion of 'cursor' as a standalone trigger term is particularly problematic as it would cause false matches. The description needs concrete actions and more thoughtful trigger conditions.
Suggestions
Replace vague 'track and analyze' with specific actions like 'Generate usage reports, visualize coding time trends, calculate productivity metrics, export session data'
Remove the overly generic 'cursor' trigger and add natural phrases users would say like 'how much have I used Cursor', 'Cursor stats', 'my coding time in Cursor'
Rewrite the 'Use when' clause to be non-circular, e.g., 'Use when the user wants to understand their Cursor IDE usage patterns, generate productivity reports, or review coding session history'
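Pulling the three suggestions above together, a revised description might look like the following frontmatter sketch. The wording is illustrative, not the maintainer's, and the exact field names assume the SKILL.md frontmatter format implied by the validation section.

```yaml
---
name: cursor-usage-analytics
description: >
  Generate Cursor usage reports, visualize coding-time trends, calculate
  productivity metrics, and export session data. Use when the user wants to
  understand their Cursor IDE usage patterns, generate productivity reports,
  or review coding session history. Triggers on "cursor analytics",
  "cursor usage", "cursor metrics", "cursor stats", "my coding time in
  Cursor", "how much have I used Cursor".
---
```

Note the standalone "cursor" trigger is gone, the capability verbs are concrete, and the 'Use when' clause no longer restates the title.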
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'track and analyze' and 'usage metrics' without specifying concrete actions. It doesn't explain what tracking or analyzing actually involves (e.g., generate reports, visualize trends, export data). | 1 / 3 |
| Completeness | Has a 'Use when' clause but it's circular and uninformative ('Use when working with cursor usage analytics functionality'). The 'what' is weak (just 'track and analyze') and the 'when' doesn't add meaningful guidance beyond restating the title. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'cursor analytics', 'cursor usage', 'cursor metrics', but the standalone trigger 'cursor' is overly generic and would cause false matches. Missing natural variations users might say like 'how much have I used cursor' or 'cursor stats'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The trigger term 'cursor' alone is highly problematic and would conflict with any cursor-related skill. The more specific terms like 'cursor analytics' and 'cursor metrics' provide some distinctiveness, but the generic trigger undermines this. | 2 / 3 |
| Total | 7 / 12 Passed | |
Reviewed