Show session analytics, learning patterns, correction trends, heatmaps, and productivity metrics. Computes stats from project memory and session history. Use when asking for stats, statistics, progress, how am I doing, coding history, or dashboard.
Score: 86

Quality: 78%. Does it follow best practices?
Impact: 99%. 1.02x average score across 3 eval scenarios. Passed.
No known issues.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/insights/SKILL.md`

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly communicates specific capabilities, uses natural trigger terms users would actually say, and explicitly addresses both what the skill does and when to use it. The description is concise yet comprehensive, covering multiple concrete outputs and data sources while maintaining a distinct identity that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'session analytics, learning patterns, correction trends, heatmaps, and productivity metrics' along with the method 'Computes stats from project memory and session history.' | 3 / 3 |
| Completeness | Clearly answers both 'what' (show session analytics, learning patterns, correction trends, heatmaps, productivity metrics; computes stats from project memory and session history) and 'when' (explicit 'Use when' clause with multiple trigger phrases). | 3 / 3 |
| Trigger Term Quality | Includes a strong set of natural keywords users would actually say: 'stats', 'statistics', 'progress', 'how am I doing', 'coding history', 'dashboard'. These cover both formal and casual phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of session analytics, learning patterns, correction trends, heatmaps, and productivity metrics creates a clear niche. The triggers like 'dashboard', 'how am I doing', and 'coding history' are distinctive and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a clear vision of what analytics output should look like with well-formatted example outputs, and the data source section gives concrete commands. However, it lacks the computational logic connecting raw data to the displayed metrics — how to count corrections, track application frequency, or compute correction rates is left entirely to inference. The skill reads more like a specification/mockup than an executable guide.
Suggestions
- Add concrete logic or pseudocode for computing key metrics (correction counting, learning application tracking, category classification) from the raw data sources — the gap between 'cat these files' and 'display these formatted stats' is too large.
- Include guidance on handling missing or incomplete data (e.g., no git history, no LEARNED.md, first session with no historical data) to make the skill robust across different project states.
- Remove the 'Output' section or merge it into 'What It Shows' since they convey the same information, improving conciseness.
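To make the first two suggestions concrete, here is a minimal sketch of correction counting with a missing-file fallback. It assumes corrections are logged in LEARNED.md as markdown checkboxes, checked once the learning has been applied; that format is a hypothetical stand-in, not something the skill itself specifies.

```python
import re
from pathlib import Path

def correction_stats(learned_path="LEARNED.md"):
    """Count corrections and compute an application rate from LEARNED.md.

    Assumed (hypothetical) format: one correction per '- [ ]' line,
    marked '- [x]' once applied. Returns a rate of None when there is
    no data, rather than crashing on a fresh project.
    """
    path = Path(learned_path)
    if not path.exists():  # missing-data fallback (second suggestion)
        return {"corrections": 0, "applied": 0, "rate": None}
    lines = path.read_text().splitlines()
    corrections = [l for l in lines if re.match(r"-\s*\[[ x]\]", l)]
    applied = [l for l in corrections if re.match(r"-\s*\[x\]", l)]
    total = len(corrections)
    return {
        "corrections": total,
        "applied": len(applied),
        "rate": len(applied) / total if total else None,
    }
```

Even a sketch at this level of detail would close the gap the review identifies between "cat these files" and the formatted stats the skill promises.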
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately efficient but includes some redundancy — the 'Output' section at the end largely restates what the 'What It Shows' section already demonstrates through examples. The example output blocks are useful but could be slightly tighter. | 2 / 3 |
| Actionability | The data gathering commands are concrete and executable, but the core analytics computation is entirely implicit — there's no actual code or logic for counting corrections, computing percentages, tracking application counts, or generating the heatmap. The output examples show what to display but not how to compute it from the raw data. | 2 / 3 |
| Workflow Clarity | There's a logical flow (gather data → compute metrics → display), but the steps aren't explicitly sequenced. The skill jumps from data sources directly to output format without describing the computation/aggregation step. No validation checkpoints exist for verifying data availability or handling missing sources beyond the fallback in the bash commands. | 2 / 3 |
| Progressive Disclosure | For a skill of this size (~90 lines), the content is well-organized into clear sections (Trigger, Data Sources, What It Shows with subsections, Guardrails, Output). No external references are needed and the structure supports easy scanning. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
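For the heatmap in particular, the computation the Actionability row finds missing could look like the sketch below, which buckets git commit timestamps into weekday/hour cells. Reading git history matches the skill's stated data sources, but the exact bucketing is an assumption, and the function degrades to an empty map when no repository or git binary is available.

```python
import subprocess
from collections import Counter
from datetime import datetime

def commit_heatmap(repo="."):
    """Bucket commit timestamps into (weekday, hour) cells.

    Falls back to an empty Counter when there is no git history or no
    git executable, so a first session never crashes the dashboard.
    """
    try:
        out = subprocess.run(
            ["git", "-C", repo, "log", "--format=%at"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return Counter()  # no repo / no git: empty heatmap, not an error
    cells = Counter()
    for ts in out:
        dt = datetime.fromtimestamp(int(ts))
        cells[(dt.strftime("%a"), dt.hour)] += 1
    return cells
```

The resulting `Counter` maps cells like `("Mon", 14)` to commit counts, which is enough to render the kind of activity grid the skill's example output shows.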
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Skill structure validation: 11 / 11 passed. No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.