
insights

Show session analytics, learning patterns, correction trends, heatmaps, and productivity metrics. Computes stats from project memory and session history. Use when asking for stats, statistics, progress, how am I doing, coding history, or dashboard.

Overall score: 83 (1.02x)

Quality: 75%
Does it follow best practices?

Impact: 99% (1.02x)
Average score across 3 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/insights/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly communicates its purpose, lists specific capabilities, and provides explicit trigger guidance. It uses third person voice correctly, covers natural user phrasings, and occupies a distinct niche around personal coding analytics and progress tracking. Minor improvement could include mentioning specific output formats (e.g., charts, tables) but overall this is strong.

Dimension scores:

- Specificity: 3/3. Lists multiple specific concrete actions: 'session analytics, learning patterns, correction trends, heatmaps, and productivity metrics' along with the method 'Computes stats from project memory and session history.'
- Completeness: 3/3. Clearly answers both 'what' (show session analytics, learning patterns, correction trends, heatmaps, productivity metrics; computes stats from project memory and session history) and 'when' (an explicit 'Use when' clause with multiple trigger phrases).
- Trigger Term Quality: 3/3. Includes a strong set of natural keywords users would actually say: 'stats', 'statistics', 'progress', 'how am I doing', 'coding history', 'dashboard'. These cover both formal and casual phrasings.
- Distinctiveness / Conflict Risk: 3/3. The combination of session analytics, learning patterns, correction trends, heatmaps, and productivity metrics creates a clear niche. Triggers like 'dashboard', 'how am I doing', and 'coding history' are distinctive and unlikely to conflict with other skills.

Total: 12 / 12

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill effectively communicates what the analytics output should look like through detailed examples, and provides concrete data-gathering commands. However, it falls short on actionability because it lacks the actual computation logic—Claude would need to infer how to count corrections, calculate rates, determine staleness thresholds, and generate the heatmap from raw data. The skill reads more like a product spec than an executable instruction set.

Suggestions

- Add concrete computation logic: how to parse LEARNED.md entries, count corrections (regex patterns or markers to look for), calculate correction rates, and determine staleness thresholds (e.g., 'stale = no application in 30+ days').
- Include error handling for missing data sources: what to show when git log is empty, LEARNED.md doesn't exist, or there's no session history to compute productivity metrics from.
- Replace or supplement the lengthy output mockups with a concise output schema/template, moving the detailed examples to a separate reference file if needed.
- Add explicit fallback behavior: what constitutes a 'correction' marker in session history, and how to handle first-session scenarios where there's no historical data for trends.
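The first, second, and fourth suggestions can be sketched in a few lines. A minimal Python sketch, assuming a hypothetical LEARNED.md format where each entry is a dated heading containing a "Correction:" marker; the entry format, the marker, and the 30-day staleness threshold are all illustrative assumptions, not taken from the skill:

```python
import re
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical LEARNED.md entry format (an assumption, not the skill's spec):
#   ## 2024-05-01 - Correction: prefer pathlib over os.path
CORRECTION_RE = re.compile(r"^##\s+(\d{4}-\d{2}-\d{2})\s+.*Correction:", re.MULTILINE)

def correction_stats(learned_path="LEARNED.md", stale_days=30, today=None):
    """Count correction entries and flag stale ones (no application in stale_days)."""
    path = Path(learned_path)
    if not path.exists():  # missing data source: return empty stats, don't crash
        return {"corrections": 0, "stale": 0}
    today = today or datetime.now()
    cutoff = today - timedelta(days=stale_days)
    dates = [datetime.strptime(d, "%Y-%m-%d")
             for d in CORRECTION_RE.findall(path.read_text())]
    return {
        "corrections": len(dates),
        "stale": sum(1 for d in dates if d < cutoff),
    }
```

Pinning down the marker regex and the staleness cutoff in the skill itself is exactly what would move the Actionability score up: the agent no longer has to invent the counting rules per session.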

Dimension scores:

- Conciseness: 2/3. The skill is moderately efficient but includes verbose example output blocks that are largely illustrative rather than instructive. The ASCII bar charts and formatted output examples take significant space, though they do serve to define the expected output format.
- Actionability: 2/3. The data gathering commands are concrete and executable, but the core computation logic (how to count corrections, calculate percentages, determine 'stale' learnings, compute productivity metrics) is entirely absent. The skill shows what the output should look like but doesn't provide the logic to get there; it's more of a mockup than executable guidance.
- Workflow Clarity: 2/3. The skill has a clear sequence (gather data → compute metrics → display), but lacks explicit steps for the computation phase. There's no validation or error handling for cases like missing git history, empty learning logs, or no session data. The 'Data Sources' section shows gathering but doesn't sequence the full workflow.
- Progressive Disclosure: 2/3. The content is reasonably organized with clear sections (Data Sources, What It Shows, Guardrails, Output), but the extensive output examples make it somewhat monolithic. The four analytics subsections could potentially be split into separate reference files, and there are no cross-references to related skills or documentation.

Total: 8 / 12

Passed
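The gather → compute → display sequence noted under Workflow Clarity, with the missing-data fallbacks the review asks for, might look like this minimal sketch. The git log invocation and the bar-chart format are illustrative assumptions, not the skill's actual output spec:

```python
import subprocess

def gather_commit_counts(days=7):
    """Gather step: commits per day from git log; degrade gracefully without a repo."""
    try:
        out = subprocess.run(
            ["git", "log", f"--since={days} days ago", "--format=%ad", "--date=short"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
    except (OSError, subprocess.CalledProcessError):
        return {}  # git missing or not a repository: empty history, no crash
    counts = {}
    for day in out:
        counts[day] = counts.get(day, 0) + 1
    return counts

def render_bars(counts, width=20):
    """Display step: one ASCII bar per day, scaled to the busiest day."""
    if not counts:  # first-session fallback: no history yet
        return ["No activity recorded yet."]
    peak = max(counts.values())
    return [f"{day}  {'#' * max(1, round(n / peak * width))}  {n}"
            for day, n in sorted(counts.items())]
```

For example, `render_bars(gather_commit_counts())` either prints a scaled bar per day or the explicit "No activity recorded yet." fallback, which is the kind of first-session behavior the suggestions say the skill should spell out.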

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

All 11 validation checks passed.

Validation for skill structure: no warnings or errors.

Repository: rohitg00/pro-workflow (Reviewed)
