
specstory-yak

Analyze your SpecStory AI coding sessions in .specstory/history for yak shaving - when your initial goal got derailed into rabbit holes. Run when user says "analyze my yak shaving", "check for rabbit holes", "how distracted was I", or "yak shave score".

Install with Tessl CLI

npx tessl i github:specstoryai/agent-skills --skill specstory-yak

Does it follow best practices?


Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger terms and completeness. The explicit 'Run when' clause with multiple natural phrases makes it easy for Claude to know when to select this skill. The main weakness is that the description could be more specific about what the analysis actually produces beyond just 'analyze'.

Suggestions

Add 1-2 more specific actions describing what the analysis produces (e.g., 'generates distraction reports', 'calculates focus metrics', 'identifies goal drift patterns')

Dimension / Reasoning / Score

Specificity

Names the domain (SpecStory AI coding sessions) and the core action (analyze for yak shaving/rabbit holes), but doesn't list multiple concrete actions beyond 'analyze'. Could specify what the analysis produces (e.g., reports, metrics, recommendations).

2 / 3

Completeness

Clearly answers both what (analyze SpecStory sessions for yak shaving/rabbit holes) and when (explicit 'Run when user says...' clause with multiple trigger phrases). The when clause is explicit and comprehensive.

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger phrases users would say: 'analyze my yak shaving', 'check for rabbit holes', 'how distracted was I', 'yak shave score'. These are conversational and varied.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with specific niche: SpecStory AI sessions in .specstory/history directory, yak shaving analysis. The unique terminology ('yak shaving', 'yak shave score') and specific file path make conflicts with other skills very unlikely.

3 / 3

Total: 11 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides excellent actionability with concrete CLI examples and clear argument documentation. However, it over-explains the yak shaving concept (which Claude knows), includes verbose output examples, and could benefit from splitting detailed reference material into separate files. The workflow lacks explicit validation steps for error handling.

Suggestions

Remove or drastically shorten the 'What Is Yak Shaving?' section - Claude understands this concept

Add error handling guidance: what happens if .specstory/history doesn't exist or is empty, and how to recover

Move the detailed 'Scoring Methodology' and 'LLM Summary Guidelines' sections to separate reference files (e.g., SCORING.md, SUMMARY_GUIDELINES.md) and link to them
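The error-handling suggestion above could be addressed with a small guard run before analysis starts. A minimal sketch, assuming the skill reads Markdown session files from .specstory/history; the function name, file glob, and error messages are illustrative assumptions, not the skill's actual code:

```python
# Hypothetical pre-flight check: fail with a clear, recoverable message
# when .specstory/history is missing or empty, instead of crashing mid-run.
from pathlib import Path

def find_sessions(root: str = ".") -> list[Path]:
    history = Path(root) / ".specstory" / "history"
    if not history.is_dir():
        raise FileNotFoundError(
            f"No SpecStory history found at {history}. "
            "Install SpecStory and record some sessions first."
        )
    sessions = sorted(history.glob("*.md"))
    if not sessions:
        raise ValueError(
            f"{history} exists but contains no session files to analyze."
        )
    return sessions
```

Surfacing these two cases separately lets the agent tell the user whether SpecStory is absent entirely or simply has no sessions yet, which maps directly to the "how to recover" part of the suggestion.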

Dimension / Reasoning / Score

Conciseness

The skill includes some unnecessary explanation (the 'What Is Yak Shaving?' section explains a concept Claude already knows) and the output example is quite lengthy. However, the CLI documentation and scoring methodology are appropriately detailed without excessive padding.

2 / 3

Actionability

Provides fully executable CLI commands with clear argument documentation, concrete examples for both slash command interpretation and direct script usage, and specific output format examples. The natural language to args mapping table is particularly actionable.

3 / 3
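The natural-language-to-args mapping praised here can be pictured as a simple lookup table. A sketch only: the trigger phrases come from the skill's description, but the flag names and the fallback behavior are assumptions about how such a mapping might work:

```python
# Hypothetical mapping from user phrases to CLI arguments.
# Phrases are from the skill description; flags are invented for illustration.
PHRASE_TO_ARGS = {
    "analyze my yak shaving": ["--full-report"],
    "check for rabbit holes": ["--rabbit-holes"],
    "how distracted was i": ["--distraction-summary"],
    "yak shave score": ["--score-only"],
}

def args_for(utterance: str) -> list[str]:
    # Normalize casing and whitespace, then fall back to a full report
    # when the phrase is not an exact match.
    key = utterance.lower().strip()
    return PHRASE_TO_ARGS.get(key, ["--full-report"])
```

An exact-match table like this is what makes the documentation "particularly actionable": the agent can translate a trigger phrase into script arguments without guessing.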

Workflow Clarity

The 'How It Works' section outlines the process steps but lacks validation checkpoints. The 'Present Results to User' section has clear guidelines, but the overall workflow for running the analysis and handling errors is implicit rather than spelled out with explicit feedback loops.

2 / 3

Progressive Disclosure

Content is reasonably organized with clear sections, but everything is in one file when the detailed scoring methodology, example outputs, and LLM summary guidelines could be split into separate reference files. The skill is somewhat long for a single SKILL.md.

2 / 3

Total: 9 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

