Investigates a problem area in the codebase and finds or creates a tessl tile (rules, docs, skills) to teach agents how to handle it correctly. Use when agents keep making the same mistakes around a library, design pattern, or convention.
71%

Does it follow best practices?

Impact: — No eval scenarios have been run.

Advisory: Suggest reviewing before use.
Quality
Discovery
75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is well-structured with a clear 'what' and 'when' clause, making it functionally complete. Its main weakness is moderate specificity—the actions described are somewhat abstract ('investigates', 'finds or creates') rather than listing concrete steps. Trigger terms cover the domain but could include more natural user phrasings for recurring problems.
Suggestions
- Add more concrete action verbs to improve specificity, e.g., 'Analyzes error patterns, searches existing tiles, and drafts new rules/docs/skills to prevent recurring agent mistakes.' A sketch of a revised description follows this list.
- Expand trigger terms with natural user phrasings like 'recurring errors', 'repeated mistakes', 'teach Claude', 'codify conventions', or 'best practices documentation'.
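To make both suggestions concrete, a sharpened description might look like the frontmatter below. This is a hypothetical sketch: the skill name and field layout assume the common SKILL.md frontmatter convention and are not taken from the skill itself.

```markdown
---
# Hypothetical frontmatter; the name and exact wording are illustrative only.
name: tile-investigator
description: >
  Analyzes recurring agent errors, searches existing tessl tiles, and
  drafts new rules/docs/skills to prevent repeated mistakes. Use when
  agents keep making the same mistakes around a library, design pattern,
  or convention, or when you want to codify conventions, document best
  practices, or teach agents how to handle a problem area correctly.
---
```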
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (codebase investigation, tessl tiles) and some actions ('investigates a problem area', 'finds or creates a tessl tile'), but the actions are somewhat vague—'investigates' and 'finds or creates' are not highly concrete, and the parenthetical '(rules, docs, skills)' only partially clarifies what tiles are. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (investigates a problem area and finds or creates a tessl tile to teach agents) and 'when' (explicitly states 'Use when agents keep making the same mistakes around a library, design pattern, or convention'). | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'tessl tile', 'rules', 'docs', 'skills', 'library', 'design pattern', 'convention', and 'agents keep making the same mistakes'. However, it misses common natural variations a user might say, such as 'recurring errors', 'repeated issues', 'teach Claude', 'codify knowledge', or 'best practices'. | 2 / 3 |
| Distinctiveness / Conflict Risk | This skill has a clear and distinctive niche—creating tessl tiles to address recurring agent mistakes. The combination of 'tessl tile' creation and 'agents making repeated mistakes' is specific enough to be unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level workflow for investigating codebase problems and creating or finding tessl tiles, with clear phasing and decision points. However, it suffers from moderate verbosity, limited concrete executable guidance (especially in Phases 1 and 3, which are largely conversational or delegated), and missing error-recovery feedback loops. The heavy delegation to `tile-creator` in Phase 3 means the skill is incomplete on its own for the creation path.
Suggestions
- Add error handling and feedback loops for key operations — what should happen if `tessl search` returns no results or errors, if `tessl install` fails, or if `tessl status` shows unexpected state?
- Trim the explanatory preamble about tile types (rules, docs, skills) — this context is better suited for the `tile-creator` skill itself and wastes tokens here, since Claude can infer it from usage.
- Make Phase 1 more actionable by providing a concrete interview template or checklist format rather than prose descriptions of what to ask.
- Add a concrete example of a complete workflow execution (e.g., 'User reports Drizzle migration issues → search queries → found tile → install → verify') to make the end-to-end process tangible; a sketch combining this with the error-handling suggestion follows this list.
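The error-handling and worked-example suggestions could be folded into one short section of the skill. The sketch below expands the Drizzle scenario from the last suggestion; only the `tessl search`, `tessl install`, and `tessl status` subcommands come from the skill itself, so the argument syntax and the specific failure branches are assumptions.

```markdown
## Example: recurring Drizzle migration mistakes

1. Interview: confirm the failure mode (e.g., agents hand-writing SQL
   migrations), where it occurs, and the convention the team wants.
2. Search: `tessl search "drizzle migrations"` (argument syntax assumed).
   - No results or an error: broaden the query once, then fall through
     to the creation path instead of retrying indefinitely.
3. Install: `tessl install <tile>` for the best match.
   - On failure: surface the error to the user and offer the creation
     path rather than continuing silently.
4. Verify: `tessl status` should list the new tile; if it does not,
   re-run the install once and report the discrepancy.
```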
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some unnecessary explanation (e.g., defining what steering rules, docs, and skills are — Claude likely knows this from context). The phase descriptions could be tighter, and some bullet points restate obvious points. | 2 / 3 |
| Actionability | The skill provides a clear multi-phase process with some concrete commands (`tessl install`, `tessl status`), but Phase 1 is entirely conversational guidance rather than executable steps, and Phase 3 delegates entirely to another skill (`tile-creator`) rather than providing concrete authoring instructions. Key details about MCP tool invocation are vague. | 2 / 3 |
| Workflow Clarity | The four phases are clearly sequenced and logically ordered, with decision points (install existing vs. create new). However, validation is minimal — only Phase 4 has a verification step, and there are no feedback loops for error recovery (e.g., what if `tessl search` fails, what if install fails, what if `tile-creator` produces incorrect output). | 2 / 3 |
| Progressive Disclosure | The content is well-organized into phases with clear headers, but it's somewhat monolithic — the detailed search evaluation criteria and interview questions could be referenced separately. The delegation to `tile-creator` in Phase 3 is a form of progressive disclosure but feels abrupt rather than well-signaled with context about what to expect. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed

Validation for skill structure: no warnings or errors.