
ai-development-guide

Technical decision criteria, anti-pattern detection, debugging techniques, and quality check workflow. Use when making technical decisions, detecting code smells, or performing quality assurance.

69

Quality: 61% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/ai-development-guide/SKILL.md

Quality

Discovery: 59%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has good structural completeness with an explicit 'Use when' clause, but suffers from being too broad and abstract. It tries to cover multiple distinct concerns (decision-making, anti-patterns, debugging, QA) without enough specificity in any one area, creating high conflict risk with other skills. The trigger terms are reasonable but not comprehensive enough for the wide scope claimed.

Suggestions

Narrow the scope or clearly delineate what makes this skill distinct from general code review, debugging, or architecture skills — e.g., specify it's about applying a particular framework or checklist.

Add more natural trigger term variations in the 'Use when' clause, such as 'code review', 'tech debt', 'refactoring', 'best practices', 'architecture decisions' to improve discoverability.

Replace abstract category names like 'technical decision criteria' and 'debugging techniques' with concrete actions, e.g., 'Evaluates trade-offs between implementation approaches, identifies anti-patterns like god objects or circular dependencies, runs a structured quality checklist before merging.'
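The last suggestion above can be made concrete. A tightened frontmatter description along these lines (illustrative wording only, assuming the standard SKILL.md frontmatter layout; this is not the maintainer's actual text) would name actions instead of categories:

```yaml
# Hypothetical SKILL.md frontmatter rewrite -- not the published description
name: ai-development-guide
description: >
  Evaluates trade-offs between implementation approaches, flags anti-patterns
  such as god objects and circular dependencies, and runs a structured quality
  checklist before merging. Use when doing code review, weighing architecture
  decisions, refactoring tech debt, or applying best practices.
```

Each clause here maps to a user-visible action, and the 'Use when' list carries the natural trigger terms ('code review', 'tech debt', 'refactoring') flagged as missing above.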

Dimension | Reasoning | Score

Specificity

Names a domain (technical decisions, debugging, quality) and some actions (anti-pattern detection, debugging techniques, quality check workflow), but these are still fairly abstract categories rather than concrete specific actions like 'extract text from PDF' or 'generate commit messages'.

2 / 3

Completeness

Clearly answers both 'what' (technical decision criteria, anti-pattern detection, debugging techniques, quality check workflow) and 'when' with an explicit 'Use when...' clause covering three trigger scenarios.

3 / 3

Trigger Term Quality

Includes some relevant terms like 'code smells', 'quality assurance', 'debugging', and 'technical decisions' that users might say, but misses many natural variations (e.g., 'code review', 'refactor', 'best practices', 'architecture review', 'tech debt', 'linting').

2 / 3

Distinctiveness Conflict Risk

Very broad scope covering technical decisions, debugging, AND quality assurance — three areas that could easily overlap with dedicated code review skills, debugging skills, linting skills, or architecture decision skills. The description is too wide-ranging to carve out a clear niche.

1 / 3

Total: 8 / 12

Passed

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive technical decision-making and quality assurance skill with strong workflow clarity and well-structured processes. Its main weaknesses are the monolithic structure that could benefit from progressive disclosure via separate files, and the use of pseudocode rather than executable examples. Some verbosity in explaining concepts Claude already understands (DRY, SRP, Rule of Three attribution) could be trimmed.

Suggestions

Split into a concise SKILL.md overview with references to separate files for each major section (e.g., ANTI_PATTERNS.md, DEBUGGING.md, QUALITY_CHECKS.md, IMPACT_ANALYSIS.md)

Remove explanations of well-known concepts (SRP, DRY, YAGNI definitions) and just reference the pattern names—Claude already knows these

Where possible, provide language-specific executable examples rather than pseudocode patterns (e.g., a real Python or TypeScript error handling example instead of abstract '<handle error>' blocks)
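To illustrate the third suggestion: a runnable, language-specific block (a hypothetical example, not code taken from the skill itself) is far more actionable for an agent than an abstract '<handle error>' placeholder:

```python
import json
import logging

logger = logging.getLogger(__name__)

def load_config(path: str) -> dict:
    """Load a JSON config file, falling back to defaults on failure."""
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        # Missing file is expected in fresh installs: degrade gracefully.
        logger.warning("Config %s not found; using defaults", path)
        return {}
    except json.JSONDecodeError as exc:
        # Malformed config is a hard error: fail loudly rather than guess.
        raise ValueError(f"Invalid JSON in {path}: {exc}") from exc
```

Unlike pseudocode, an example like this encodes the actual decision the skill is teaching: which failures to absorb with a fallback and which to re-raise.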

Dimension | Reasoning | Score

Conciseness

The content is generally efficient but includes some explanatory text that Claude already knows (e.g., explaining what SRP means, what DRY principle is, basic concepts like 'version control'). Some sections like the anti-patterns list could be more terse, and the Martin Fowler attribution is unnecessary context.

2 / 3

Actionability

The skill provides structured guidance with pseudocode patterns and decision tables, but lacks fully executable code examples—most code blocks are pseudocode or abstract patterns rather than copy-paste ready implementations. The quality check workflow and debugging techniques are concrete but still somewhat abstract in their guidance.

2 / 3

Workflow Clarity

Multi-step processes are clearly sequenced with explicit validation checkpoints. The Quality Check Workflow has clear phases with a final quality gate, the Impact Analysis has a mandatory 3-stage process with a 'do not implement until documented' checkpoint, and the debugging section has a clear numbered procedure. Error recovery feedback loops are present in the fallback design section.

3 / 3

Progressive Disclosure

The content is well-organized with clear headers and sections, but it's a monolithic document (~250 lines) that could benefit from splitting detailed sections (e.g., debugging techniques, quality check workflow, anti-patterns) into separate referenced files. No external file references are provided despite the content covering multiple distinct topics.

2 / 3

Total: 9 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: shinpr/claude-code-workflows (Reviewed)
