
code-review

When the user asks for a code review, shares code for feedback, or says "review this", "check my code", "what's wrong with this". Also activate when reviewing a pull request or diff.

65

Quality: 57% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/code-review/SKILL.md

Quality

Discovery: 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is heavily lopsided—it excels at defining trigger conditions but entirely omits what the skill actually does. It reads more like an activation rule than a skill description. Adding concrete capabilities (e.g., 'Analyzes code for bugs, style issues, and performance problems') would dramatically improve it.

Suggestions

Add a 'what it does' clause listing specific actions, e.g., 'Analyzes code for bugs, security vulnerabilities, style issues, and performance improvements. Provides structured feedback with severity levels and suggested fixes.'

Restructure to lead with capabilities first, then triggers: 'Performs code reviews identifying bugs, style issues, and improvements. Use when the user asks for a code review, says "review this", or shares a pull request or diff.'

Specify output format or unique value proposition to distinguish from general coding assistance, e.g., 'Provides line-by-line feedback with categorized findings (bugs, style, performance).'
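Putting the three suggestions together, the restructured description might look like this in the SKILL.md frontmatter. This is an illustrative sketch, not the skill's actual content, and it assumes the standard Agent Skills frontmatter layout with `name` and `description` keys:

```markdown
---
name: code-review
description: >
  Performs code reviews identifying bugs, security vulnerabilities, style
  issues, and performance problems. Provides line-by-line feedback with
  categorized findings, severity levels, and suggested fixes. Use when the
  user asks for a code review, says "review this", "check my code", or
  "what's wrong with this", or shares a pull request or diff.
---
```

Note how the description leads with capabilities and output format, then lists the trigger phrases, addressing the Specificity and Completeness gaps while preserving the strong trigger-term coverage.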

Dimension scores:

Specificity: 1 / 3. The description lacks any concrete actions describing what the skill actually does. It only describes when to activate, not what specific actions it performs (e.g., 'identify bugs', 'suggest improvements', 'check style consistency').

Completeness: 1 / 3. The description answers 'when' thoroughly but completely fails to answer 'what does this do'. There is no explanation of the skill's capabilities or outputs, only activation triggers.

Trigger Term Quality: 3 / 3. Excellent coverage of natural trigger terms users would say: 'code review', 'review this', 'check my code', 'what's wrong with this', 'pull request', 'diff'. These are highly natural phrases users would actually use.

Distinctiveness / Conflict Risk: 2 / 3. The trigger terms are fairly specific to code review scenarios, but without describing what the skill actually does differently from a general coding assistant, there's moderate overlap risk with other code-related skills.

Total: 7 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable code review skill with a clear 5-step methodology, excellent output formatting, and good examples. Its main weakness is token efficiency — it includes substantial reference material (OWASP checklist, N+1 patterns, red flags) that Claude already knows, which could be trimmed or moved to separate reference files. The workflow is strong with clear sequencing and validation steps.

Suggestions

Move the OWASP Top 10 Quick Checks, N+1 Query Detection, and Language-Agnostic Red Flags sections to a separate REFERENCE.md file and link to it, since these are standard knowledge that Claude already possesses and they consume significant token budget.

Trim or remove explanatory text in checklist items (e.g., 'Are user inputs parameterized? Check SQL, NoSQL, OS command, LDAP') since Claude already knows these security concepts — a brief mention to 'check OWASP Top 10' would suffice.
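As a sketch of the progressive-disclosure suggestion, the inline checklists could be replaced in the SKILL.md body with a brief pointer. The file name REFERENCE.md and the section wording below are illustrative, following the suggestion above rather than the skill's actual content:

```markdown
## Security and performance checks

Check user inputs against the OWASP Top 10 and look for N+1 query
patterns and other common red flags. For the detailed checklists, see
[REFERENCE.md](REFERENCE.md); load it only when a deep security or
performance pass is needed.
```

This keeps the main skill lean while the full reference material stays available on demand, which is the intent of progressive disclosure.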

Dimension scores:

Conciseness: 2 / 3. The skill is mostly efficient and well-structured, but includes some content Claude already knows well (OWASP Top 10 descriptions, N+1 query detection patterns, language-agnostic red flags). These are standard knowledge for Claude and consume significant token budget without adding novel guidance.

Actionability: 3 / 3. The skill provides highly concrete, actionable guidance: a specific 5-step methodology, a detailed output template with finding categories and ID conventions, severity definitions with clear actions, and a complete example showing expected output format with specific file:line references and code fixes.

Workflow Clarity: 3 / 3. The 5-step workflow is clearly sequenced with explicit ordering ('each step must be completed before moving to the next'), includes a validation checkpoint at step 1 (ask before proceeding if unclear), and the output format provides a structured template that ensures completeness. The workflow handles the review process end-to-end.

Progressive Disclosure: 2 / 3. The skill references related skills (security-review, architecture-design) for chaining, which is good. However, the OWASP Top 10 checklist, N+1 query detection patterns, and language-agnostic red flags sections are substantial inline content that could be split into reference files, making the main skill leaner and better organized.

Total: 10 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure:

frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 10 / 11 (Passed)
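The frontmatter warning could likely be resolved by nesting the unrecognized key under the `metadata` block, assuming the validator follows the Agent Skills convention of allowing arbitrary keys there. The key name `category` below is hypothetical, since the review does not say which key triggered the warning:

```markdown
---
name: code-review
description: ...
metadata:
  category: productivity   # hypothetical custom key, moved out of the top level
---
```

Top-level frontmatter is typically restricted to the keys the spec defines, so anything custom belongs under `metadata`.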

Repository: shawnpang/startup-founder-skills (Reviewed)
