Suggest better variable, function, and class names based on context and conventions.
- Quality: 45% (does it follow best practices?)
- Impact: 1.21x, average score across 3 eval scenarios
- Validation: 100%, passed, no known issues
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./dist/plugins/naming-analyzer/skills/naming-analyzer/SKILL.md
```

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear capability (suggesting better names for code elements) but lacks the explicit trigger guidance required for Claude to reliably select this skill. It's moderately specific about what it names but doesn't explain when to use it or include enough natural trigger terms users would say when needing naming help.
Suggestions
- Add a 'Use when...' clause with trigger phrases like 'rename', 'what should I call', 'naming conventions', 'better name for', or 'identifier names'
- Expand trigger terms to include variations users naturally say: 'rename variable', 'method names', 'naming best practices', 'identifier', 'camelCase', 'snake_case'
- Add more specific actions: 'Analyzes code context to suggest clearer, more descriptive names following language-specific conventions (camelCase, snake_case, etc.)'
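Taken together, these suggestions might yield SKILL.md frontmatter along these lines. This is a hypothetical sketch, not wording from the skill itself; only the skill name is taken from the path above:

```markdown
---
name: naming-analyzer
description: >
  Suggests clearer, more descriptive variable, function, class, and method
  names by analyzing code context and applying language-specific conventions
  (camelCase, snake_case, PascalCase). Use when the user asks to rename an
  identifier, asks "what should I call this", or mentions naming conventions,
  naming best practices, or a better name for a code element.
---
```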
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (naming) and lists the types of things it names (variable, function, class names), but doesn't describe concrete actions beyond 'suggest'; missing details like how it analyzes context, what conventions it follows, or what output format it provides. | 2 / 3 |
| Completeness | Describes what it does (suggest better names) but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps this at 2, and the 'what' is also weak, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'variable', 'function', 'class names' that users might mention, but misses common variations like 'rename', 'naming conventions', 'identifier', 'method names', or phrases like 'what should I call this'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Focuses on naming specifically, which provides some distinction, but could overlap with general code review skills, refactoring skills, or code quality skills that might also suggest naming improvements. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable guidance with excellent concrete examples showing good vs. bad naming patterns across multiple languages. However, it is overly verbose and monolithic: the full report template, decision tree, and extensive language-specific conventions would be better served as separate reference files. The workflow is clear but lacks validation steps for verifying naming suggestions.
Suggestions
- Split content into separate files: move language-specific conventions to CONVENTIONS.md, report format to REPORT_FORMAT.md, and keep SKILL.md as a concise overview with navigation links
- Add a validation step in the workflow, such as 'Verify suggested names don't conflict with existing identifiers in scope'
- Condense the report format template; Claude can generate appropriate report structures without a 100+ line template
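The scope-conflict validation step suggested above could be sketched as follows. `names_in_scope` and `is_safe_rename` are hypothetical helpers, not part of the skill; the sketch uses only Python's standard `ast` module and covers module-level scope, not nested or imported names:

```python
import ast


def names_in_scope(source: str) -> set[str]:
    """Collect identifiers already bound or referenced in a Python module."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):  # variable reads and writes
            names.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)        # function and class definitions
        elif isinstance(node, ast.arg):  # function parameters
            names.add(node.arg)
    return names


def is_safe_rename(source: str, suggested: str) -> bool:
    """A suggested name passes the check only if it is not already in use."""
    return suggested not in names_in_scope(source)
```

For `source = "def f(x): return x + 1"`, `is_safe_rename(source, "x")` is False (the name is taken), while `is_safe_rename(source, "total")` is True.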
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately verbose, with extensive examples and report templates that could be condensed. While the examples are useful, the full report format template and decision tree add bulk that Claude could generate contextually. | 2 / 3 |
| Actionability | Provides concrete, executable code examples with clear before/after comparisons. The naming conventions by language are specific and actionable, and the usage examples show exact invocation patterns. | 3 / 3 |
| Workflow Clarity | The numbered instructions provide a clear sequence (analyze → identify → check → suggest) but lack explicit validation checkpoints. For a naming analysis task, there's no feedback loop for verifying that suggested names work in context. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The extensive report format, naming conventions by language, and examples could be split into separate reference files with clear navigation from a concise overview. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed, with no warnings or errors.