TypeScript, React, and JavaScript best practices enforced by Ultracite/Biome.
Does it follow best practices? Score: 67 (51%)
Impact: 92% — 1.08x average score across 3 eval scenarios
Passed
No known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./frontend/.claude/skills/code-standards/SKILL.md

Quality
Discovery
22% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too terse and vague to effectively guide skill selection. It names the technology stack and tooling but fails to describe concrete actions the skill performs or specify when it should be triggered. Without a 'Use when...' clause and specific capabilities, Claude would struggle to reliably select this skill from a larger pool.
Suggestions

- Add specific concrete actions the skill performs, e.g., 'Lints and formats TypeScript/React/JavaScript code, fixes import ordering, enforces naming conventions, and applies Biome rules.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about code formatting, linting, code style, Biome configuration, or fixing lint errors in TypeScript/React/JS projects.'
- Include common user-facing keywords like 'linting', 'formatting', 'code style', 'lint errors', 'Biome config', '.ts', '.tsx', '.js' to improve trigger term coverage.
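Pulling these suggestions together, a revised frontmatter description for the skill might look like the following. The wording is illustrative only, not the maintainer's actual text:

```yaml
# Hypothetical revised SKILL.md frontmatter
name: code-standards
description: >-
  Lints and formats TypeScript/React/JavaScript code with Ultracite/Biome:
  fixes import ordering, enforces naming conventions, and applies Biome
  rules. Use when the user asks about code formatting, linting, code style,
  Biome configuration, or fixing lint errors in .ts, .tsx, or .js files.
```

This version names concrete actions, includes a 'Use when...' clause, and covers the natural trigger terms a user would type.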
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description mentions a domain (TypeScript, React, JavaScript) and a tool (Ultracite/Biome) but does not describe any concrete actions. 'Best practices enforced' is vague — it doesn't specify what actions the skill performs (e.g., lint code, format files, fix imports). | 1 / 3 |
| Completeness | The description partially addresses 'what' (enforcing best practices) but is vague about it, and completely lacks a 'when' clause or any explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the weak 'what' brings it to 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'TypeScript', 'React', 'JavaScript', 'Ultracite', and 'Biome' that users might mention. However, it misses common variations like 'linting', 'formatting', 'code style', 'ESLint', or 'code quality' that users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Mentioning Ultracite/Biome specifically adds some distinctiveness, but 'TypeScript, React, and JavaScript best practices' is broad enough to overlap with general coding style skills, linting skills, or framework-specific skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation
79% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise skill that provides actionable commands and clear rules for code standards enforcement. Its main weaknesses are the lack of an explicit fix-validate-retry workflow and somewhat vague progressive disclosure to the rules/ directory without enumerating available rule files. Overall it's a solid, efficient skill that respects token budget.
Suggestions

- Add an explicit workflow sequence, e.g., '1. Run `bun x ultracite fix` 2. Run `bun x ultracite check` to verify 3. If errors remain, consult the relevant rule file from rules/ 4. Re-run check until clean'
- Enumerate the available rule files in the rules/ directory (beyond the three in the quick reference table) so Claude knows what guidance is available
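The fix-validate-retry sequence in the first suggestion can be sketched as a shell loop. In this sketch, `run_fix` and `run_check` are stub functions standing in for `bun x ultracite fix` and `bun x ultracite check`, so the control flow is runnable on its own; the retry bound of 3 is an arbitrary choice:

```shell
# Stubs simulating the real commands: each fix pass clears one error,
# and check succeeds only when no errors remain.
errors=2
run_fix() { errors=$((errors - 1)); }    # stand-in for: bun x ultracite fix
run_check() { [ "$errors" -eq 0 ]; }     # stand-in for: bun x ultracite check

# Fix-validate-retry loop: keep fixing until check passes or attempts run out.
attempt=0
until run_check || [ "$attempt" -ge 3 ]; do
  run_fix
  attempt=$((attempt + 1))
done
echo "attempts=$attempt errors=$errors"  # prints: attempts=2 errors=0
```

In a real skill workflow, the step between `run_fix` and the retry would be 'consult the relevant rule file from rules/' before re-running the check.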
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. It assumes Claude's competence, avoids explaining what linting or formatting is, and every section earns its place with actionable tables, commands, and rules. | 3 / 3 |
| Actionability | Provides concrete, copy-paste ready bash commands for fixing, checking, and diagnosing issues. The console logging section gives specific, actionable rules with clear do/don't guidance. The quick reference table points to specific rule files. | 3 / 3 |
| Workflow Clarity | The workflow is straightforward (run fix, then check), but there's no explicit sequence connecting the steps — e.g., no 'run fix, then verify with check, if errors persist consult rules/' feedback loop. For a skill involving code fixes, a validate-then-retry pattern would strengthen this. | 2 / 3 |
| Progressive Disclosure | References to the `rules/` directory and specific rule files (e.g., `react-functional-only.md`) are present, but no bundle files were provided to verify these exist. The references are one level deep and clearly signaled in the table, which is good, but the `rules/` directory reference at the end is vague ('See rules/ directory for detailed guidance') without listing what's available. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Validation
100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.