Wrap high-verbosity shell commands with RTK to reduce token consumption. Use when running git log, git diff, cargo test, pytest, or other verbose CLI output that wastes context window tokens.
Score: 79 · Quality: 75% (Does it follow best practices?)

Impact
Evals: Pending (no eval scenarios have been run)
Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./examples/skills/rtk-optimizer/SKILL.md

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, well-crafted description that clearly communicates what the skill does (wraps verbose shell commands with RTK to save tokens), when to use it (specific CLI commands and verbose output scenarios), and uses natural trigger terms developers would actually say. It is concise, specific, and occupies a clear niche with minimal conflict risk.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists a concrete action ('Wrap high-verbosity shell commands with RTK') and names specific commands (git log, git diff, cargo test, pytest). Multiple concrete actions and tools are referenced. | 3 / 3 |
| Completeness | Clearly answers 'what' (wrap high-verbosity shell commands with RTK to reduce token consumption) and 'when' (explicitly states 'Use when running git log, git diff, cargo test, pytest, or other verbose CLI output'). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would encounter: 'git log', 'git diff', 'cargo test', 'pytest', 'verbose CLI output', 'context window tokens', 'shell commands'. These are terms developers naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Targets a very specific niche: wrapping verbose shell commands with RTK for token reduction. The combination of RTK, token consumption, and specific CLI commands makes it highly distinctive and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a useful reference table mapping verbose commands to RTK equivalents with concrete reduction percentages, which is its core strength. However, it suffers from meta-descriptions of Claude's own behavior (activation examples, usage pattern), redundancy between the supported commands and metrics sections, and some unnecessary framing. Trimming the conversational scaffolding and consolidating the data would significantly improve it.
Suggestions
- Remove the 'Activation Examples' and 'How It Works' sections — Claude doesn't need instructions on how to detect user intent or a description of its own behavior pattern. Replace them with a direct instruction like 'Prefix these commands with `rtk` to reduce token usage.'
- Consolidate the 'Supported Commands' and 'Metrics' sections into a single reference table to eliminate redundancy, e.g., a table with columns: Original Command | RTK Command | Reduction %
- Add a validation step: after running an RTK-wrapped command, verify the output contains the needed information (e.g., 'If RTK output seems truncated or missing critical details, re-run without the RTK wrapper').
- Move the Configuration and References sections to a separate file and link to them from the main skill, keeping SKILL.md focused on the command mapping and decision rules.
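The installation check and fallback step suggested above can be sketched as a small shell helper. This is a hedged sketch: the prefix-style invocation (`rtk git log`) is assumed from the skill's command mapping, not taken from RTK's documented CLI, and `run_with_rtk` is a hypothetical helper name.

```shell
#!/bin/sh
# run_with_rtk: prefix a verbose command with rtk when available,
# falling back to the raw command if rtk is missing or exits non-zero.
# (Assumes rtk accepts the wrapped command as arguments, e.g. `rtk git log`.)
run_with_rtk() {
  if command -v rtk >/dev/null 2>&1 && rtk "$@"; then
    return 0
  fi
  "$@"
}

# Example: on a machine without rtk this runs the plain command unchanged.
run_with_rtk echo "token-heavy output here"
```

Because the fallback re-runs the unwrapped command, the helper degrades gracefully: agents get token savings when RTK is installed and unchanged behavior when it is not.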
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary content like the 'Activation Examples' section (Claude doesn't need to be shown how to detect user intent), the 'How It Works' meta-description, and the 'Limitations' section mixing GitHub stats with actual limitations. The metrics section partially duplicates the supported commands section. However, the core mapping tables are efficient. | 2 / 3 |
| Actionability | The command mappings are concrete and useful, and the installation check is executable. However, the 'Usage Pattern' section is a meta-description of behavior rather than executable guidance, and the activation examples describe a conversational flow rather than providing actionable instructions. The actual RTK commands are clear but wrapped in unnecessary framing. | 2 / 3 |
| Workflow Clarity | The skill presents a simple workflow (detect → suggest → execute → track) but lacks validation checkpoints. There's no guidance on what to do if RTK produces unexpected output, truncates important information, or fails. The edge cases section helps but is thin. For a tool that transforms command output, missing verification of output completeness is a gap. | 2 / 3 |
| Progressive Disclosure | References to external files (docs/resource-evaluations/rtk-evaluation.md, examples/claude-md/rtk-optimized.md) are present, but the main file itself is too long, with inline content that could be split out (e.g., the full metrics table, configuration templates). The structure uses headers well, but the content is monolithic rather than layered. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |