# Ask Claude, Codex, or Gemini via local CLI and capture a reusable artifact
- Eval scenarios: Pending (no eval scenarios have been run).
- Issues: Passed (no known issues).
To optimize this skill with Tessl:

```
npx tessl skill review --optimize ./skills/ask/SKILL.md
```

## Quality
### Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description provides a basic understanding of the skill's purpose but is too terse and lacks critical guidance for skill selection. It names specific AI models, which helps with distinctiveness, but the absence of explicit trigger conditions and the limited action specificity significantly weaken its utility for Claude's skill selection process.
#### Suggestions

- Add a "Use when..." clause specifying trigger scenarios, e.g. "Use when the user wants to query Claude, Codex, or Gemini from the command line, compare model responses, or save AI outputs as reusable files."
- Expand the action list to be more specific about what "capture a reusable artifact" means, e.g. "save responses to files, compare outputs across models, batch query multiple LLMs".
- Include additional natural trigger terms users might say: "LLM", "command line", "terminal", "API", "model comparison", "AI query".
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (CLI interaction with AI models) and mentions two actions ("Ask" and "capture a reusable artifact"), but lacks comprehensive detail about what specific operations are supported or what artifacts are produced. | 2 / 3 |
| Completeness | Describes what it does at a basic level but completely lacks a "Use when..." clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes specific model names (Claude, Codex, Gemini) and "CLI", which users might say, but misses common variations like "LLM", "AI assistant", "command line", "terminal", or "query". | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific model names (Claude, Codex, Gemini) and "local CLI" provide some distinctiveness, but "artifact" is vague and could overlap with other skills that generate or capture outputs. | 2 / 3 |
| **Total** | | **7 / 12 (Passed)** |
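The Discovery suggestions above could be folded into the skill description along these lines. This is a sketch only; the frontmatter field names and exact wording are assumptions for illustration, not taken from the skill under review:

```yaml
# Hypothetical SKILL.md frontmatter; field names are assumed, not confirmed.
name: ask
description: >
  Ask Claude, Codex, or Gemini via a local CLI and save each response as a
  reusable artifact file. Use when the user wants to query an LLM from the
  command line or terminal, compare responses across models, or save AI
  outputs as files. Trigger terms: LLM, AI query, model comparison, CLI.
```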
### Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted, concise skill that efficiently documents CLI routing functionality. It excels at actionability, with concrete examples and excellent token efficiency. The only weakness is the implicit workflow: the relationship between verifying CLI availability and running commands could be more explicitly sequenced.
#### Suggestions

- Consider restructuring Requirements and Usage into a numbered workflow: 1) verify CLI availability, 2) run the ask command, 3) find the output in the artifacts path.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, providing only essential information without explaining concepts Claude already knows. Every section serves a clear purpose with no padding or unnecessary context. | 3 / 3 |
| Actionability | Provides concrete, copy-paste-ready commands for usage, routing, and verification. The examples are specific and executable, showing real-world use cases with actual command syntax. | 3 / 3 |
| Workflow Clarity | The workflow is implicit rather than explicit: it shows usage but doesn't clearly sequence the steps (verify CLI availability → run command → check artifact). For a skill involving external CLI tools, explicit verification-before-execution guidance would strengthen this. | 2 / 3 |
| Progressive Disclosure | For a simple, single-purpose skill under 50 lines, the content is well organized with clear sections (Usage, Routing, Requirements, Artifacts). No external references are needed given the scope. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
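The suggested verify-then-run sequence could be sketched in shell. The CLI binary names below come from the models the skill mentions, but the exact `ask` invocation and artifacts path are assumptions, not the skill's documented syntax:

```shell
#!/bin/sh
# 1. Verify CLI availability before running anything.
#    (Binary names "claude", "codex", "gemini" are assumptions.)
for cli in claude codex gemini; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli: available"
  else
    echo "$cli: not installed"
  fi
done

# 2. Run the ask command (hypothetical invocation, shown as a comment only):
#    ask --model claude "summarize this repo"
# 3. Find the output in the artifacts path documented by the skill.
```

Checking availability first means a missing CLI produces a clear message instead of a confusing mid-run failure.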
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 passed. No warnings or errors.