Gemini CLI for one-shot Q&A, summaries, and generation.
58% — Does it follow best practices?
Impact: 99%
1.32x — average score across 3 eval scenarios
Passed — No known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./openclaw/skills/gemini/SKILL.md`

Quality

Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too terse and lacks explicit trigger guidance ('Use when...'). While 'Gemini CLI' provides some distinctiveness, the listed capabilities (Q&A, summaries, generation) are vague and broadly applicable, making it hard for Claude to confidently select this skill over others in a large skill set.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to run Gemini CLI, invoke Gemini from the command line, or needs a one-shot LLM query via Gemini.'
- Make capabilities more concrete: specify what kinds of Q&A, summaries, or generation (e.g., 'Runs Gemini CLI to answer questions, summarize files or text, and generate content in a single non-interactive invocation').
- Include natural trigger terms users might say, such as 'gemini command', 'ask gemini', 'gemini prompt', or 'gemini one-shot'.
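Putting these suggestions together, a revised description might read as follows. This is an illustrative sketch only; check field names against the skill spec before adopting it:

```yaml
---
name: gemini
description: >-
  Runs the Gemini CLI for one-shot, non-interactive queries: answer a
  question, summarize a file or pasted text, or generate content in a
  single invocation. Use when the user asks to run Gemini CLI, says
  "ask gemini" or "gemini prompt", or needs a quick one-shot LLM query
  from the command line.
---
```

A description in this shape names the tool, states concrete capabilities, includes a 'Use when...' clause, and carries the natural trigger phrases users actually type.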
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the tool (Gemini CLI) and some actions (Q&A, summaries, generation), but these actions are broad and not very concrete: 'generation' is vague, and there's no detail about what kind of content or how. | 2 / 3 |
| Completeness | It partially answers 'what' (one-shot Q&A, summaries, generation) but completely lacks a 'Use when...' clause or any explicit trigger guidance, which per the rubric caps completeness at 2, and the 'what' is also weak enough to warrant a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Gemini CLI' which is a useful trigger term, and 'Q&A', 'summaries', 'generation' are somewhat relevant keywords. However, it misses natural user phrases like 'ask Gemini', 'gemini query', 'one-shot prompt', or 'LLM CLI'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Gemini CLI' is a fairly distinct trigger that narrows the domain, but 'Q&A, summaries, and generation' are extremely broad and could overlap with many other skills that handle text generation or summarization. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 85%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a concise, well-structured skill for a simple CLI tool. It excels at brevity and organization, providing just enough to get started. The main weakness is incomplete actionability—it could benefit from specifying available model names and a slightly richer example showing piped input or common use patterns.
Suggestions
- Add 1-2 common model names (e.g., `gemini-2.5-pro`, `gemini-2.5-flash`) so Claude knows valid values for `--model`.
- Include an example of piping input (e.g., `cat file.txt | gemini "Summarize this"`) since one-shot Q&A often involves file content.
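The two suggestions above could land in the skill as a short example block like this. The model names are the ones suggested above; verify both the names and the `--model` flag against the installed `gemini` version before committing:

```shell
# One-shot question with an explicit model
gemini --model gemini-2.5-flash "What does the sticky bit do on a directory?"

# Summarize piped file content in a single non-interactive invocation
cat notes.txt | gemini "Summarize this in three bullet points"
```

Showing the piped form matters because it is the pattern agents reach for when the content to summarize lives in a file rather than in the prompt itself.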
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Very lean and efficient. No unnecessary explanations of what Gemini is or how CLIs work. Every line provides actionable information Claude wouldn't inherently know. | 3 / 3 |
| Actionability | Provides concrete commands that are copy-paste ready, but lacks key details like available model names, what output formats are supported, and what extension commands exist. The guidance is real but incomplete. | 2 / 3 |
| Workflow Clarity | This is a simple, single-purpose skill (one-shot CLI usage). The single action is unambiguous, and the auth note provides a clear recovery path. No multi-step destructive operations require validation checkpoints. | 3 / 3 |
| Progressive Disclosure | For a skill under 50 lines with no need for external references, the content is well-organized into logical sections (quick start, extensions, notes) that are easy to scan. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation — 72%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 8 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 Passed |
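A frontmatter shape that would clear all three warnings might look like the sketch below. The `metadata` key names other than `version` are placeholders; check the skill spec for the exact schema:

```yaml
---
name: gemini
description: Gemini CLI for one-shot Q&A, summaries, and generation.
metadata:
  version: "1.0.0"   # supplies the missing metadata.version
  author: "openclaw" # every metadata value is a string, per the second warning
---
```

Any custom top-level frontmatter keys flagged by `frontmatter_unknown_keys` would move under `metadata` in the same way, as string values.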
09cce3e
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.