Z.AI CLI providing:
- Vision: image/video analysis, OCR, UI-to-code, error diagnosis (GLM-4.6V)
- Search: real-time web search with domain/recency filtering
- Reader: web page to markdown extraction
- Repo: GitHub code search and reading via ZRead
- Tools: MCP tool discovery and raw calls
- Code: TypeScript tool chaining

Use for visual content analysis, web search, page reading, or GitHub exploration. Requires Z_AI_API_KEY.
Overall score — 84%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`
Discovery — 77%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is well-structured with strong specificity and completeness, clearly listing capabilities and when to use them. However, the broad scope covering multiple domains (vision, search, reading, code) creates potential overlap with more specialized skills, and some trigger terms are too technical for natural user queries.
Suggestions
- Replace technical jargon (GLM-4.6V, MCP, ZRead) with user-friendly terms, or add natural-language alternatives users would actually say
- Consider narrowing the 'Use for' clause to emphasize the unique value proposition that distinguishes this skill from standalone vision, search, or GitHub skills
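Applying both suggestions, a revised description might read along these lines. This is an illustrative sketch only: the `name` value and the exact wording are assumptions, not the skill's published metadata.

```yaml
# Illustrative rewrite only — the name field and wording are assumptions,
# not the skill's actual metadata.
name: zai-cli
description: >
  Z.AI CLI for analyzing images, screenshots, and video; searching the
  web in real time; reading web pages as markdown; and exploring GitHub
  repositories. Use when asked to analyze a screenshot, search the web,
  read a webpage, or look up code on GitHub. Requires Z_AI_API_KEY.
```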
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions across several domains: image/video analysis, OCR, UI-to-code, error diagnosis, web search with filtering, markdown extraction, GitHub code search, MCP tool discovery, and TypeScript tool chaining. | 3 / 3 |
| Completeness | Clearly answers 'what' with a detailed capability list and explicitly answers 'when' with 'Use for visual content analysis, web search, page reading, or GitHub exploration.' Also notes the API key requirement. | 3 / 3 |
| Trigger Term Quality | Includes some natural terms like 'image', 'video', 'OCR', 'web search', 'GitHub', but uses technical jargon (GLM-4.6V, MCP, ZRead) and misses common user phrases like 'read webpage', 'analyze screenshot', or 'search the web'. | 2 / 3 |
| Distinctiveness / Conflict Risk | While the Z.AI branding and specific tool names (GLM-4.6V, ZRead) create some distinctiveness, the broad scope covering vision, search, reading, and code could overlap with dedicated image analysis, web search, or GitHub skills. | 2 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 87%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted CLI reference skill that maximizes information density while remaining highly actionable. The table format for commands and concrete examples make it immediately useful. Minor weakness is the lack of troubleshooting guidance or validation steps for setup issues.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely lean and efficient. No unnecessary explanations of what CLI tools are or how npx works. Every line provides actionable information, and the table format maximizes information density. | 3 / 3 |
| Actionability | Provides fully executable, copy-paste-ready commands covering all major use cases. Examples are concrete with realistic arguments (file paths, URLs, search queries) and demonstrate actual usage patterns. | 3 / 3 |
| Workflow Clarity | This is primarily a reference skill rather than a multi-step workflow, but the setup -> commands -> quick start flow is logical. However, there is no validation/verification guidance (e.g., what to do if doctor fails, or how to verify the API key is working). | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview, a table for quick reference, examples for common cases, and an explicit pointer to advanced.md for complex features. The one-level-deep reference is well signaled. | 3 / 3 |
| Total | | 11 / 12 Passed |
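The verification gap noted under Workflow Clarity could be closed with a small pre-flight check before invoking the CLI. The sketch below is an assumption, not part of the skill: only the `Z_AI_API_KEY` variable name comes from the skill description; the function name and messages are illustrative.

```shell
# Pre-flight check sketch (assumption, not part of the skill itself).
# Only the Z_AI_API_KEY variable name comes from the skill description.
check_zai_key() {
  if [ -z "${Z_AI_API_KEY:-}" ]; then
    echo "Z_AI_API_KEY is not set" >&2
    return 1
  fi
  echo "Z_AI_API_KEY is set"
}
```

Calling `check_zai_key` before any other command gives the agent a concrete success/failure signal for the setup step, rather than discovering a missing key mid-task.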
Validation — 75%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 12 / 16 Passed |
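The metadata_version and license_field warnings could be resolved with frontmatter additions along these lines. The values shown are placeholders, not the skill's real metadata; the actual license and version must come from the skill's maintainer.

```yaml
# Sketch of frontmatter additions — values are placeholders, not the
# skill's real metadata.
license: MIT        # addresses license_field (use the skill's actual license)
metadata:           # a dictionary, addressing metadata_version
  version: "1.0.0"
```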
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.