Process images using object detection, classification, and segmentation. Use when requesting "analyze image", "object detection", "image classification", or "computer vision". Trigger with relevant phrases based on skill purpose.
**Evals:** Pending (no eval scenarios have been run)
**Validation:** Passed
**Issues:** No known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/ai-ml/computer-vision-processor/skills/processing-computer-vision-tasks/SKILL.md`

## Quality
### Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers the basics adequately with an explicit 'Use when' clause and relevant domain keywords, but suffers from moderate vagueness in its capability listing and trigger terms. The final sentence ('Trigger with relevant phrases based on skill purpose') is meaningless filler that adds no value. The description would benefit from more specific actions and broader natural language trigger coverage.
**Suggestions**

- Replace the vague final sentence 'Trigger with relevant phrases based on skill purpose' with additional concrete trigger terms users might naturally say, such as 'identify objects in photo', 'what's in this image', or 'label this picture', or mention supported file types like '.jpg' and '.png'.
- Make the capability list more specific by describing concrete outputs, e.g., 'Draw bounding boxes around detected objects, classify image contents, generate pixel-level segmentation masks' instead of just naming the technique categories.
- Add more natural user phrasing variations to improve trigger term coverage, such as 'detect objects in a photo', 'what objects are in this image', 'segment this picture', or 'identify items in image'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (image processing) and lists some actions (object detection, classification, segmentation), but these are fairly high-level categories rather than deeply specific concrete actions like 'draw bounding boxes', 'label objects', or 'output segmentation masks'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (process images using object detection, classification, and segmentation) and 'when' (explicit 'Use when' clause with trigger phrases). The final sentence 'Trigger with relevant phrases based on skill purpose' is vague filler, but the core triggers are present. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'analyze image', 'object detection', 'image classification', and 'computer vision', but misses common natural variations users might say, such as 'detect objects in photo', 'identify what's in this picture', 'segment image', 'label image', or file extensions like '.jpg' and '.png'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is somewhat specific to computer vision tasks, but 'analyze image' is broad enough to overlap with other image-related skills (e.g., image editing, OCR, image generation). The term 'process images' is also quite generic and could conflict with other image-handling skills. | 2 / 3 |
| **Total** | | **9 / 12 Passed** |
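Incorporating these suggestions, a revised frontmatter description might read as follows. The wording is illustrative only; the concrete trigger phrases and file types shown are examples suggested by this review, not part of the skill today.

```yaml
# Hypothetical revised SKILL.md frontmatter (illustrative wording)
description: >
  Process images with object detection (bounding boxes and labels),
  image classification, and pixel-level segmentation masks. Use when
  the user asks to "analyze image", "detect objects in a photo",
  "what's in this image", "classify this picture", "segment this
  image", or provides a .jpg/.png file for computer vision analysis.
```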
### Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic boilerplate with no actionable, concrete, or executable content. It describes what a computer vision skill would do in abstract terms without providing any actual code, commands, parameters, model names, input/output formats, or specific guidance. Every section reads like a template placeholder rather than a real skill implementation.
**Suggestions**

- Replace the abstract examples with actual executable Python code showing how to call the /process-vision command with specific parameters, model selection, and input/output handling.
- Remove generic sections like 'Best Practices', 'Prerequisites', 'Instructions', and 'Output' that contain only placeholder text Claude already knows, and replace them with concrete configuration details (supported formats, model backends, parameter options).
- Add a concrete workflow with validation steps, e.g., validate image format → select model → call /process-vision with specific args → parse structured output → handle specific error codes.
- Provide at least one complete input/output example showing the actual JSON or structured data returned by the /process-vision command, including bounding box format, confidence scores, and segmentation masks.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive padding. Sections like 'Overview', 'How It Works', 'When to Use This Skill', 'Best Practices', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are filled with vague, generic content that Claude already knows. Phrases like 'This skill empowers Claude to leverage' and 'Implement robust error handling' add no actionable value. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific API calls, no actual parameters or configuration examples. The examples describe what the skill 'will do' in abstract terms rather than showing how. The '/process-vision' command is mentioned but never demonstrated with actual syntax, arguments, or expected output format. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists three abstract steps ('Analyzing the Request', 'Generating Code', 'Executing the Task') with no concrete details. There are no validation checkpoints, no error recovery loops, and no specific sequencing. The 'Instructions' section is entirely generic ('Invoke this skill when the trigger conditions are met'). | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of vague text with no references to external files, no bundle structure, and no meaningful organization. Sections like 'Resources' just say 'Project documentation' and 'Related skills and commands' without any actual links or file paths. Content is poorly organized with redundant sections (e.g., 'When to Use' overlaps with 'Overview'). | 1 / 3 |
| **Total** | | **4 / 12 Passed** |
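The actionability and workflow suggestions above can be sketched in Python. Everything here is an assumption: the skill documents neither its supported formats, nor the /process-vision interface, nor an output schema, so the function names, JSON shape, and threshold below are hypothetical illustrations of the validate → call → parse workflow, not the skill's real behavior.

```python
import json
from pathlib import Path

# Hypothetical: the skill does not document which formats it accepts.
SUPPORTED_FORMATS = {".jpg", ".jpeg", ".png"}

def validate_image(path: str) -> Path:
    """Step 1: reject unsupported inputs before invoking the vision command."""
    p = Path(path)
    if p.suffix.lower() not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported image format: {p.suffix or '(none)'}")
    return p

def parse_detections(raw: str, min_confidence: float = 0.5) -> list:
    """Step 3: parse the command's (assumed) JSON output, keeping only
    detections at or above a confidence threshold."""
    result = json.loads(raw)
    return [d for d in result["detections"] if d["confidence"] >= min_confidence]

# A plausible (invented) output shape for an object-detection run,
# with [x1, y1, x2, y2] pixel bounding boxes:
sample_output = """{
  "task": "object_detection",
  "detections": [
    {"label": "dog",  "confidence": 0.92, "bbox": [34, 50, 210, 180]},
    {"label": "ball", "confidence": 0.41, "bbox": [220, 160, 260, 200]}
  ]
}"""

validate_image("photo.jpg")
print([d["label"] for d in parse_detections(sample_output)])  # the low-confidence "ball" is dropped
```

Step 2 (actually invoking /process-vision with the validated path) is omitted because the command's syntax is undocumented; a real skill would define it and its error codes explicitly.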
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation for skill structure: 9 / 11 Passed**
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 Passed** |
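Both warnings point at the frontmatter. A fix might look like the sketch below; the key and tool names shown are illustrative assumptions, since the report does not list which keys or tools were flagged. Only keys the skill spec defines should stay at the top level, with anything else moved under a metadata block.

```yaml
---
name: processing-computer-vision-tasks
description: Process images using object detection, classification, and segmentation. ...
# Keep only tool names the agent runtime actually recognizes
# (example values) to clear allowed_tools_field:
allowed-tools: [Bash, Read, Write]
# Move unrecognized top-level keys here to clear
# frontmatter_unknown_keys (example values):
metadata:
  author: example
  version: "1.0"
---
```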