Capture and automate macOS UI with the Peekaboo CLI.
Quality
43% Does it follow best practices?

Impact
77% 1.92x average score across 3 eval scenarios

Risky: Do not use without reviewing

Optimize this skill with Tessl:

npx tessl skill review --optimize ./openclaw/skills/peekaboo/SKILL.md
Discovery
22% Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too terse and vague: it neither enumerates the specific capabilities of the Peekaboo CLI nor provides explicit trigger conditions for when Claude should select this skill. While the tool name 'Peekaboo' and 'macOS UI' provide some distinctiveness, the lack of concrete actions and a 'Use when...' clause significantly weakens its utility for skill selection.
Suggestions
Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user asks to take screenshots, inspect UI elements, or automate macOS desktop interactions.'
List specific concrete actions the skill supports, such as 'take screenshots, list UI elements, click buttons, read accessibility tree, automate mouse/keyboard input.'
Include natural trigger terms users might say, like 'screenshot', 'screen capture', 'UI automation', 'accessibility', 'desktop automation', or 'GUI interaction'.
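Putting the three suggestions together, the frontmatter description could be rewritten along these lines (a sketch only; the wording and any field beyond `description` should be checked against the skill spec):

```yaml
description: >-
  Capture screenshots, inspect UI elements, and automate macOS desktop
  interactions with the Peekaboo CLI. Use when the user asks to take a
  screenshot or screen capture, read the accessibility tree, click buttons,
  type text, or automate mouse/keyboard input on macOS.
```

This version names concrete actions, adds a 'Use when...' clause, and includes natural trigger terms like 'screenshot' and 'accessibility'.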
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description mentions 'capture and automate macOS UI' which is vague. It doesn't list specific concrete actions like 'take screenshots', 'click buttons', 'read screen elements', or 'automate workflows'. 'Capture' and 'automate' are broad verbs without detail. | 1 / 3 |
| Completeness | The description partially answers 'what' (capture and automate macOS UI) but is vague, and completely lacks a 'when' clause. There is no 'Use when...' or equivalent trigger guidance, which per the rubric should cap completeness at 2, but the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'macOS UI', 'Peekaboo CLI', and 'capture'/'automate', but misses natural user terms like 'screenshot', 'screen capture', 'UI automation', 'click', 'accessibility', or 'GUI testing'. 'Peekaboo' is a tool name users might mention but is niche. | 2 / 3 |
| Distinctiveness Conflict Risk | The mention of 'Peekaboo CLI' and 'macOS UI' provides some distinctiveness, but 'capture and automate' could overlap with other automation or screenshot tools. The niche is somewhat defined by the tool name but not strongly delineated by use cases. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation
64% Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid reference skill with excellent actionability — the examples are concrete, varied, and immediately usable. Its main weaknesses are the inlined command catalog that inflates token usage and the lack of validation/error-recovery steps in workflows. Restructuring the feature list as a separate reference and adding checkpoint guidance would significantly improve it.
Suggestions
Move the exhaustive feature/command catalog to a separate REFERENCE.md and link to it, keeping only the most essential commands inline to improve conciseness.
Add validation checkpoints to multi-step workflows, e.g., checking that `peekaboo see` found the expected element before clicking, or verifying permissions succeeded before proceeding.
Include brief error-recovery guidance for common failure modes (element not found, permissions denied, wrong window targeted).
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly efficient and avoids explaining basic concepts, but the exhaustive feature listing (Core/Interaction/System/Vision sections) is essentially a command catalog that could be referenced separately. Some of this bulk duplicates what `peekaboo --help` already provides. | 2 / 3 |
| Actionability | The skill provides numerous concrete, copy-paste-ready bash commands covering the most common workflows. Examples are specific, executable, and cover a wide range of use cases from see->click->type flows to window management and keyboard input. | 3 / 3 |
| Workflow Clarity | The 'See -> click -> type' example demonstrates a clear multi-step sequence, and the quickstart provides a logical progression. However, there are no explicit validation checkpoints or error recovery steps — e.g., what to do if permissions fail, if an element ID isn't found, or if a click doesn't land correctly. | 2 / 3 |
| Progressive Disclosure | The content is reasonably structured with clear sections (Features, Quickstart, Parameters, Examples), but the full command catalog is inlined rather than referenced from a separate file. The feature list alone takes significant space that could be offloaded to a reference document. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation
72% Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 8 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 8 / 11 Passed | |
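A frontmatter shape that would address all three warnings might look like the following sketch (the `name` and version values are placeholders, and the canonical key set should be confirmed against the skill spec):

```yaml
---
name: peekaboo
description: Capture and automate macOS UI with the Peekaboo CLI.
metadata:
  version: "1.0.0"   # present, and a string value: clears metadata_version and metadata_field
---
```

Any unknown top-level frontmatter keys flagged by `frontmatter_unknown_keys` would move under `metadata` as string-to-string entries, or be removed.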