
click-target

Find and click a target object in XR. Use when testing UI interactions, clicking buttons, or verifying interactable elements work correctly.


Quality: 72% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/click-target/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is structurally sound with a clear 'what' and 'when' clause, making it functionally complete. However, it lacks specificity in the concrete actions it performs and could benefit from more XR-specific trigger terms to reduce overlap with general UI testing skills. The description is concise and uses proper third-person voice.

Suggestions

Add more XR-specific trigger terms such as 'VR', 'AR', 'mixed reality', 'raycast', 'spatial UI' to improve discoverability and reduce conflict with general UI testing skills.

List more specific concrete actions beyond 'find and click' — e.g., 'performs raycasts to locate objects, simulates pointer clicks, validates hit targets' to increase specificity.
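As an illustration, both suggestions could be folded into the skill's frontmatter description along these lines. This is a sketch only: the `name` and `description` keys are the standard SKILL.md frontmatter fields, and the exact wording is illustrative rather than prescribed by the review.

```yaml
# Hypothetical revision of the SKILL.md frontmatter, folding in the
# suggested XR trigger terms and concrete actions.
name: click-target
description: >-
  Find and click a target object in XR (VR, AR, mixed reality): performs
  raycasts to locate objects, simulates pointer clicks on spatial UI, and
  validates hit targets. Use when testing UI interactions, clicking or
  tapping buttons, or verifying interactable elements work correctly.
```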

Specificity (2 / 3): Names the domain (XR) and a primary action ('find and click a target object'), but doesn't list multiple specific concrete actions. It mentions 'testing UI interactions, clicking buttons, verifying interactable elements', but these are more trigger contexts than distinct capabilities.

Completeness (3 / 3): Clearly answers both 'what' ('Find and click a target object in XR') and 'when' ('Use when testing UI interactions, clicking buttons, or verifying interactable elements work correctly') with an explicit 'Use when...' clause.

Trigger Term Quality (2 / 3): Includes some relevant keywords like 'XR', 'UI interactions', 'clicking buttons', and 'interactable elements', but misses common variations users might say, such as 'VR', 'AR', 'mixed reality', 'tap', 'select', 'press button', 'raycast', or specific XR framework terms.

Distinctiveness / Conflict Risk (2 / 3): The XR context provides some distinctiveness, but 'testing UI interactions' and 'clicking buttons' are quite broad and could overlap with general UI testing or web automation skills. The XR qualifier helps but isn't strongly reinforced with specific XR terminology.

Total: 9 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill that provides a clear multi-step workflow with specific tool calls, parameters, and validation checkpoints. Its main strengths are the concrete numeric values for adjustments and the explicit feedback loops for error recovery. Minor weaknesses include some slightly verbose explanations and the lack of external references for advanced scenarios, though for a skill of this size the single-file approach is reasonable.

Suggestions

Trim redundant explanatory comments like 'This orients the headset to face the target' since the tool name xr_look_at already conveys this.

Consider consolidating the Tips section into the relevant workflow steps to reduce duplication (e.g., the PanelUI button offset tip is mentioned in both step 6 and Tips).
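For instance, the suggested comment trimming might look like the following. This is a sketch: the exact step wording in SKILL.md is assumed, but the redundant comment is quoted directly from the review.

```markdown
<!-- Before: the comment restates what the tool name already says -->
3. Call xr_look_at with the target's position. This orients the headset to face the target.

<!-- After -->
3. Call xr_look_at with the target's position.
```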

Conciseness (2 / 3): The content is mostly efficient and avoids explaining basic concepts, but some bullet points are somewhat verbose (e.g., explaining what positionRelativeToXROrigin is for, or that 'this orients the headset to face the target'). Some tips repeat guidance already in the workflow steps.

Actionability (3 / 3): Every step specifies the exact MCP tool to call with specific parameters, device names, and concrete numeric values for offsets. The example provides real coordinates and expected console log output, making this highly actionable and executable.

Workflow Clarity (3 / 3): The 8-step workflow is clearly sequenced with explicit validation checkpoints (screenshot verification in step 4, console log verification in step 8) and feedback loops (retry with adjustments if the click missed, move closer if the target is not visible). Error recovery is addressed at multiple points.

Progressive Disclosure (2 / 3): The content is well organized with clear sections (Workflow, Tips, Example), but everything is in a single file. The tips section could potentially be separated, and there are no references to external files for advanced scenarios or detailed API documentation for the MCP tools used.

Total: 10 / 12 (Passed)
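One way to act on the Progressive Disclosure note above would be to keep the core workflow in SKILL.md and move advanced material into a companion file that it links to. This is a sketch; the file name `REFERENCE.md` and the section contents are assumptions, not part of the reviewed skill.

```markdown
<!-- SKILL.md: keep the 8-step workflow and essential tips here -->
## Workflow
1. ...
8. Verify the expected console log output.

For advanced scenarios and detailed parameter documentation for the
MCP tools used, see [REFERENCE.md](./REFERENCE.md).
```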

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.
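To clear this warning, any non-standard top-level keys could be nested under a `metadata` block, as the validator suggests. A sketch follows; the report does not name the offending keys, so `author` and `version` here are placeholders.

```yaml
# Before (hypothetical): unknown top-level keys trigger the warning
# author: ...
# version: ...

# After: unrecognized keys moved under metadata
name: click-target
description: Find and click a target object in XR. ...
metadata:
  author: ...      # placeholder key
  version: ...     # placeholder key
```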

Total: 10 / 11 (Passed)

Repository: facebook/immersive-web-sdk (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.