Review Android UI flows for empty, loading, error, offline, and edge-case behavior before release.
Overall score: 58%
Evals: Pending — no eval scenarios have been run
Issues: Passed — no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/android-ui-states-validation/SKILL.md

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is specific and distinctive, clearly naming the platform (Android), the activity (UI flow review), and the exact categories of behavior to check. Its main weakness is the lack of an explicit 'Use when...' clause, which limits completeness; it would also benefit from additional natural trigger terms users might use when requesting this type of review.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when reviewing Android screens before release, checking state handling, or auditing UI for missing empty/loading/error states.'
Include additional natural trigger terms like 'screen states', 'QA checklist', 'UI audit', 'pre-release review', or 'state handling' to improve discoverability.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: reviewing 'empty, loading, error, offline, and edge-case behavior' in 'Android UI flows' before release. These are concrete, well-defined review categories. | 3 / 3 |
| Completeness | Clearly answers 'what' (review Android UI flows for specific state behaviors), but lacks an explicit 'Use when...' clause. The 'before release' phrase partially implies when, but it's not explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'Android', 'UI flows', 'empty', 'loading', 'error', 'offline', 'edge-case', and 'release', but misses common user variations like 'state handling', 'screen states', 'QA checklist', 'UI review', or 'pre-release check'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Android-specific, UI flow review, focused on specific state categories (empty/loading/error/offline/edge-case), and scoped to pre-release. Unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level framework for Android UI state validation, with clear structure and some concrete commands, but it falls short on actionability: the core workflow is descriptive rather than executable, with no concrete state matrix example or validation criteria. The workflow lacks explicit feedback loops for what is essentially a validation/review process, and the examples reference fictional projects without showing expected outputs or what a completed review looks like.
Suggestions
Add a concrete example of a state matrix (e.g., a markdown table showing screen × state combinations with expected UI behavior) so Claude knows exactly what to produce.
Include explicit validation checkpoints in the workflow, such as 'Verify each cell in the state matrix has a defined UI response before proceeding to step 4' with criteria for pass/fail.
Show expected output for at least one example—what does a completed UI states review look like? Include a sample review artifact or checklist output.
Remove or tighten the 'When To Use' section—the trigger phrases and handoff explanations are verbose and could be reduced to 2-3 bullet points.
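The suggestions above ask for concrete state handling the skill could point to. As a hedged illustration (this sketch is not from the skill itself; the names and messages are hypothetical), the five review categories can be modeled as a sealed Kotlin class, so an exhaustive `when` forces every screen to define behavior for each state:

```kotlin
// Hypothetical sketch: the five review categories (empty, loading, error,
// offline, content/edge-case) as a sealed UI state hierarchy.
sealed class UiState {
    object Loading : UiState()
    object Empty : UiState()
    data class Error(val message: String) : UiState()
    object Offline : UiState()
    data class Content(val items: List<String>) : UiState()
}

// An exhaustive `when` over a sealed class will not compile if a state
// is missing a branch, which is one way to make "every cell in the state
// matrix has a defined UI response" checkable.
fun describe(state: UiState): String = when (state) {
    UiState.Loading -> "show progress indicator"
    UiState.Empty -> "show empty-state message and call to action"
    is UiState.Error -> "show retry affordance: ${state.message}"
    UiState.Offline -> "show cached data with offline banner"
    is UiState.Content -> "render ${state.items.size} items"
}
```

A review artifact could then simply enumerate each screen's `UiState` branches against the state matrix, flagging any screen whose renderer lacks a branch.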
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably concise but includes some sections that feel padded—e.g., the 'When To Use' section over-explains trigger conditions, and the 'Done Checklist' restates obvious quality expectations. The anti-patterns and guardrails sections, while useful, overlap somewhat. | 2 / 3 |
| Actionability | The skill provides some concrete commands (gradle test commands, a Python eval script), but the core workflow steps are abstract and descriptive rather than executable. There's no concrete example of a state matrix, no code showing how to implement or validate a specific state, and the examples reference fictional projects without showing expected outputs. | 2 / 3 |
| Workflow Clarity | The 5-step workflow is sequenced and logical, but lacks explicit validation checkpoints or feedback loops. For a skill involving pre-release validation of multiple UI states (a complex, multi-step process), there's no 'if validation fails, do X' step, no concrete criteria for when a state is 'covered', and the handoff decision in step 5 is vague. | 2 / 3 |
| Progressive Disclosure | The content is organized into clear sections with references to handoff skills and official docs, which is good. However, there are no references to deeper companion files (e.g., a STATE_MATRIX_TEMPLATE.md or EXAMPLES.md), and the inline content could benefit from splitting the state matrix methodology into a separate reference rather than leaving it abstract. | 2 / 3 |
| Total | | 8 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 10 / 11 — Passed |
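The `metadata_field` warning above means the spec expects `metadata` to be a flat map of string keys to string values. A minimal frontmatter sketch that would satisfy it (the key names and values here are hypothetical, not taken from the actual SKILL.md):

```yaml
# Hypothetical SKILL.md frontmatter: `metadata` as a flat string-to-string map.
name: android-ui-states-validation
description: Review Android UI flows for empty, loading, error, offline, and edge-case behavior before release.
metadata:
  platform: "android"      # quoted string value
  category: "ui-review"    # no nested maps, lists, or bare numbers
```

A nested map or a numeric value under `metadata` would trigger the warning.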
c5bf673
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.