Measure startup, rendering, memory, jank, vitals, logs, and crash signals for Android apps with actionable traces.
Impact: Pending (no eval scenarios have been run).
Issues: Passed (no known issues).
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./skills/android-performance-observability/SKILL.md`

Quality
Discovery
82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and domain-relevant trigger terms that Android developers would naturally use. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The description is concise, uses third person voice correctly, and carves out a distinct niche.
Suggestions
Add a 'Use when...' clause such as 'Use when the user asks about Android app performance, profiling, benchmarking, ANRs, frame drops, or app launch time optimization.'
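A minimal sketch of how that clause could sit in the skill's frontmatter, assuming the skill uses standard SKILL.md YAML frontmatter with a `description` field (the field layout shown here is illustrative, not this skill's actual file):

```yaml
---
name: android-performance-observability
description: >
  Measure startup, rendering, memory, jank, vitals, logs, and crash
  signals for Android apps with actionable traces. Use when the user
  asks about Android app performance, profiling, benchmarking, ANRs,
  frame drops, or app launch time optimization.
---
```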
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Measure startup, rendering, memory, jank, vitals, logs, and crash signals', with the qualifier 'actionable traces'. These are concrete, well-defined performance measurement domains. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (measure various Android performance metrics with actionable traces), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which caps this dimension at 2 per the rubric. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'startup', 'rendering', 'memory', 'jank', 'vitals', 'logs', 'crash signals', 'Android apps', 'traces'. These are terms Android developers naturally use when discussing performance profiling. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific to Android app performance profiling, with distinct terms like 'jank', 'vitals', 'crash signals', and 'actionable traces'. This is a clear niche unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
42%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is well-structured as an overview with good progressive disclosure and clear handoff points, but it falls short on actionability—the core weakness. It reads more like a decision framework or checklist than a skill that teaches Claude how to actually execute profiling tasks. The examples are superficial (a grep command and a debug build command) rather than demonstrating real profiling workflows with executable code.
Suggestions
Add concrete, executable examples for the primary use cases: a Macrobenchmark setup snippet, a Baseline Profile generation command sequence, and a Perfetto trace capture command with interpretation guidance.
Replace the abstract workflow steps with specific command sequences including validation checkpoints, e.g., 'Run `./gradlew :macrobenchmark:connectedCheck` → verify median startup < X ms → if regression, capture Perfetto trace with `adb shell perfetto ...`'.
Make the Examples section demonstrate real profiling scenarios with expected outputs rather than trivial grep/build commands that don't teach profiling.
Consolidate Guardrails and Anti-Patterns sections—they convey overlapping messages and could be tightened into a single concise section.
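As an illustration of the kind of executable example the suggestions above call for, here is a minimal Macrobenchmark cold-startup sketch using `androidx.benchmark.macro`. The package name and module layout are assumptions for illustration, not content taken from the skill under review:

```kotlin
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Lives in a separate macrobenchmark test module targeting the app's release build.
@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",          // hypothetical application id
        metrics = listOf(StartupTimingMetric()),  // reports time-to-initial-display
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait()
    }
}
```

Run on a physical device with something like `./gradlew :macrobenchmark:connectedCheck` (module name assumed); the benchmark emits median startup timings and per-iteration Perfetto traces into the module's build outputs, which gives the suggested "verify median startup, then inspect the trace" checkpoint something concrete to act on.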
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some sections that are more philosophical than actionable (e.g., Guardrails and Anti-Patterns overlap significantly in message). The workflow section describes general principles rather than specific steps, adding bulk without proportional value. | 2 / 3 |
| Actionability | The skill lacks concrete, executable code or commands. The 'Examples' section provides grep and gradle commands but they are trivial and don't demonstrate actual profiling workflows. There are no executable Macrobenchmark configurations, no Perfetto trace commands, no Baseline Profile setup code, just abstract guidance about what tools to pick. | 1 / 3 |
| Workflow Clarity | The workflow lists a logical sequence (classify → measure → pick tool → change one thing → compare), but it lacks validation checkpoints, specific commands at each step, and feedback loops for when measurements are noisy or inconclusive. For a skill involving complex multi-step profiling, the absence of explicit verification steps is a gap. | 2 / 3 |
| Progressive Disclosure | The skill appropriately references `references/patterns.md` and `references/scenarios.md` for deeper content, signals handoff skills clearly, and keeps the main file as an overview. References are one level deep and well-signaled. | 3 / 3 |
| Total | | 8 / 12 Passed |
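To illustrate the measurement-with-checkpoints style the Workflow Clarity reasoning asks for, a jank measurement can reuse the same benchmark rule with `FrameTimingMetric`, which reports frame-duration percentiles that can be compared run-over-run. The resource id, interaction, and package name below are hypothetical:

```kotlin
import androidx.benchmark.macro.FrameTimingMetric
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.uiautomator.By
import androidx.test.uiautomator.Direction
import org.junit.Rule
import org.junit.Test

class ScrollJankBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    @Test
    fun flingFeed() = benchmarkRule.measureRepeated(
        packageName = "com.example.app",        // hypothetical application id
        metrics = listOf(FrameTimingMetric()),  // frame-duration P50/P90/P95/P99
        iterations = 5,
        startupMode = StartupMode.WARM
    ) {
        startActivityAndWait()
        // "feed" is a hypothetical scrolling-list resource id
        val list = device.findObject(By.res(packageName, "feed"))
        list.fling(Direction.DOWN)
        device.waitForIdle()
    }
}
```

Comparing the reported percentiles between a baseline run and a post-change run gives the feedback loop the dimension notes as missing: a regression beyond measurement noise is the trigger to capture and inspect a full Perfetto trace before changing anything else.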
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 10 / 11 Passed |