Create and review E2E performance tests that measure real user flows on real devices with TimerHelper and PerformanceTracker. Use when creating, editing, or reviewing performance tests, when the user mentions perf tests, timing measurements, performance thresholds, or files in tests/performance/.
For the full reference guide with templates, examples, and decision trees, read reference.md.
When creating a performance test, follow these rules:

- Actions go OUTSIDE `measure()`, assertions INSIDE `measure()` — measure app response time, not user interactions.
- Create one `TimerHelper` per measurable step, with platform-specific thresholds `{ ios: <ms>, android: <ms> }`.
- Use screen objects from `wdio/screen-objects/`, never raw selectors.
- Every screen object must have `device` assigned before use.
- Register timers with `performanceTracker.addTimers()` and attach results with `performanceTracker.attachToTest(testInfo)`.

Pick the folder by starting condition:

| Starting condition | Folder |
|---|---|
| User already has wallet (login screen) | tests/performance/login/ |
| Fresh install (onboarding) | tests/performance/onboarding/ |
| Dapp connection needed | tests/performance/mm-connect/ |
A minimal spec template (`ScreenA`/`ScreenB` are placeholders; import paths for screen objects are illustrative):

```javascript
import { test } from '../../framework/fixtures/performance';
import TimerHelper from '../../framework/TimerHelper';
import { login } from '../../framework/utils/Flows.js';
import { PerformanceLogin } from '../../tags.performance.js';
// Screen objects live in wdio/screen-objects/ (paths shown are illustrative)
import ScreenA from '../../../wdio/screen-objects/ScreenA';
import ScreenB from '../../../wdio/screen-objects/ScreenB';

test.describe(`${PerformanceLogin}`, () => {
  test(
    'Descriptive name',
    { tag: '@team-name' },
    async ({ device, performanceTracker }, testInfo) => {
      ScreenA.device = device;
      ScreenB.device = device;
      await login(device);

      const timer = new TimerHelper(
        'Time since the user clicks X until Y is visible',
        { ios: 2000, android: 3000 },
        device,
      );
      await ScreenA.tapButton(); // action OUTSIDE measure()
      await timer.measure(() => ScreenB.isVisible()); // assertion INSIDE measure()

      performanceTracker.addTimers(timer);
      await performanceTracker.attachToTest(testInfo);
    },
  );
});
```

When editing an existing test in `tests/performance/`, follow these rules:
- Keep the existing `{ ios, android }` threshold values unless the change is justified.
- Every `TimerHelper` needs a matching `performanceTracker.addTimers()` call — timers never passed to `addTimers()` are orphaned and cause silent failures.
- Search `tests/performance/` for usages before changing shared helpers.
- When modifying screen objects in `wdio/screen-objects/`, ensure backward compatibility — existing tests must not break.
- Preserve the `device` getter/setter pattern on screen objects.
- Reuse shared flows from `tests/framework/utils/Flows.js` (e.g., `login`, `importSRPFlow`).
- `addTimers()` accepts a `TimerHelper[]`, so a test with several timers can register them in one call.
- Use the `screensSetup(device)` pattern for tests with many screen objects (see `perps-position-management.spec.js`).
- Verify that `attachToTest(testInfo)` is still called at the end, that every screen object still has `device` assigned, and that actions stay OUTSIDE `measure()` with assertions INSIDE.

Common mistakes to avoid:

- `jest.mock(...)` — no mocking in performance tests.
- `import { test } from 'appwright'` — always import from `fixtures/performance`.
- Actions inside `measure()` — only assertions/waits belong there.
- Missing `device` assignment on screen objects.
- Hard-coded passwords — use `getPasswordForScenario()`.

Typical threshold ranges by action type:

| Action type | iOS | Android |
|---|---|---|
| Simple screen transition | 500–1500 ms | 600–1800 ms |
| Data loading (API + render) | 1500–5000 ms | 2000–7000 ms |
| Dapp connection (cross-context) | 8000–20000 ms | 12000–30000 ms |
| Quote/swap execution | 7000–9000 ms | 7000–9000 ms |
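To make the timing contract behind these thresholds concrete, here is a minimal, self-contained sketch of a measure-with-threshold helper. This is NOT the project's actual `TimerHelper` — the class, field names, and demo values are invented for illustration:

```javascript
// Minimal sketch of the measure-with-threshold contract.
// NOT the project's TimerHelper — names are invented for illustration.
class SketchTimer {
  constructor(description, thresholds, platform) {
    this.description = description;
    this.thresholds = thresholds; // { ios: <ms>, android: <ms> }
    this.platform = platform;     // 'ios' | 'android'
  }

  // Only assertions/waits go inside measure(); the triggering action
  // happens before it is called.
  async measure(assertion) {
    const start = Date.now();
    await assertion();
    this.durationMs = Date.now() - start;
    this.passed = this.durationMs <= this.thresholds[this.platform];
    return this.passed;
  }
}

async function demo() {
  const timer = new SketchTimer(
    'Time until the fake screen is visible',
    { ios: 1500, android: 1800 }, // "simple screen transition" range
    'android',
  );
  // Simulate a visibility wait that resolves after ~50 ms.
  const ok = await timer.measure(
    () => new Promise((resolve) => setTimeout(resolve, 50)),
  );
  console.log(ok, timer.durationMs <= 1800); // prints: true true
}
demo();
```

The point of the sketch: the timer measures only the wait for the app's response, and pass/fail is decided against the platform-specific threshold, which is why the table above gives separate iOS and Android ranges.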
Key files:

- `tests/framework/fixtures/performance/performance-fixture.ts`
- `tests/framework/TimerHelper.ts`
- `tests/tags.performance.js`
- `tests/framework/utils/Flows.js`
- `wdio/screen-objects/`
- `tests/performance/login/`, `tests/performance/onboarding/`

Commit: `bee9b14`
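The `screensSetup(device)` pattern recommended in the editing rules can be sketched as follows. The object shapes and names below are invented for illustration — the repo's real helper lives alongside `perps-position-management.spec.js`:

```javascript
// Illustrative sketch of the screensSetup(device) idea: assign the device
// to every screen object in one place instead of repeating
// ScreenX.device = device for each screen in every test.
const screens = {
  screenA: { device: null, async tapButton() { /* tap */ } },
  screenB: { device: null, async isVisible() { return true; } },
};

function screensSetup(device) {
  for (const screen of Object.values(screens)) {
    screen.device = device;
  }
  return screens;
}

const fakeDevice = { platform: 'android' };
const { screenA, screenB } = screensSetup(fakeDevice);
console.log(screenA.device === fakeDevice && screenB.device === fakeDevice); // prints: true
```

Centralizing the assignment keeps the "device assigned before use" rule from being violated when a new screen object is added to a long test.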