Add and fix Detox E2E tests (smoke and regression) for MetaMask Mobile using withFixtures, Page Objects, and tests/framework. Use when creating a new spec, fixing a failing E2E test, adding page objects and selectors, or adding MetaMetrics analytics expectations (analyticsExpectations).
One source of truth for adding Detox E2E tests to MetaMask Mobile. Applies to: Claude Code (`.claude/commands/e2e-test.md`), Cursor, Copilot, Windsurf, and other AI agents.
Before writing or changing any E2E code: read this skill once, then open the reference(s) indicated by the decision tree for your task.
Guides you through adding a new E2E regression or smoke test end-to-end (for tag choice, check `tests/tags.js` and existing specs in the feature folder). Your job is to figure out whether the user needs to write a new spec, fix a failing test, or add page objects/selectors, then follow the corresponding path and open the relevant reference when that path indicates.
Decision tree — which reference to use:
Task → What do you need?
├─ Write new spec or add test steps
│ → Open references/writing-tests.md (spec structure, templates, FixtureBuilder patterns)
│ → If you need POM/selectors: also open references/page-objects.md
│ → If you need API or feature-flag mocks: also open references/mocking.md
│ → After writing: run lint/tsc, then open references/running-tests.md to run and debug
│
├─ Create or update Page Objects / selectors
│ → Open references/page-objects.md (POM structure, Matchers, Gestures, Assertions, selector conventions)
│ → When writing the spec: open references/writing-tests.md
│
├─ Mock API or feature flags
│ → Open references/mocking.md (testSpecificMock, setupRemoteFeatureFlagsMock, setupMockRequest)
│ → When writing the spec: open references/writing-tests.md
│
├─ MetaMetrics / Segment analytics assertions (`analyticsExpectations` on `withFixtures`)
│ → Open [tests/docs/analytics-e2e.md](../../../tests/docs/analytics-e2e.md) (config shape, teardown order, presets under `tests/helpers/analytics/expectations/`, `runAnalyticsExpectations`)
│ → When wiring a spec: still follow references/writing-tests.md for `withFixtures` usage
│
└─ Run tests, debug failures, or self-review
   → Open references/running-tests.md (build check, detox commands, common failures, retry patterns)

Do not read the full reference files until the decision tree or workflow sends you there.
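The "write new spec" path boils down to one `withFixtures`-wrapped body with state set up through `FixtureBuilder`. A self-contained sketch of that shape; every type and helper below (`Fixture`, `withDefaultState`, the `withFixtures` stub) is a hypothetical stand-in so the snippet runs on its own. The real APIs come from `tests/framework/index.ts` and are documented in references/writing-tests.md; never copy these stubs into a spec.

```typescript
// All names below are stand-ins so this sketch is runnable in isolation.
// Real imports: tests/framework/index.ts (withFixtures, FixtureBuilder, ...).

type Fixture = { state: Record<string, unknown> };

class FixtureBuilder {
  private fixture: Fixture = { state: {} };
  // Hypothetical helper; real builders expose feature-specific .withX() methods.
  withDefaultState(): this {
    this.fixture.state.onboarded = true;
    return this;
  }
  build(): Fixture {
    return this.fixture;
  }
}

// Stub: the real withFixtures boots the app with the fixture state and
// handles teardown around the test body.
async function withFixtures(
  options: { fixture: Fixture },
  testBody: () => Promise<void>,
): Promise<void> {
  await testBody();
}

let bodyRan = false;

// Shape of a spec: state via FixtureBuilder (never via UI interactions),
// everything inside one withFixtures wrapper, and a description string on
// every Gestures.*/Assertions.* call.
async function exampleSpec(): Promise<void> {
  await withFixtures(
    { fixture: new FixtureBuilder().withDefaultState().build() },
    async () => {
      // In a real spec (names illustrative):
      //   await Gestures.tap(MarketRow.firstItem, { description: 'opens market details' });
      //   await Assertions.expectVisible(MarketDetails.header, { description: 'details header shown' });
      bodyRan = true;
    },
  );
}

exampleSpec().then(() => console.log("spec body ran:", bodyRan));
```

The point of the wrapper is that setup and teardown are owned by the framework, so a spec body only interacts and asserts; see references/writing-tests.md for the real option surface.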
Core rules:
- `withFixtures` — every spec must be wrapped; no exceptions.
- No raw `element(by.id())` in spec files.
- Import from `tests/framework/index.ts` — never from individual files.
- Pass a `description` to every `Gestures.*` and `Assertions.*` call (e.g. `'opens market details'`).
- No `TestHelpers.delay()` — use `Assertions.*`, which has auto-retry.
- Use `FixtureBuilder` for state — do not set state through UI interactions.
- Selectors live in `*.testIds.ts` (co-located) or `tests/selectors/` (legacy).
- Tags: `SmokeE2E`, `SmokeTrade`, `SmokePredictions`, `SmokePerps`, `SmokeConfirmations`, `RegressionTrade`, `RegressionWallet`, etc. Check `tests/tags.js` for the full list and descriptions, and existing specs in the same feature folder to see which tag they use.

Workflow:
Step 0 → Understand requirement + choose type (smoke/regression)
Step 1 → Discover / create Page Objects and selectors
Step 2 → Write the spec (withFixtures + POM + correct tag)
Step 3 → Lint + TSC (fix all errors)
Step 4 → Run detox test locally
Step 5 → Iterate (fix → lint → run) until green

Documentation is split by action. Open only the reference that matches what you are doing.
| Action | File | When to open it |
|---|---|---|
| Writing or updating a spec | references/writing-tests.md | New spec file, spec structure, FixtureBuilder patterns, smoke/regression templates. |
| Page Objects and selectors | references/page-objects.md | Create or update POM classes, selector/testId conventions, Matchers/Gestures/Assertions API. |
| API and feature flag mocking | references/mocking.md | testSpecificMock, setupRemoteFeatureFlagsMock, setupMockRequest, shared mock files. |
| MetaMetrics / analytics expectations | tests/docs/analytics-e2e.md | analyticsExpectations on withFixtures, declarative checks, presets in tests/helpers/analytics/expectations/. |
| Running tests, debugging, fixing failures | references/running-tests.md | Build check, detox run commands, lint/tsc, common failures table, retry patterns, iteration loop. |
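The ban on `TestHelpers.delay()` exists because polling with a deadline is both faster and more reliable than a fixed sleep, and the framework's `Assertions.*` helpers retry automatically. The following is an illustrative reimplementation of that idea, not the real Assertions API (the actual helpers and their signatures are in references/page-objects.md):

```typescript
// Illustrative only: shows the auto-retry pattern behind Assertions.*.
// Polls a condition until it passes or a deadline expires, then fails with
// the caller-supplied description instead of silently sleeping.
async function assertWithRetry(
  description: string,
  check: () => boolean | Promise<boolean>,
  { timeoutMs = 2000, intervalMs = 100 }: { timeoutMs?: number; intervalMs?: number } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    if (await check()) return; // condition met: assertion passes
    if (Date.now() >= deadline) {
      throw new Error(`Assertion failed: ${description}`);
    }
    // Wait a short interval, then retry.
    await new Promise<void>((resolve) => setTimeout(resolve, intervalMs));
  }
}

// A condition that only becomes true after ~300 ms still passes, with no
// hand-tuned fixed delay in the test body.
let ready = false;
setTimeout(() => {
  ready = true;
}, 300);

assertWithRetry("element becomes visible", () => ready)
  .then(() => console.log("passed"))
  .catch((e: Error) => console.log(e.message));
```

A fixed `delay(500)` would either waste time or flake when the app is slower than expected; a polled assertion adapts to both cases, which is why the skill mandates `Assertions.*` over manual delays.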