Pair a remote AI agent with your browser. One command generates a setup key and prints instructions the other agent can follow to connect. Works with OpenClaw, Hermes, Codex, Cursor, or any agent that can make HTTP requests. The remote agent gets its own tab with scoped access (read+write by default, admin on request). Use when asked to "pair agent", "connect agent", "share browser", "remote browser", "let another agent use my browser", or "give browser access". (gstack) Voice triggers (speech-to-text aliases): "pair agent", "connect agent", "share my browser", "remote browser access".
Security
4 findings — 1 critical severity, 1 high severity, 2 medium severity. Installing this skill is not recommended: please review these findings carefully if you do intend to do so.
Detected a prompt injection in the skill instructions. The skill contains hidden or deceptive instructions that fall outside its stated purpose and attempt to override the agent’s safety guidelines or intended behavior.
Potential prompt injection detected (high risk: 0.80). The skill includes many out-of-scope operational instructions (telemetry logging that actually records repo names despite promising not to, automatic gbrain sync/push/pull, repo file creation/commits, and other side-effecting commands) which are not necessary for "pairing a browser" and some behaviors could run with insufficient consent, so this is a deceptive/intrusive instruction injection beyond the skill's stated purpose.
The skill handles credentials insecurely by requiring the agent to include secret values verbatim in its generated output. This exposes credentials in the agent’s context and conversation history, creating a risk of data exfiltration.
Insecure credential handling detected (high risk: 1.00). The skill explicitly requires printing the full instruction block verbatim (including the one-time setup key and potentially tunnel/auth tokens) so the LLM must emit secret values in its output, creating an exfiltration risk.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.80). This skill explicitly enables a paired remote agent to browse arbitrary public websites and read page content (see "What the remote agent can do" and the Step 4 remote/ngrok pairing flow), and the workflow also performs external fetches (e.g., curl to bun.sh and git fetch/brain-sync) as part of normal execution, so untrusted third‑party content could be ingested and materially influence agent behavior.
The skill fetches instructions or code from an external URL at runtime, and the fetched content directly controls the agent’s prompts or executes code. This dynamic dependency allows the external source to modify the agent’s behavior without any changes to the skill itself.
Potentially malicious external URL detected (high risk: 0.90). The skill's setup step will at runtime download and execute the Bun installer script via curl "https://bun.sh/install" (saved to a tmpfile and run with bash) when bun is missing, which fetches remote code and runs it as a required dependency.
db9447c
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.