
chrome-devtools

Browser automation, debugging, and performance analysis using Puppeteer CLI scripts. Use for automating browsers, taking screenshots, analyzing performance, monitoring network traffic, web scraping, form automation, and JavaScript debugging.

Quality: 100% (follows best practices)
Impact: Pending (no eval scenarios have been run)
Security (by Snyk): Risky. Do not use without reviewing.
Chrome DevTools Agent Skill

Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.

Quick Start

Installation

Step 1: Install System Dependencies (Linux/WSL only)

On Linux/WSL, Chrome requires system libraries. Install them first:

cd .claude/skills/chrome-devtools/scripts
./install-deps.sh  # Auto-detects OS and installs required libs

Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro

macOS/Windows: Skip this step (dependencies bundled with Chrome)

Step 2: Install Node Dependencies

npm install  # Installs puppeteer, debug, yargs

Test

node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}

Available Scripts

All scripts are in .claude/skills/chrome-devtools/scripts/

Script Usage

  • ./scripts/README.md

Core Automation

  • navigate.js - Navigate to URLs
  • screenshot.js - Capture screenshots (full page or element)
  • click.js - Click elements
  • fill.js - Fill form fields
  • evaluate.js - Execute JavaScript in page context

Analysis & Monitoring

  • snapshot.js - Extract interactive elements with metadata
  • console.js - Monitor console messages/errors
  • network.js - Track HTTP requests/responses
  • performance.js - Measure Core Web Vitals + record traces

Usage Patterns

Single Command

cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png

Important: Always save screenshots to the ./docs/screenshots directory.

Chain Commands (reuse browser)

# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "user@example.com" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"

Parse JSON Output

# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'

# Save to file
node network.js --url https://example.com --output /tmp/requests.json

Common Workflows

Web Scraping

node evaluate.js --url https://example.com --script "
  Array.from(document.querySelectorAll('.item')).map(el => ({
    title: el.querySelector('h2')?.textContent,
    link: el.querySelector('a')?.href
  }))
" | jq '.result'
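The JSON that evaluate.js returns can be post-processed further with jq, for example flattened into tab-separated lines for use with standard Unix tools. A minimal sketch using a hard-coded sample in place of real script output (the .result shape is assumed from the command above):

```shell
# Hard-coded sample standing in for evaluate.js output (assumed shape)
SCRAPED='{"success": true, "result": [
  {"title": "First post",  "link": "https://example.com/1"},
  {"title": "Second post", "link": "https://example.com/2"}
]}'

# Flatten each result object into a tab-separated line
TSV=$(echo "$SCRAPED" | jq -r '.result[] | [.title, .link] | @tsv')
echo "$TSV"
```

From here the lines can be piped into cut, sort, or awk as usual.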

Performance Testing

PERF=$(node performance.js --url https://example.com)
LCP=$(echo "$PERF" | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
  echo "✓ LCP passed: ${LCP}ms"
else
  echo "✗ LCP failed: ${LCP}ms"
fi

Form Automation

node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"

Error Monitoring

node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'

Script Options

All scripts support:

  • --headless false - Show browser window
  • --close false - Keep browser open for chaining
  • --timeout 30000 - Set timeout (milliseconds)
  • --wait-until networkidle2 - Wait strategy

See ./scripts/README.md for complete options.

Output Format

All scripts output JSON to stdout:

{
  "success": true,
  "url": "https://example.com",
  ... // script-specific data
}

Errors go to stderr:

{
  "success": false,
  "error": "Error message"
}
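Because every script reports a boolean success field, a shell wrapper can branch on it with jq -e, which sets its exit status from the filter result. A minimal sketch, using a hard-coded sample payload in place of real script output:

```shell
# Hard-coded sample standing in for a script's stdout (shape shown above)
RESULT='{"success": true, "url": "https://example.com", "title": "Example Domain"}'

# jq -e exits 0 when the filter result is truthy, so it works directly in `if`
if echo "$RESULT" | jq -e '.success' > /dev/null; then
  TITLE=$(echo "$RESULT" | jq -r '.title')
  echo "ok: $TITLE"
else
  echo "failed: $(echo "$RESULT" | jq -r '.error')" >&2
fi
```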

Finding Elements

Use snapshot.js to discover selectors:

node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
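The snapshot output can be narrowed with a jq select filter, for example to pull only the selector for a given tag. A sketch against a hard-coded sample (the element shape is assumed from the jq projection above):

```shell
# Hard-coded sample standing in for snapshot.js output (assumed shape)
SNAP='{"success": true, "elements": [
  {"tagName": "input",  "text": "",       "selector": "#email"},
  {"tagName": "button", "text": "Submit", "selector": "button[type=submit]"}
]}'

# Keep only button elements and print their selectors
SEL=$(echo "$SNAP" | jq -r '.elements[] | select(.tagName == "button") | .selector')
echo "$SEL"
```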

Troubleshooting

Common Errors

"Cannot find package 'puppeteer'"

  • Run: npm install in the scripts directory

"error while loading shared libraries: libnss3.so" (Linux/WSL)

  • Missing system dependencies
  • Fix: Run ./install-deps.sh in scripts directory
  • Manual install: sudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1

"Failed to launch the browser process"

  • Check system dependencies installed (Linux/WSL)
  • Verify Chrome downloaded: ls ~/.cache/puppeteer
  • Try: npm rebuild then npm install

Chrome not found

  • Puppeteer auto-downloads Chrome during npm install
  • If failed, manually trigger: npx puppeteer browsers install chrome

Script Issues

Element not found

  • Get snapshot first to find correct selector: node snapshot.js --url <url>

Script hangs

  • Increase timeout: --timeout 60000
  • Change wait strategy: --wait-until load or --wait-until domcontentloaded

Blank screenshot

  • Wait for page load: --wait-until networkidle2
  • Increase timeout: --timeout 30000

Permission denied on scripts

  • Make executable: chmod +x *.sh

Reference Documentation

Detailed guides available in ./references/:

  • CDP Domains Reference - 47 Chrome DevTools Protocol domains
  • Puppeteer Quick Reference - Complete Puppeteer API patterns
  • Performance Analysis Guide - Core Web Vitals optimization

Advanced Usage

Custom Scripts

Create custom scripts using shared library:

import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';
// Your automation logic

Direct CDP Access

const client = await page.createCDPSession();
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });

See reference documentation for advanced patterns and complete API coverage.

External Resources

  • Puppeteer Documentation
  • Chrome DevTools Protocol
  • Scripts README
Repository: einverne/dotfiles