
Optimization Hunt

Coordinates optimization attempts: captures baselines, implements changes, invokes review, and records all outcomes.

Role: Coordinator and Recorder

Hunt is the coordinator and recorder — it captures baselines, implements changes, hands off to review, and records all outcomes.

Hunt does NOT:

  • Run post-change benchmarks (review does this)
  • Make pass/fail decisions on optimizations (review does this)

Hunt DOES:

  • Record all verdicts and outcomes after review returns (report in .claude/skills/lading-optimize-hunt/assets/db/<id>.yaml, index entry in .claude/skills/lading-optimize-hunt/assets/db.yaml)

Phase 0: Pre-flight

Run /lading-preflight.


Phase 1: Find Target

Run /lading-optimize-find-target.

It returns a YAML block with six fields: pattern, technique, target, file, bench, and fingerprint. Print it out.
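For orientation only, the returned block might look like this (every value below is hypothetical; find-target supplies the real ones):

```yaml
pattern: hot-loop allocation                      # hypothetical
technique: buffer-reuse                           # hypothetical
target: generate_line                             # hypothetical
file: lading_payload/src/syslog.rs                # hypothetical
bench: lading_payload/benches/syslog.rs           # hypothetical
fingerprint: ci/fingerprints/syslog/lading.yaml   # hypothetical
```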


Phase 2: Establish Baseline

CRITICAL: Capture baseline metrics BEFORE making any code changes.

Identify the Benchmark Target

Use the bench and fingerprint fields from find-target's output. The fingerprint field is a repo-relative path that can be used as-is; the bench field must be reduced to the bare benchmark name (no path, no .rs extension):

BENCH=<bench field, basename without extension>   # e.g. "lading_payload/benches/syslog.rs" -> BENCH=syslog
PAYLOADTOOL_CONFIG=<fingerprint field>            # e.g. "ci/fingerprints/syslog/lading.yaml"
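If you prefer to derive BENCH mechanically rather than by hand, a small shell sketch (assuming the bench field is always a path ending in .rs; the example value is hypothetical):

```shell
bench_field="lading_payload/benches/syslog.rs"   # example value from find-target's bench field

# basename strips the directory; the second argument strips the .rs suffix
BENCH=$(basename "$bench_field" .rs)

PAYLOADTOOL_CONFIG="ci/fingerprints/syslog/lading.yaml"  # fingerprint field, used as-is

echo "$BENCH"   # -> syslog
```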

Stage 1: Clear previous benchmarks

Clear any previously captured baselines so stale data cannot contaminate this run.

rm -f /tmp/criterion-baseline.log /tmp/baseline.json /tmp/baseline-mem.txt
rm -rf target/criterion

Stage 2: Micro-benchmark Baseline

Run only the benchmark for your target:

cargo criterion --bench "$BENCH" 2>&1 | tee /tmp/criterion-baseline.log

Stage 3: Macro-benchmark Baseline

Use the matching fingerprint config:

cargo build --release --bin payloadtool
hyperfine --warmup 3 --runs 30 --export-json /tmp/baseline.json \
  "./target/release/payloadtool $PAYLOADTOOL_CONFIG"

./target/release/payloadtool "$PAYLOADTOOL_CONFIG" --memory-stats 2>&1 | tee /tmp/baseline-mem.txt

Baseline captured. These files will be consumed by review:

  • /tmp/criterion-baseline.log — micro-benchmark baseline
  • /tmp/baseline.json — macro-benchmark timing baseline
  • /tmp/baseline-mem.txt — macro-benchmark memory baseline

CRITICAL: All benchmarks must complete before continuing.


Phase 3: Implement

Make ONE change. Keep it focused and minimal.

Before proceeding, ALL changes must pass:

ci/validate

No exceptions. If ci/validate fails, fix the issue before continuing.

If ci/validate repeatedly fails on a pre-existing bug (not caused by your change), document it and stop.


Phase 4: Hand Off to Review

Run /lading-optimize-review with the target fields as positional arguments:

/lading-optimize-review <bench> <fingerprint> <file> <target> <technique>

Where:

  • <bench> — benchmark name from find-target's bench field, without path or extension (e.g. trace_agent)
  • <fingerprint> — repo-relative path from find-target's fingerprint field
  • <file> — repo-relative path from find-target's file field
  • <target> — function name from find-target's target field
  • <technique> — technique from find-target's technique field
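Putting the five fields together, an invocation might look like this (all argument values are hypothetical; substitute the fields from your find-target output):

```
/lading-optimize-review syslog ci/fingerprints/syslog/lading.yaml \
  lading_payload/src/syslog.rs generate_line buffer-reuse
```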

It returns a YAML report. Print it out.


Phase 5: Recording

After review returns its YAML report, record the result. Every outcome MUST be recorded.

Step 1: Write the Report

Write review's YAML report verbatim to .claude/skills/lading-optimize-hunt/assets/db/<id>.yaml. Do not modify, reformat, or add to the report content — it is the authoritative record from review.

Step 2: Update the Index

Add an entry to .claude/skills/lading-optimize-hunt/assets/db.yaml following the format in .claude/skills/lading-optimize-hunt/assets/index.template.yaml.
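The two recording steps can be sketched in shell, assuming review's report was saved to /tmp/review-report.yaml and you have chosen an id for this run (both assumptions; the skill does not mandate these names):

```shell
SKILL_DIR=.claude/skills/lading-optimize-hunt/assets
id="2024-001"   # hypothetical id; use your run's actual id

# Step 1: write the report verbatim -- cp preserves it byte-for-byte
mkdir -p "$SKILL_DIR/db"
cp /tmp/review-report.yaml "$SKILL_DIR/db/$id.yaml"

# Step 2: append an index entry to db.yaml, following the field
# layout in index.template.yaml (not reproduced here)
```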


Repository: DataDog/lading