Reviews optimization patches using a rigorous 5-persona peer review system. Requires unanimous approval backed by concrete benchmark data. A Duplicate Hunter persona prevents redundant work.
Review is the decision-maker. It does NOT record results.
Review judges using benchmarks and 5-persona review, then returns a structured report.
| Outcome | Votes | Action |
|---|---|---|
| APPROVED | 5/5 APPROVE | Return APPROVED report |
| REJECTED | Any REJECT | Return REJECTED report |
This skill requires 5 positional arguments passed by the caller:
| Arg | Field | Example | Used for |
|---|---|---|---|
| $ARGUMENTS[0] | bench | trace_agent | cargo criterion --bench flag |
| $ARGUMENTS[1] | fingerprint | ci/fingerprints/trace_agent_v04/lading.yaml | payloadtool config path |
| $ARGUMENTS[2] | file | lading_payload/src/trace_agent/v04.rs | report + duplicate check |
| $ARGUMENTS[3] | target | V04::to_bytes | report |
| $ARGUMENTS[4] | technique | buffer-reuse | report + duplicate check |
If any argument is missing -> REJECT. All 5 are required.
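A minimal sketch of that up-front guard, assuming the arguments arrive as positional parameters; `require_args` is a hypothetical helper name, not part of the skill:

```shell
# Illustrative guard: reject unless all five positional arguments are present.
require_args() {
  if [ "$#" -ne 5 ]; then
    echo "REJECTED: expected 5 arguments (bench, fingerprint, file, target, technique), got $#" >&2
    return 1
  fi
  echo "arguments OK"
}

require_args trace_agent ci/fingerprints/trace_agent_v04/lading.yaml \
  lading_payload/src/trace_agent/v04.rs V04::to_bytes buffer-reuse
```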
Derive the id from the file and technique arguments:
- $ARGUMENTS[2] (e.g. lading_payload/src/trace_agent/v04.rs → trace-agent-v04)
- $ARGUMENTS[4] (e.g. buffer-reuse)
- Joined with - → trace-agent-v04-buffer-reuse

Use this id in the report.
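The derivation can be sketched in shell; the exact slug rule (strip everything up to src/, drop the extension, hyphenate separators) is inferred from the single example above and may differ from the real convention:

```shell
# Inferred slug rule: take the path after src/, drop .rs, replace / and _ with -.
file="lading_payload/src/trace_agent/v04.rs"
technique="buffer-reuse"
slug=$(echo "$file" | sed -e 's|.*/src/||' -e 's|\.rs$||' -e 's|[/_]|-|g')
id="${slug}-${technique}"
echo "$id"   # trace-agent-v04-buffer-reuse
```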
Read the baseline benchmark files captured:
- /tmp/criterion-baseline.log — micro-benchmark baseline
- /tmp/baseline.json — macro-benchmark timing baseline
- /tmp/baseline-mem.txt — macro-benchmark memory baseline

If baseline data is missing -> REJECT. Baselines must be captured before any code change and before this skill is invoked.
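A sketch of the missing-baseline guard; `check_baselines` is a hypothetical helper name:

```shell
# Reject early when any baseline artifact is absent.
check_baselines() {
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "REJECTED: missing baseline file $f" >&2
      return 1
    fi
  done
}

check_baselines /tmp/criterion-baseline.log /tmp/baseline.json /tmp/baseline-mem.txt \
  || echo "baseline data missing -> REJECT"
```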
```
cargo criterion --bench $ARGUMENTS[0] 2>&1 | tee /tmp/criterion-optimized.log
```
Note: Criterion automatically compares against the last run and reports percentage changes.
Compare results — look for "change:" lines showing improvement/regression.
Example output:
```
time:   [1.2345 ms 1.2456 ms 1.2567 ms]
change: [-5.1234% -4.5678% -4.0123%]
```
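One way to turn such a change: line into a pass/fail signal; the parsing assumes the three-value bracket format shown above, with the middle value as Criterion's best estimate:

```shell
# Extract the midpoint estimate from a criterion "change:" line and flag regressions.
line='change: [-5.1234% -4.5678% -4.0123%]'
mid=$(echo "$line" | awk '{ gsub(/%/, "", $3); print $3 }')   # middle estimate: -4.5678
if awk -v m="$mid" 'BEGIN { exit !(m < 0) }'; then
  echo "improvement: ${mid}%"
else
  echo "regression: ${mid}%"
fi
```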
```
cargo build --release --bin payloadtool
hyperfine --warmup 3 --runs 30 --export-json /tmp/optimized.json \
  "./target/release/payloadtool $ARGUMENTS[1]"
```
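The two hyperfine exports can be compared with something like the following, assuming python3 is available; hyperfine's JSON export puts the mean wall time in seconds under results[0].mean, and `compare_means` is a hypothetical helper name:

```shell
# Print the percentage change in mean wall time between two hyperfine JSON exports.
compare_means() {
  python3 - "$1" "$2" <<'EOF'
import json, sys

base = json.load(open(sys.argv[1]))["results"][0]["mean"]
opt = json.load(open(sys.argv[2]))["results"][0]["mean"]
print(f"{(opt - base) / base * 100:+.2f}% change in mean wall time")
EOF
}

# Usage (after both runs have produced their JSON exports):
#   compare_means /tmp/baseline.json /tmp/optimized.json
```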
```
./target/release/payloadtool "$ARGUMENTS[1]" --memory-stats 2>&1 | tee /tmp/optimized-mem.txt
```
Compare results against the baseline (both hyperfine runs use --runs 30).

Check .claude/skills/lading-optimize-hunt/assets/db.yaml for the $ARGUMENTS[2] + $ARGUMENTS[4] combo; a match means this work has already been attempted.

Run ci/validate and validate that it passes completely.

Code review checklist:
- No .unwrap() or .expect() added (lading MUST NOT panic)
- No mod.rs files (per CLAUDE.md)
- use statements at file top (not inside functions)
- Inline format arguments ("{index}" not "{}")

If the optimization touches critical code:
```
ci/kani lading_throttle
ci/kani lading_payload
```
Kani constraints:
If Kani fails to run:
Duplicates, bugs, correctness issues, and missing benchmarks are all rejections. Describe the specific reason in the report's reason field.
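The duplicate check against the hunt database can be sketched as below; since the db.yaml schema isn't documented here, a plain text match on both the file path and the technique is an assumption, and `is_duplicate` is a hypothetical helper name:

```shell
# Crude text match for the file + technique combo in the hunt database;
# the real check may inspect structured YAML fields instead.
is_duplicate() {  # usage: is_duplicate <db.yaml> <file> <technique>
  grep -q "$2" "$1" && grep -q "$3" "$1"
}

if is_duplicate .claude/skills/lading-optimize-hunt/assets/db.yaml \
     "lading_payload/src/trace_agent/v04.rs" "buffer-reuse" 2>/dev/null; then
  echo "duplicate found -> REJECT"
fi
```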
Review does NOT record results and does NOT create files. Return a structured YAML report to the caller.
Fill in the appropriate template and return the completed YAML:
| Verdict | Template |
|---|---|
| approved | .claude/skills/lading-optimize-review/assets/approved.template.yaml |
| rejected | .claude/skills/lading-optimize-review/assets/rejected.template.yaml |
Templates live in the .claude/skills/lading-optimize-review/assets/ directory. Fill in:
- id → generated ID (see "Generate Report ID" above)
- target → $ARGUMENTS[2]:$ARGUMENTS[3] (e.g. lading_payload/src/trace_agent/v04.rs:V04::to_bytes)
- technique → $ARGUMENTS[4]
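For orientation, a rejected report might look roughly like this; the field names are inferred from this document, and the authoritative shapes are the templates in assets/:

```yaml
# Illustrative sketch only; consult rejected.template.yaml for the real shape.
verdict: rejected
id: trace-agent-v04-buffer-reuse
target: lading_payload/src/trace_agent/v04.rs:V04::to_bytes
technique: buffer-reuse
reason: "Duplicate of a prior attempt recorded in the hunt database"
```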