R benchmarking, profiling, and performance analysis with reproducibility and measurement rigor. Use when timing R code execution, profiling with Rprof or profvis, measuring memory allocations, comparing function performance, or optimizing bottlenecks—e.g., "benchmark R function", "profvis profiling", "microbenchmark comparison", "performance analysis", "memory profiling".
- Score: 92 (92%), "Does it follow best practices?"
- Impact: 1.54x
- Average score across 3 eval scenarios: 88%
- Status: Passed, no known issues
### bench::mark microbenchmark with GC and equivalence

| Criterion | Baseline | With skill |
|---|---:|---:|
| Uses bench::mark | 0% | 100% |
| Multiple iterations | 62% | 100% |
| Result equivalence check | 50% | 100% |
| GC filtering addressed | 20% | 100% |
| mem_alloc column interpreted | 0% | 100% |
| itr/sec or gc/sec interpreted | 0% | 100% |
| GC pitfall called out | 50% | 100% |
| Tool choice rationale | 0% | 100% |
| Relative comparison | 0% | 0% |
| Fixed inputs / reproducibility | 100% | 100% |
| No I/O in benchmark | 100% | 100% |
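The criteria above map onto a fairly standard `bench::mark()` workflow. A minimal sketch, assuming the `bench` package is installed; the two expressions and the input size are illustrative, not taken from the eval itself:

```r
library(bench)

set.seed(42)                     # fixed input for reproducibility
x <- runif(1e5)

# Compare two equivalent ways to compute running means.
# check = TRUE verifies both expressions return equivalent results;
# filter_gc = TRUE drops iterations interrupted by garbage collection.
res <- bench::mark(
  loop = {
    out <- numeric(length(x))
    s <- 0
    for (i in seq_along(x)) {
      s <- s + x[i]
      out[i] <- s / i
    }
    out
  },
  vectorised = cumsum(x) / seq_along(x),
  check = TRUE,
  filter_gc = TRUE,
  min_iterations = 50            # multiple iterations, not a single run
)

# Key columns: median (timing), mem_alloc (memory allocated),
# `itr/sec` (throughput), `gc/sec` (GC pressure).
print(res[, c("expression", "median", "mem_alloc", "itr/sec")])
```

`filter_gc = TRUE` is already the default; passing it explicitly documents the intent behind the "GC filtering addressed" criterion, and `check = TRUE` covers the equivalence check.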
### Rprof and profvis profiling workflow

| Criterion | Baseline | With skill |
|---|---:|---:|
| Rprof used | 100% | 100% |
| interval = 0.01 | 100% | 100% |
| event = 'cpu' | 100% | 100% |
| summaryRprof called | 100% | 100% |
| by.self / by.total explained | 100% | 100% |
| profvis used | 100% | 0% |
| Elapsed vs CPU pitfall | 100% | 100% |
| Sampling interval caveat | 100% | 100% |
| Tool choice rationale | 62% | 50% |
| Profiling goal stated | 71% | 71% |
| No overly small interval | 100% | 100% |
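The Rprof criteria can be sketched with base R alone; the profiled functions below are made-up workloads chosen so that `by.self` and `by.total` differ visibly:

```r
# Sketch of the Rprof -> summaryRprof workflow (illustrative workload).
inner <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + sqrt(i)   # self time accrues here
  s
}
driver <- function(reps, n) {
  total <- 0
  for (r in seq_len(reps)) total <- total + inner(n)
  total                                    # total time includes inner()
}

prof_file <- tempfile(fileext = ".out")
# interval = 0.01 samples the call stack every 10 ms; much smaller
# intervals add overhead without improving accuracy.
# Recent R versions (4.4+, Unix-alikes) also accept event = "cpu" to
# profile CPU time rather than elapsed time.
Rprof(prof_file, interval = 0.01)
invisible(driver(20, 1e5))
Rprof(NULL)                                # stop profiling before summarising

s <- summaryRprof(prof_file)
# by.self: time attributed to a function's own code;
# by.total: time including everything it calls (driver ranks high here).
head(s$by.self)
head(s$by.total)
```

For an interactive flame graph, the same profile can be viewed with `profvis::profvis()` instead of `summaryRprof()`; the table above suggests the skill currently steers answers toward the base-R tools.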
### bench::press parameter grid with reproducibility

| Criterion | Baseline | With skill |
|---|---:|---:|
| bench::press used | 0% | 100% |
| Parameter grid defined | 0% | 100% |
| set.seed called | 100% | 100% |
| Platform/version recorded | 100% | 100% |
| Relative comparison | 0% | 0% |
| Distribution over single value | 100% | 100% |
| bench::mark inside press | 0% | 100% |
| check = TRUE or equivalence noted | 0% | 100% |
| No I/O in benchmark expressions | 100% | 100% |
| No single-run comparisons | 100% | 100% |
| GC or memory noted | 0% | 100% |
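A `bench::press()` sketch matching the grid criteria, again assuming the `bench` package; the grid values and the two sort methods are illustrative stand-ins:

```r
library(bench)

set.seed(2024)                     # fixed seed: reproducible inputs
r_ver <- R.version.string          # record platform/version with results

res <- bench::press(
  n = c(1e3, 1e4, 1e5),            # parameter grid
  {
    x <- runif(n)
    bench::mark(
      radix = sort(x, method = "radix"),
      shell = sort(x, method = "shell"),
      check = TRUE,                # equivalence of results enforced
      min_iterations = 25          # a distribution, not a single run
    )
  }
)

# Median timing, memory, and GC rate per grid point; for the head-to-head
# ratios the "Relative comparison" criterion asks for, use
# summary(res, relative = TRUE).
print(res[, c("expression", "n", "median", "mem_alloc", "gc/sec")])
```

Since "Relative comparison" scored 0% in both columns, adding `summary(res, relative = TRUE)` to answers is the one habit neither the baseline nor the skill currently produces.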
Version: b74de5e