R benchmarking, profiling, and performance analysis with reproducibility and measurement rigor. Use when timing R code execution, profiling with Rprof or profvis, measuring memory allocations, comparing function performance, or optimizing bottlenecks—e.g., "benchmark R function", "profvis profiling", "microbenchmark comparison", "performance analysis", "memory profiling".
Produce production-grade R benchmarking guidance and code with reproducibility and measurement rigor. The aim is measurements we can trust: selecting the right tool (base timing, microbenchmarking, or profiling) and explaining why that choice matters.
| Goal | Tool | Notes |
|---|---|---|
| Macro timing (end-to-end) | system.time() or proc.time() | Simple, no dependencies |
| Microbenchmarks + allocations | bench::mark() | Preferred; use bench::press() for parameter grids |
| Legacy/simple comparisons | microbenchmark::microbenchmark() or rbenchmark::benchmark() | Use when bench is unavailable |
| Profiling hotspots | Rprof() + summaryRprof() | Use profvis() for interactive exploration |
| Script instrumentation | tictoc::tic()/toc() | Nested timing checkpoints |
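A minimal sketch of the macro-timing row above, using only base R (the function `slow_sum` is a hypothetical workload for illustration):

```r
# Macro timing with base R: no packages needed.
# "elapsed" is wall-clock time; "user" + "system" is CPU time.
slow_sum <- function(n) {
  total <- 0
  for (i in seq_len(n)) total <- total + sqrt(i)
  total
}

t <- system.time(result <- slow_sum(1e6))
print(t["elapsed"])   # wall-clock seconds for the call

# proc.time() deltas suit multi-step scripts:
t0 <- proc.time()
invisible(slow_sum(1e6))
elapsed <- (proc.time() - t0)["elapsed"]
```

system.time() is coarse (millisecond resolution at best) and runs the expression once, so it is only appropriate for end-to-end timing, not for comparing fast functions.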
BEFORE PROCEEDING, clarify your goal. This determines everything that follows.
You MUST provide all of the following without exception:
For profiling: include how to summarize and visualize results. Missing this wastes the profiling effort.
For microbenchmarks: include guidance on iterations, GC filtering, and result equivalence checks. Ignoring GC effects produces misleading rankings.
Default to bench::mark() for microbenchmarks unless the user explicitly requires legacy tools.
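A profiling sketch covering the summarize-and-visualize requirement above, using base R's Rprof (the `work` function is a hypothetical hotspot; the profvis line assumes that package is installed):

```r
# Sampling profiler from base R; writes stack samples to a temp file.
prof_file <- tempfile(fileext = ".out")

work <- function() {
  x <- matrix(rnorm(1e6), ncol = 500)
  for (i in 1:10) invisible(cor(x))  # deliberate hotspot
}

Rprof(prof_file, interval = 0.01)  # sample the call stack every 10 ms
work()
Rprof(NULL)                        # stop profiling

# Summarize: by.self ranks functions by time spent in their own code,
# by.total attributes time to callers as well.
s <- summaryRprof(prof_file)
head(s$by.self)

# Interactive flame graph (assumes profvis is installed):
# profvis::profvis(work())
```

The sampling interval bounds resolution: functions faster than the interval may never be sampled, so give the workload enough total runtime to collect dozens of samples.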
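A microbenchmark sketch illustrating the iteration, GC-filtering, and equivalence requirements above (assumes the bench package is installed; the two expressions are hypothetical alternatives):

```r
# Microbenchmark with bench::mark(); both expressions must agree.
library(bench)

x <- runif(1e4)

# check = TRUE (the default) compares results with all.equal(), so we
# rank equivalent computations rather than different answers.
res <- bench::mark(
  loop    = { s <- 0; for (v in x) s <- s + v; s },
  builtin = sum(x),
  iterations = 200,   # fixed iteration count for comparability
  filter_gc  = TRUE,  # drop iterations interrupted by garbage collection
  check      = TRUE
)
res[, c("expression", "median", "mem_alloc", "n_gc")]
```

filter_gc matters because an iteration that triggers GC pays a cost unrelated to the expression being measured; leaving such iterations in can flip rankings between close competitors. For parameter grids, wrap the same call in bench::press().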