Performance analysis, profiling techniques, bottleneck identification, and optimization strategies for code and systems. Use when the user needs to improve performance, reduce resource usage, or identify and fix performance bottlenecks.
You are a performance optimization expert. Your role is to help users identify bottlenecks, optimize code, and improve system performance.
```bash
# CPU profiling
python -m cProfile -o output.prof script.py
python -m cProfile -s cumtime script.py

# Visualize with snakeviz
pip install snakeviz
snakeviz output.prof

# Line profiler
pip install line-profiler
kernprof -l -v script.py

# Memory profiling
pip install memory-profiler
python -m memory_profiler script.py
```
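cProfile can also be driven from inside a program via the stdlib `pstats` module, which is handy when you only want to profile one code path. A minimal sketch — the `slow_sum` workload is made up for illustration:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the profiler has something to measure
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the 5 most expensive entries by cumulative time
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumtime').print_stats(5)
report = stream.getvalue()
print(report)
```

This keeps profiling scoped to the region between `enable()` and `disable()` instead of the whole script.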
```bash
# Node.js profiling
node --prof app.js
node --prof-process isolate-*.log

# Chrome DevTools: run with the --inspect flag
node --inspect app.js
```
```bash
# Time execution
time script.sh

# Detailed timing comparison
hyperfine 'command1' 'command2'

# Profile a bash script line by line (timestamps in the trace output)
PS4='+ $(date "+%s.%N")\011 ' bash -x script.sh
```
```bash
# CPU usage
top
htop
mpstat 1

# I/O profiling
iotop
iostat -x 1

# System calls
strace -c command
```
**Problem:** Using O(n²) when O(n) or O(n log n) exists.

```python
# Bad: O(n²) overall
for item in list1:
    if item in list2:  # O(n) lookup
        process(item)

# Good: O(n) overall
set2 = set(list2)  # O(n) conversion, done once
for item in list1:
    if item in set2:  # O(1) lookup
        process(item)
```
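As a sanity check, the two approaches can be compared directly with `timeit`. A sketch using synthetic data (the list sizes are chosen arbitrarily):

```python
import timeit

list1 = list(range(2000))
list2 = list(range(1000, 3000))

def quadratic():
    return [x for x in list1 if x in list2]   # O(n) scan per item

def linear():
    set2 = set(list2)                         # one O(n) conversion
    return [x for x in list1 if x in set2]    # O(1) lookup per item

assert quadratic() == linear()                # same result either way

slow = timeit.timeit(quadratic, number=20)
fast = timeit.timeit(linear, number=20)
print(f'list lookup: {slow:.4f}s  set lookup: {fast:.4f}s')
```

The gap grows with input size; on a few thousand items the set version is typically orders of magnitude faster.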
**Problem:** Nested loops and redundant iterations.

```python
# Bad: Multiple passes over the data
result = [x for x in data if condition1(x)]
result = [x for x in result if condition2(x)]
result = [transform(x) for x in result]

# Good: Single pass
result = [
    transform(x)
    for x in data
    if condition1(x) and condition2(x)
]
```
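A quick way to convince yourself the fused comprehension is equivalent is to run both forms on toy data. The predicates and `transform` below are placeholders invented for the demo:

```python
data = range(100)

def condition1(x):
    return x % 2 == 0      # keep evens

def condition2(x):
    return x > 10          # keep values above 10

def transform(x):
    return x * x

# Multiple passes
step = [x for x in data if condition1(x)]
step = [x for x in step if condition2(x)]
multi = [transform(x) for x in step]

# Single pass
single = [transform(x) for x in data if condition1(x) and condition2(x)]

print(multi == single)  # same output, one traversal instead of three
```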
**Problem:** Too many small reads/writes.

```python
# Bad: Many small writes
for line in data:
    file.write(line + '\n')

# Good: Batch writes
file.writelines(f'{line}\n' for line in data)

# Better: Buffered writes
with open('file.txt', 'w', buffering=1024*1024) as f:
    f.writelines(f'{line}\n' for line in data)
```
**Problem:** Loading everything into memory.

```python
# Bad: Load the entire file
with open('huge.txt') as f:
    data = f.read()
process(data)

# Good: Stream/iterate line by line
with open('huge.txt') as f:
    for line in f:
        process(line)
```
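The streaming pattern pairs naturally with generators. A runnable sketch in which a small temp file stands in for `huge.txt` (the uppercase "processing" is just for the demo):

```python
import os
import tempfile

def stream_upper(path):
    # Stream the file: only one line is held in memory at a time
    with open(path) as f:
        for line in f:
            yield line.rstrip('\n').upper()

# Stand-in for huge.txt
path = os.path.join(tempfile.mkdtemp(), 'huge.txt')
with open(path, 'w') as f:
    f.write('alpha\nbeta\ngamma\n')

result = list(stream_upper(path))
print(result)
```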
**Problem:** N+1 queries and missing indexes.

```sql
-- Bad: N+1 problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM posts WHERE user_id = ?;

-- Good: JOIN
SELECT users.*, posts.*
FROM users
LEFT JOIN posts ON users.id = posts.user_id;

-- Also add indexes
CREATE INDEX idx_posts_user_id ON posts(user_id);
```
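The same fix can be demonstrated end to end with the stdlib `sqlite3` module; the schema and rows below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    CREATE INDEX idx_posts_user_id ON posts(user_id);
    INSERT INTO users VALUES (1, 'ada'), (2, 'alan');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'hello');
''')

# One JOIN instead of 1 + N separate queries
rows = conn.execute('''
    SELECT users.name, posts.title
    FROM users
    LEFT JOIN posts ON users.id = posts.user_id
    ORDER BY users.id, posts.id
''').fetchall()
print(rows)
```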
```python
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_function(n):
    # Computed result is cached; repeat calls are free
    return complex_calculation(n)
```
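To see the cache doing its work, count actual computations. A sketch in which squaring stands in for `complex_calculation`:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def expensive(n):
    global calls
    calls += 1          # counts real computations, not cache hits
    return n * n

results = [expensive(7) for _ in range(5)]
print(results, calls)   # five identical results, only one computation
```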
```python
# Bad: Creates the full list in memory
squares = [x**2 for x in range(1000000)]

# Good: Generator (lazy evaluation)
squares = (x**2 for x in range(1000000))
```
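The memory difference is easy to observe with `sys.getsizeof` (exact byte counts vary by platform; the generator stays constant-size regardless of range):

```python
import sys

squares_list = [x ** 2 for x in range(100_000)]
squares_gen = (x ** 2 for x in range(100_000))

list_size = sys.getsizeof(squares_list)  # hundreds of KB of pointers alone
gen_size = sys.getsizeof(squares_gen)    # a small fixed-size object

print(list_size, gen_size)
total = sum(squares_gen)                 # consuming it yields the same values
```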
```python
import numpy as np

# Bad: Python-level loop
result = [x * 2 + 1 for x in data]

# Good: Vectorized
result = np.array(data) * 2 + 1
```
```python
from multiprocessing import Pool

# Process items in parallel across 4 worker processes
with Pool(4) as p:
    results = p.map(process_item, items)
```
```python
from numba import jit

@jit
def fast_function(x, y):
    # Compiled to machine code on first call
    return x ** 2 + y ** 2
```
```python
# Reuse connections instead of opening a new one per request
pool = ConnectionPool(min=5, max=20)
```
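`ConnectionPool` above is illustrative rather than a specific library. The core idea can be sketched with a stdlib `queue.Queue`; this toy version skips the timeouts and connection validation that real pools handle:

```python
import queue

class MiniPool:
    """Toy connection pool: illustrates the pattern, not production code."""

    def __init__(self, factory, size):
        self._conns = queue.Queue()
        for _ in range(size):
            self._conns.put(factory())   # open all connections up front

    def acquire(self):
        return self._conns.get()         # blocks if every connection is in use

    def release(self, conn):
        self._conns.put(conn)            # return it to the pool, don't close

# Demo with a stand-in "connection" factory that records how many were opened
made = []
pool = MiniPool(lambda: made.append(1) or object(), size=3)

c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
pool.release(c2)
print(len(made))   # connections were reused, none opened after startup
```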
```python
import timeit

# Run the function many times for a stable measurement
time = timeit.timeit(
    'function()',
    setup='from __main__ import function',
    number=1000
)

# Compare alternatives
times = {
    'method1': timeit.timeit('method1()', ...),
    'method2': timeit.timeit('method2()', ...),
}
```
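`timeit` also accepts callables directly, which avoids the setup-string dance. A sketch comparing two hypothetical string-building strategies:

```python
import timeit

def concat_plus():
    s = ''
    for i in range(200):
        s += str(i)        # repeated concatenation
    return s

def concat_join():
    return ''.join(str(i) for i in range(200))

assert concat_plus() == concat_join()   # measure only if both are correct

times = {
    'plus': timeit.timeit(concat_plus, number=500),
    'join': timeit.timeit(concat_join, number=500),
}
print(times)
```

Always verify equivalence before comparing timings; a fast wrong answer is not an optimization.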
```python
# Use generators instead of lists
def read_large_file(file):
    for line in file:
        yield process(line)

# Use __slots__ to cut per-instance memory
class Point:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y
```
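The effect of `__slots__` is directly observable: slotted instances have no per-instance `__dict__`, which is where most of the memory goes. A sketch:

```python
class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class SlotPoint:
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = SlotPoint(1, 2)
has_dict = hasattr(p, '__dict__')   # False: attributes live in fixed slots
print(has_dict, p.x, p.y)
```

The trade-off: slotted classes cannot gain arbitrary new attributes at runtime.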
```python
# Python memory profiler (run the script with: python -m memory_profiler)
@profile
def my_function():
    pass

# Check reference counts
import sys
sys.getrefcount(object)
```
```bash
# Avoid unnecessary commands
# Bad
cat file | grep pattern
# Good
grep pattern file

# Use built-ins when possible
# Bad
result=$(date +%s)
# Good (in bash)
printf -v result '%(%s)T' -1

# Process files in parallel
find . -name "*.txt" | xargs -P 4 -I {} process {}
```
Set clear performance targets before you optimize. And remember: premature optimization is the root of all evil. Always profile first, optimize the actual bottleneck, then measure the improvement.