This skill enables Claude to analyze network latency and optimize request patterns within an application. It identifies bottlenecks and suggests improvements for faster, more efficient network communication. Use this skill when the user asks to "analyze network latency" or "optimize request patterns", or when performance issues stem from network requests. It focuses on five areas: serial requests that can be parallelized, opportunities for request batching, connection pooling improvements, timeout configuration adjustments, and DNS resolution enhancements, and it provides concrete suggestions for reducing latency in each.
## Install with Tessl CLI

```
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill analyzing-network-latency68
```
## Does it follow best practices?

If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```
npx tessl skill review --optimize ./path/to/skill
```

## Agent success when using this skill
Validation for skill structure
### Request parallelization and batching

| Criterion | Without context | With context |
| --- | --- | --- |
| All requests identified | 100% | 100% |
| Serial pattern flagged | 100% | 100% |
| Parallelization recommended | 100% | 100% |
| Specific calls grouped | 100% | 100% |
| Batching opportunity identified | 0% | 0% |
| Batching recommendation given | 0% | 0% |
| Latency anomaly noted | 100% | 100% |
| Concrete code or pseudocode | 100% | 100% |
| Expected latency improvement | 100% | 100% |
| Dependency ordering respected | 100% | 100% |
Without context: $0.1702 · 1m 29s · 8 turns · 9 in / 3,665 out tokens
With context: $0.3491 · 2m · 16 turns · 47 in / 4,839 out tokens
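A minimal sketch of the technique this scenario evaluates: converting independent serial awaits into a single `Promise.all`. The `fetchUser`, `fetchOrders`, and `fetchInventory` helpers are hypothetical stand-ins for real API calls, simulated here with timers:

```javascript
// Stand-ins for real API calls; each simulates ~100 ms of network latency.
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));

const fetchUser = () => delay(100, { id: 1 });
const fetchOrders = () => delay(100, ["order-1"]);
const fetchInventory = () => delay(100, { stock: 5 });

async function serial() {
  const user = await fetchUser();           // ~100 ms
  const orders = await fetchOrders();       // +~100 ms
  const inventory = await fetchInventory(); // +~100 ms, ~300 ms total
  return { user, orders, inventory };
}

async function parallel() {
  // The three calls share no data dependencies, so they can start together;
  // total latency is bounded by the slowest call (~100 ms) instead of the sum.
  const [user, orders, inventory] = await Promise.all([
    fetchUser(),
    fetchOrders(),
    fetchInventory(),
  ]);
  return { user, orders, inventory };
}
```

Both variants keep the same return shape, which is what the "dependency ordering respected" and interface-preservation criteria elsewhere on this page ask for.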
### Connection pooling and timeout configuration

| Criterion | Without context | With context |
| --- | --- | --- |
| Requests enumerated | 100% | 100% |
| Connection overhead identified | 100% | 100% |
| Connection pooling recommended | 100% | 100% |
| Pooling code example | 100% | 100% |
| Timeout problem identified | 0% | 100% |
| Timeout recommendation given | 0% | 100% |
| Timeout code example | 0% | 100% |
| Latency pattern analysis | 100% | 100% |
| Concrete improvement estimate | 100% | 100% |
| No deprecated libraries | 100% | 100% |
Without context: $0.2168 · 1m 37s · 10 turns · 11 in / 4,647 out tokens
With context: $0.4668 · 2m 25s · 22 turns · 55 in / 6,846 out tokens
### DNS resolution and multi-host latency analysis

| Criterion | Without context | With context |
| --- | --- | --- |
| All hosts enumerated | 100% | 100% |
| DNS overhead identified | 100% | 100% |
| DNS enhancement recommended | 0% | 100% |
| Per-host latency patterns noted | 62% | 75% |
| Serial pattern across hosts flagged | 100% | 100% |
| Parallelization across hosts | 100% | 100% |
| Connection reuse per host | 100% | 100% |
| Concrete implementation example | 100% | 100% |
| Anomaly or bottleneck called out | 100% | 62% |
| Expected improvement described | 100% | 100% |
Without context: $0.2248 · 1m 36s · 8 turns · 9 in / 4,551 out tokens
With context: $0.3673 · 2m 24s · 17 turns · 52 in / 5,802 out tokens
### Parallelization via async code modification

| Criterion | Without context | With context |
| --- | --- | --- |
| Serial pattern identified | 100% | 100% |
| Independence stated | 100% | 100% |
| Parallel execution used | 100% | 100% |
| No dependency violations | 100% | 100% |
| Connection/session reuse | 100% | 100% |
| Timeout preserved or improved | 100% | 100% |
| Same public interface | 100% | 100% |
| Modified code file produced | 100% | 100% |
| Quantitative latency estimate | 100% | 100% |
| Bottleneck explanation | 100% | 100% |
Without context: $0.2319 · 1m 28s · 16 turns · 17 in / 2,986 out tokens
With context: $0.4176 · 1m 53s · 25 turns · 170 in / 4,833 out tokens
### Latency analysis and anomaly detection from trace data

| Criterion | Without context | With context |
| --- | --- | --- |
| All services enumerated | 100% | 100% |
| Per-service latency breakdown | 100% | 100% |
| Outlier identified: billing spike | 100% | 100% |
| Outlier identified: analytics spike | 100% | 100% |
| Duplicate call pattern flagged | 100% | 100% |
| Parallelization opportunity noted | 100% | 100% |
| Recommendations ranked or prioritized | 100% | 100% |
| Quantitative savings estimates | 100% | 100% |
| Batching recommendation | 100% | 100% |
| Timeout recommendation | 100% | 100% |
Without context: $0.4515 · 2m 28s · 18 turns · 19 in / 8,091 out tokens
With context: $0.4542 · 2m 23s · 19 turns · 52 in / 7,168 out tokens
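The trace-analysis criteria (per-service breakdown, outlier identification) can be approximated with a small aggregation over flat span records. The record shape `{ service, durationMs }` is an assumption for illustration, not a format the skill mandates:

```javascript
// Per-service latency breakdown from flat trace spans: count, mean, and a
// rough p95 per service. Spikes then stand out as services whose p95 is far
// above their mean.
function latencyStats(spans) {
  const byService = new Map();
  for (const { service, durationMs } of spans) {
    if (!byService.has(service)) byService.set(service, []);
    byService.get(service).push(durationMs);
  }
  const stats = {};
  for (const [service, samples] of byService) {
    const sorted = [...samples].sort((a, b) => a - b);
    const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
    const mean = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
    stats[service] = { count: sorted.length, mean, p95 };
  }
  return stats;
}
```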
### Request batching for bulk API operations

| Criterion | Without context | With context |
| --- | --- | --- |
| Per-item pattern identified | 100% | 100% |
| Batch endpoint used | 100% | 100% |
| Batch size limit respected | 100% | 100% |
| Batch size explicit | 100% | 100% |
| Session/connection reuse | 100% | 100% |
| Timeout configuration retained | 100% | 100% |
| Same function signature | 100% | 100% |
| Call count: before | 100% | 100% |
| Call count: after | 100% | 100% |
| Latency improvement estimate | 100% | 100% |
Without context: $0.2478 · 1m 35s · 16 turns · 16 in / 3,635 out tokens
With context: $0.3158 · 1m 55s · 17 turns · 17 in / 4,278 out tokens
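The shape this batching scenario rewards, keeping the original function signature while replacing N per-item calls with ceil(N / batch size) bulk calls, might look like the sketch below. `fetchUsersBatch` stands in for a hypothetical bulk endpoint, and the 50-item limit is an assumed API constraint:

```javascript
// Assumed limit of the hypothetical bulk endpoint.
const MAX_BATCH_SIZE = 50;

function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// Stand-in for one HTTP request to a bulk endpoint (e.g. POST /users/batch).
async function fetchUsersBatch(ids) {
  return ids.map((id) => ({ id }));
}

// Same signature as the original per-item version: ids in, users out.
// 120 ids become 3 calls instead of 120.
async function fetchUsers(ids) {
  const batches = chunk(ids, MAX_BATCH_SIZE);
  const results = await Promise.all(batches.map(fetchUsersBatch));
  return results.flat();
}
```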
### Holistic network optimization in Node.js

| Criterion | Without context | With context |
| --- | --- | --- |
| All requests enumerated | 100% | 100% |
| Serial pattern identified | 100% | 100% |
| Parallelization recommended | 100% | 100% |
| Independence argued correctly | 100% | 100% |
| Connection pooling recommended | 0% | 100% |
| Timeout configuration recommended | 100% | 100% |
| Parallel execution code example | 100% | 100% |
| Connection pooling code example | 0% | 100% |
| No dependency violation | 100% | 100% |
| Quantitative latency improvement | 100% | 100% |
| Optimization categories coverage | 50% | 100% |
Without context: $0.1657 · 1m 25s · 10 turns · 11 in / 3,027 out tokens
With context: $0.4015 · 2m 21s · 21 turns · 52 in / 6,299 out tokens
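For the timeout-configuration criterion in this Node.js scenario, one idiomatic option is the built-in `fetch` with `AbortSignal.timeout` (Node 18+). A sketch, with a placeholder URL:

```javascript
// Per-request timeout without manual timers: AbortSignal.timeout aborts the
// fetch if the deadline passes. The URL and 1500 ms default are placeholders.
async function getWithTimeout(url, timeoutMs = 1500) {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return await res.json();
  } catch (err) {
    // The timeout surfaces as a "TimeoutError"; callers can retry or fall back.
    if (err.name === "TimeoutError") throw new Error(`${url} exceeded ${timeoutMs} ms`);
    throw err;
  }
}
```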
### Log-based latency anomaly detection

| Criterion | Without context | With context |
| --- | --- | --- |
| All services enumerated | 100% | 100% |
| Per-service latency stats | 100% | 100% |
| Payment spike anomaly flagged | 100% | 100% |
| Catalog slow pattern flagged | 100% | 100% |
| Serial call pattern identified | 100% | 100% |
| Parallelization recommended | 100% | 100% |
| Timeout recommendation | 100% | 100% |
| Recommendations ranked | 100% | 100% |
| Quantitative savings estimate | 100% | 100% |
| Connection reuse mentioned | 50% | 100% |
Without context: $0.3110 · 2m 9s · 11 turns · 12 in / 6,059 out tokens
With context: $0.5634 · 3m 30s · 25 turns · 25 in / 9,187 out tokens
### Bulk API batching with connection management

| Criterion | Without context | With context |
| --- | --- | --- |
| Per-user pattern identified | 100% | 100% |
| Batch endpoint used | 100% | 100% |
| Batch size limit respected | 100% | 100% |
| Batch size as named constant | 100% | 100% |
| Session/connection reuse | 0% | 0% |
| Timeout configuration retained | 100% | 100% |
| Same function signature | 100% | 100% |
| Call count: before | 100% | 100% |
| Call count: after | 100% | 100% |
| Latency improvement estimate | 100% | 100% |
Without context: $0.1881 · 1m 22s · 12 turns · 13 in / 2,402 out tokens
With context: $0.3655 · 1m 53s · 22 turns · 367 in / 4,110 out tokens
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.