Profile programs at the function/method level to identify performance hotspots, bottlenecks, and optimization opportunities. Records execution time, memory usage, and call frequency for each interval. Generates actionable recommendations and visualizations. Use when users need to (1) analyze program performance, (2) identify slow functions or bottlenecks, (3) optimize execution time or memory usage, (4) profile Python, Java, or C/C++ programs with test cases or workload scenarios, or (5) generate performance reports with flame graphs and recommendations.
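The kind of function-level timing and call-frequency data described above can be gathered in Python with the standard-library `cProfile` module. This is a minimal sketch for illustration only; the skill's own profiling scripts are not shown on this page, so the exact workflow and output format here are assumptions:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Does enough work that the profiler records measurable time
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Summarize per-function cumulative time, like a hotspot report
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The printed report lists each function with its call count and cumulative time, which is the raw material a hotspot analysis or flame graph is built from.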
Install with Tessl CLI
npx tessl i github:ArabelaTso/Skills-4-SE --skill interval-profiling-performance-analyzer87
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific concrete actions, comprehensive trigger terms that users would naturally use, explicit 'Use when' guidance with multiple scenarios, and a clear distinctive niche in performance profiling. The description uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Profile programs at the function/method level', 'identify performance hotspots, bottlenecks', 'Records execution time, memory usage, and call frequency', 'Generates actionable recommendations and visualizations'. | 3 / 3 |
| Completeness | Clearly answers both what (profile programs, record metrics, generate recommendations) AND when with an explicit 'Use when users need to...' clause covering 5 specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'performance', 'hotspots', 'bottlenecks', 'optimization', 'execution time', 'memory usage', 'profile', 'slow functions', 'Python, Java, C/C++', 'flame graphs', 'performance reports'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on function-level profiling with distinct triggers like 'flame graphs', 'call frequency', 'profile Python/Java/C++'. Unlikely to conflict with general code analysis or debugging skills. | 3 / 3 |
| **Total** | | **12 / 12 Passed** |
Implementation
72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with excellent actionability and progressive disclosure. The workflow is clear but could benefit from explicit validation steps after profiling runs. Some content (optimization guidelines, general profiling advice) explains concepts Claude already knows, reducing token efficiency.
Suggestions

- Add validation checkpoints after profiling runs (e.g., 'Verify profile_results.json was created and contains data before proceeding to visualization')
- Remove or condense the 'Optimization Guidelines' section - these are general principles Claude already knows
- Add a brief verification step in the workflow to confirm profiling captured meaningful data before analysis
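The first suggestion, a checkpoint that fails fast when a profiling run produces no usable data, might look like the following sketch. The `profile_results.json` filename comes from the suggestion above; the actual schema of the skill's output is not documented here, so treating it as a non-empty JSON object is an assumption:

```python
import json
import os

def verify_profile_results(path="profile_results.json"):
    """Raise immediately if the profiling run produced no usable data."""
    if not os.path.exists(path):
        raise FileNotFoundError(f"Profiling output missing: {path}")
    with open(path) as fh:
        data = json.load(fh)  # raises on malformed JSON
    if not data:
        raise ValueError(f"Profiling output is empty: {path}")
    return data
```

Running a check like this between the profiling and visualization steps gives the agent a natural retry point instead of passing empty results downstream.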
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient but includes some unnecessary explanation (e.g., 'Profile first, optimize second' and other optimization guidelines that Claude already knows). The troubleshooting section and some notes could be tightened. | 2 / 3 |
| Actionability | Provides fully executable bash commands for all three languages, specific script paths, clear argument explanations, and concrete examples. Commands are copy-paste ready with expected outputs documented. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced (1-6) with good structure, but lack explicit validation checkpoints. No feedback loops for error recovery - if profiling fails or produces unexpected results, there's no 'verify and retry' guidance. | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview, well-signaled one-level-deep references to detailed documentation (profiling-tools.md, optimization-patterns.md), and content appropriately organized into sections for different use cases. | 3 / 3 |
| **Total** | | **10 / 12 Passed** |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.