Automatically instruments source code to collect runtime information such as function calls, branch decisions, variable values, and execution traces while preserving original program semantics. Use when users need to: (1) Add logging or tracing to code for debugging, (2) Collect runtime execution data for analysis, (3) Monitor function calls and control flow, (4) Track variable values during execution, (5) Generate execution traces for testing or profiling. Supports Python, Java, JavaScript, and C/C++ with configurable instrumentation levels.
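The description mentions tracing function calls and return values while preserving program semantics. A minimal sketch of that idea in Python, using a decorator (the helper names here are illustrative, not the skill's actual code):

```python
import functools

def trace_calls(func):
    """Log entry, arguments, and return value without changing behavior."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"CALL {func.__name__} args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        print(f"RETURN {func.__name__} -> {result!r}")
        return result
    return wrapper

@trace_calls
def add(a, b):
    return a + b
```

Because the wrapper forwards arguments and the return value unchanged, the instrumented function behaves identically to the original apart from the added logging.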
Install with Tessl CLI
npx tessl i github:ArabelaTso/Skills-4-SE --skill code-instrumentation-generator
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the key criteria. It provides specific concrete actions, includes natural trigger terms developers would use, explicitly states both what the skill does and when to use it with a numbered list of scenarios, and carves out a distinct niche around code instrumentation. The description uses proper third-person voice throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'collect runtime information such as function calls, branch decisions, variable values, and execution traces' and 'Add logging or tracing', 'Collect runtime execution data', 'Monitor function calls and control flow', 'Track variable values'. Also specifies supported languages. | 3 / 3 |
| Completeness | Clearly answers both what ('instruments source code to collect runtime information...') AND when, with an explicit 'Use when users need to:' clause followed by five specific trigger scenarios. Fully satisfies the completeness requirement. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'logging', 'tracing', 'debugging', 'runtime', 'execution traces', 'function calls', 'variable values', 'profiling', and specific language names (Python, Java, JavaScript, C/C++). Good coverage of terms developers naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on code instrumentation for runtime data collection. The specific focus on 'instrumentation', 'execution traces', 'branch decisions', and 'preserving original program semantics' distinguishes it from general debugging or logging skills. | 3 / 3 |
| **Total** | | **12 / 12 — Passed** |
Implementation — 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill's primary strength is its solid, actionable code examples for instrumentation across multiple languages. However, it suffers from verbosity in the workflow section and repetitive examples, and it lacks validation checkpoints to verify instrumentation correctness. The content would benefit from being split into a concise overview with references to detailed language-specific guides.
Suggestions
- Add explicit validation steps in the workflow (e.g., 'Run instrumented code with test input and verify output matches original program behavior')
- Consolidate the four nearly identical language examples into a single pattern description with a table showing language-specific syntax differences
- Move advanced features, configuration examples, and language-specific details to separate reference files linked from the main skill
- Remove explanatory text in the workflow that describes concepts Claude already knows (e.g., 'Understand the code structure and identify instrumentation points')
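The first suggestion, a semantic-preservation checkpoint, can be sketched as a simple differential check: run the original and instrumented versions on the same inputs and assert identical outputs. The function names below are hypothetical, assuming both versions are available side by side:

```python
def original(x):
    return x * x

def instrumented(x):
    trace = []                                # collected runtime data
    trace.append(("enter", "original", x))
    result = x * x                            # original computation untouched
    trace.append(("return", "original", result))
    return result

# Validation checkpoint: semantics must match on representative inputs.
for test_input in [0, 1, -3, 10]:
    assert instrumented(test_input) == original(test_input)
```

In practice the inputs would come from an existing test suite, and a mismatch should fail loudly so the instrumentation is corrected before any trace data is trusted.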
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the workflow section, which explains concepts Claude already understands (like 'Understand the code structure'). The language-specific examples are repetitive, showing nearly identical patterns four times. | 2 / 3 |
| Actionability | Provides fully executable code examples across four languages with copy-paste-ready instrumentation patterns. The examples are concrete, complete, and demonstrate real instrumentation techniques, including function entry/exit, variable tracking, and branch coverage. | 3 / 3 |
| Workflow Clarity | Steps are listed but lack explicit validation checkpoints. The workflow describes what to do but doesn't include verification steps to confirm the instrumentation is correct or that semantic preservation is actually achieved. Missing feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | Content is reasonably structured with clear sections, but the document is monolithic at ~300 lines. Advanced features, language-specific patterns, and configuration examples could be split into separate reference files. No external file references are provided. | 2 / 3 |
| **Total** | | **9 / 12 — Passed** |
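The Actionability row credits the skill's branch-coverage examples. As an illustration of the general pattern (not the skill's actual code), branch decisions can be recorded by tagging each conditional with an identifier and counting outcomes:

```python
branch_hits = {}

def record_branch(branch_id, taken):
    """Count how often each branch evaluates to each outcome."""
    key = (branch_id, taken)
    branch_hits[key] = branch_hits.get(key, 0) + 1

def classify(n):
    if n >= 0:
        record_branch("classify:n_ge_0", True)
        return "non-negative"
    else:
        record_branch("classify:n_ge_0", False)
        return "negative"

for value in [3, -1, 0]:
    classify(value)
# branch_hits now maps each (branch, outcome) pair to a count
```

A branch whose (id, False) or (id, True) key never appears was never exercised, which is exactly the signal a coverage report surfaces.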
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.