# dt-obs-logs

> Log queries, filtering, pattern analysis, and log correlation. Search and analyze application and infrastructure logs.
- **Quality:** 57%
- **Impact:** —
- **Evals:** No eval scenarios have been run
- **Validation:** Passed
- **Known issues:** None
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/dt-obs-logs/SKILL.md`

## Quality
### Discovery — 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (log analysis) and lists several relevant capabilities, but it lacks an explicit 'Use when...' clause, which limits its effectiveness for skill selection. The trigger terms are adequate but miss common user phrasings and specific log-related terminology that would improve matching accuracy.
**Suggestions**

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about searching logs, debugging errors, investigating incidents, or analyzing log output.'
- Include more natural trigger terms users would say, such as 'error logs', 'stack traces', 'debug', 'log files', 'syslog', 'tail', 'grep logs', '.log files'.
- Add specific concrete actions beyond generic verbs, e.g., 'parse structured log fields, aggregate error counts, trace requests across services, identify anomalous log patterns'.
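Applied together, those suggestions might yield frontmatter along these lines (illustrative wording only, not the skill's actual description):

```yaml
description: >
  Search, filter, and correlate application and infrastructure logs:
  parse structured log fields, aggregate error counts, trace requests
  across services, and identify anomalous log patterns. Use when the
  user asks about error logs, stack traces, debugging from log files,
  investigating incidents, or analyzing syslog or .log output.
```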
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (logs) and several actions (queries, filtering, pattern analysis, correlation, search, analyze), but these are somewhat generic action words rather than deeply specific operations like 'parse structured log fields' or 'aggregate error rates by service'. | 2 / 3 |
| Completeness | The 'what' is reasonably covered (log queries, filtering, pattern analysis, correlation, search and analyze logs), but there is no explicit 'when' clause or trigger guidance (e.g., 'Use when the user asks about searching logs or debugging application issues'). Per the rubric, a missing 'Use when...' caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'log queries', 'filtering', 'pattern analysis', 'log correlation', 'application logs', 'infrastructure logs', and 'search', but misses common user-facing terms like 'error logs', 'debug', 'stack trace', 'log files', 'syslog', 'tail logs', or specific tools and formats users might mention. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on logs is a reasonably distinct domain, but terms like 'search', 'filtering', and 'pattern analysis' could overlap with data analysis or monitoring skills. Adding more log-specific triggers (e.g., file extensions, tool names, or log-specific scenarios) would improve distinctiveness. | 2 / 3 |
| **Total** | | **8 / 12 — Passed** |
### Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable log-analysis skill with excellent executable DQL examples covering common use cases. Its main weaknesses are moderate verbosity (redundant introductory sections, some over-explanation) and a monolithic structure that would benefit from splitting reference material into separate files. The workflows also need validation checkpoints for common failure scenarios.
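For context, the copy-paste-ready DQL the review praises looks roughly like the sketch below. This is recalled from the Dynatrace Query Language docs, not taken from the skill itself; verify command, field, and function names against the current DQL reference:

```
fetch logs, from: -2h
| filter loglevel == "ERROR"
| summarize errors = count(), by: { dt.entity.host }
| sort errors desc
```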
**Suggestions**

- Remove or consolidate the 'What This Skill Covers' and 'When to Use This Skill' sections — Claude can infer applicability from the content itself.
- Add validation/troubleshooting guidance to workflows: what to check when queries return empty results, how to verify entity names resolve correctly, and how to handle performance issues with large time ranges.
- Extract the 'Key Functions' reference table and 'Common Patterns' section into separate referenced files to reduce the main skill's token footprint.
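The validation-checkpoint suggestion can be sketched as a small wrapper around whatever query client the skill uses. All names here (`query_with_checkpoints`, `run_query`, the stub runner) are hypothetical illustrations, not part of the skill:

```python
def query_with_checkpoints(run_query, dql, time_ranges=("2h", "24h", "7d")):
    """Run a log query, widening the time range when results come back empty.

    run_query: callable taking (dql, timeframe) and returning a list of records.
    Returns (records, timeframe_used); raises if every timeframe is empty.
    """
    for timeframe in time_ranges:
        records = run_query(dql, timeframe)
        if records:
            return records, timeframe
        # Validation checkpoint: empty result, so widen the window and retry.
    raise LookupError(
        "No results in any timeframe; check entity names and filters "
        "before widening further."
    )

# Stubbed runner standing in for a real query client.
def fake_runner(dql, timeframe):
    return ["log line"] if timeframe == "24h" else []

records, used = query_with_checkpoints(fake_runner, "fetch logs")
```

The same shape extends to the other failure modes the review names: a checkpoint for "too many results" would tighten filters instead of widening the window, and an entity-name check would probe the entity before running the full query.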
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary sections like 'What This Skill Covers' and 'When to Use This Skill', which largely duplicate each other and tell Claude things it can infer. The 'Key Concepts' section explains basic data-model fields that could be more terse. However, the code examples themselves are lean and the reference tables are efficient. | 2 / 3 |
| Actionability | Every workflow and pattern includes fully executable DQL queries that are copy-paste ready. The examples cover a wide range of real scenarios (error-rate calculation, JSON parsing, pattern analysis, content search) with concrete field names and functions. | 3 / 3 |
| Workflow Clarity | The three core workflows list clear steps and include executable examples, but they lack validation checkpoints. There is no guidance on what to do when queries return no results, return too many results, or when entity names don't resolve — common failure modes that should have feedback loops. | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear headers and sections, and references related skills at the end. However, at ~180 lines it is long and monolithic — the 'Common Patterns' section and 'Key Functions' reference could be split into separate files. The related skills are mentioned but not linked with paths. | 2 / 3 |
| **Total** | | **9 / 12 — Passed** |
### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure: 11 / 11 checks passed. No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.