Skill description under review:

> This skill should be used when the user asks to "chat with AI", "ask Olly", "ask the agent", "send message to AI", "continue a chat", "follow up on chat", "get artifact", "download artifact", "list artifacts", "retrieve generated content", "AI-generated charts", "AI analysis", "conversational observability", "natural language query", or wants to interact with the Coralogix Observability Agent (Olly) using the cx CLI.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/cx-olly/SKILL.md`

## Quality
### Discovery: 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is heavily skewed toward trigger terms while almost completely neglecting to explain what the skill actually does. It reads as a list of when-to-use phrases without describing concrete capabilities like sending queries, retrieving observability data, managing chat sessions, or downloading generated artifacts. The Coralogix/Olly specificity helps with distinctiveness but many trigger terms are too generic.
#### Suggestions

- Add concrete capability descriptions before the trigger list, e.g., "Sends natural language queries to the Coralogix Observability Agent (Olly) via the cx CLI, manages chat sessions, retrieves AI-generated analysis artifacts including charts and reports."
- Narrow overly generic trigger terms like "chat with AI" and "ask the agent" by qualifying them, e.g., "chat with Olly AI" or "ask the observability agent", to reduce the risk of conflict with other AI/chat skills.
- Restructure to clearly separate "what it does" from "when to use it" with an explicit "Use when..." clause after the capability description.
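Putting those three suggestions together, a restructured description might look like the sketch below. This is illustrative only: the frontmatter key names and the exact wording are assumptions, not the skill's actual file.

```yaml
---
name: cx-olly
description: >
  Sends natural language queries to the Coralogix Observability Agent (Olly)
  via the cx CLI, manages chat sessions, and retrieves AI-generated analysis
  artifacts such as charts and reports. Use when the user wants to chat with
  Olly AI, ask the observability agent a question, continue or follow up on
  an existing chat, or list, get, or download generated artifacts.
---
```

Note how the capability sentence comes first and the qualified trigger phrases follow in an explicit "Use when..." clause, addressing all three suggestions at once.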
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists no concrete actions or capabilities. It only describes trigger phrases without explaining what the skill actually does (e.g., what happens when you "chat with AI" or "get artifact"). There are no specific operations like "sends messages to the Coralogix Observability Agent and retrieves analysis results". | 1 / 3 |
| Completeness | The description answers "when" extensively but almost entirely fails to answer "what does this do". There is no explanation of the skill's capabilities, outputs, or concrete actions. The rubric states a missing "what" or "when" should result in a low score, and the "what" is essentially absent beyond vague references to interacting with "Olly". | 1 / 3 |
| Trigger Term Quality | The description is essentially a comprehensive list of trigger terms and natural phrases users would say, including variations like "chat with AI", "ask Olly", "ask the agent", "send message to AI", "continue a chat", "get artifact", "download artifact", "list artifacts", "natural language query", and "AI analysis". These cover many natural user phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | The mention of "Coralogix Observability Agent (Olly)" and "cx CLI" provides some distinctiveness, but generic terms like "chat with AI", "ask the agent", "AI analysis", and "natural language query" are extremely broad and could easily conflict with any other AI chat or agent skill. | 2 / 3 |
| **Total** | | **7 / 12 (Passed)** |
### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with clear executable examples and good structural organization via tables and headers. Its main weaknesses are minor verbosity in introductory/explanatory text and the lack of validation checkpoints or error-handling guidance in workflows. The related skills references at the end are a nice touch for navigation.
#### Suggestions

- Add brief error handling notes to workflows (e.g., what to do if timeout is exceeded, how to verify a chat-id is valid, or what error output looks like).
- Trim the introductory paragraph and artifact description text — Claude doesn't need to be told what an observability agent or artifacts are; jump straight to commands.
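The error-handling suggestion above could be satisfied with a short wrapper like the following sketch. Only `timeout` (GNU coreutils) is a real command here; the final invocation is a placeholder standing in for the skill's actual cx chat command, which is an assumption, not the documented CLI.

```shell
#!/bin/sh
# Sketch: wrap a potentially long-running chat call in a timeout and report
# failures distinctly. Substitute the real cx command for the placeholder below.
run_with_timeout() {
  secs="$1"; shift
  timeout "$secs" "$@"
  rc=$?
  if [ "$rc" -eq 124 ]; then
    # GNU timeout exits 124 when the time limit is hit
    echo "error: command timed out after ${secs}s" >&2
  elif [ "$rc" -ne 0 ]; then
    echo "error: command failed with exit code $rc" >&2
  fi
  return "$rc"
}

# Placeholder standing in for e.g. a `cx` chat invocation:
run_with_timeout 30 echo "response from Olly"   # prints: response from Olly
```

A skill could include a few lines like this so the agent knows how to distinguish a timeout from an invalid chat-id or other failure, rather than leaving both cases undefined.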
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary explanations (e.g., describing what Olly can do in the intro, explaining what artifacts are). The artifact output behavior section and some flag descriptions could be tighter. However, most content earns its place. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste ready bash commands for every operation. Includes concrete examples for starting chats, continuing conversations, listing/getting artifacts, and piping to jq for scripting. Flag usage is clearly demonstrated. | 3 / 3 |
| Workflow Clarity | The "Investigate an issue" workflow shows a clear multi-step sequence, but there are no validation checkpoints or error handling guidance. What happens if the chat ID is invalid? What if artifact download fails? For an agent interaction skill these gaps are less critical than for destructive operations, but the workflow could benefit from noting how to verify responses or handle timeouts. | 2 / 3 |
| Progressive Disclosure | Content is well-structured with clear sections and tables, and references related skills at the end. However, with no bundle files, everything is inline in a single file. The content is moderately long (~120 lines) and the artifact behavior details and workflow examples could potentially be split out, though it's borderline acceptable for a single-file skill. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
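The "piping to jq for scripting" pattern credited under Actionability can be sketched as follows. The JSON payload and its field names (`chatId`, `artifacts`) are illustrative assumptions, not the cx CLI's documented output schema; substitute the keys the command actually emits.

```shell
# Hypothetical machine-readable response, standing in for real cx output:
response='{"chatId":"abc123","artifacts":[{"id":"a1","type":"chart"}]}'

# Extract a single field with jq -r (raw output, no quotes):
chat_id=$(printf '%s' "$response" | jq -r '.chatId')
echo "$chat_id"                                      # prints: abc123

# Iterate over an array of artifacts:
printf '%s' "$response" | jq -r '.artifacts[].id'    # prints: a1
```

Capturing the chat id this way is what makes the "continue a chat" and "get artifact" follow-up commands scriptable.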
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

#### Skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11 (Passed)** |