
integrate

Add Olakai monitoring to existing AI code — wrap your LLM client, configure custom KPIs, and validate the integration end-to-end

Score: 69

Quality: 65% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./content/olakai/skills/integrate/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on specificity and distinctiveness due to the named product (Olakai) and concrete actions listed. However, it lacks an explicit 'Use when...' clause, which caps completeness, and uses second person voice ('wrap your LLM client') which is penalized. Trigger term coverage could be broader to capture users who might describe their need without knowing the product name.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user wants to add Olakai monitoring, observability, or tracing to their AI/LLM application.'
- Switch from second person ('wrap your LLM client') to third person ('wraps the LLM client') to match the expected voice.
- Include additional natural trigger terms like 'observability', 'tracing', 'LLM logging', or 'AI analytics' to improve discoverability.
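Applied together, the suggestions above might yield frontmatter along these lines. This is a sketch only: the field layout follows the common SKILL.md frontmatter convention, and the exact wording is illustrative, not the maintainer's actual text.

```yaml
---
name: integrate
description: >
  Adds Olakai monitoring to existing AI code: wraps the LLM client,
  configures custom KPIs, and validates the integration end-to-end.
  Use when the user wants to add Olakai monitoring, observability,
  tracing, LLM logging, or AI analytics to their AI/LLM application.
---
```

Note the third-person voice ('wraps the LLM client') and the explicit 'Use when...' clause carrying the extra trigger terms.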

Dimension scores

- Specificity (3/3): Lists multiple specific concrete actions: 'wrap your LLM client', 'configure custom KPIs', and 'validate the integration end-to-end'. These are clear, actionable steps.
- Completeness (2/3): Clearly answers 'what' (add Olakai monitoring by wrapping LLM client, configuring KPIs, validating integration), but lacks an explicit 'Use when...' clause or equivalent trigger guidance for when Claude should select this skill.
- Trigger Term Quality (2/3): Includes relevant terms like 'Olakai', 'monitoring', 'LLM client', 'KPIs', and 'integration', but misses common variations users might say such as 'observability', 'tracing', 'logging', 'AI monitoring', or 'LLM analytics'. The product name 'Olakai' is a strong trigger but limits discoverability for generic monitoring requests.
- Distinctiveness / Conflict Risk (3/3): The specific product name 'Olakai' combined with the specific domain of LLM monitoring creates a clear niche that is unlikely to conflict with other skills.

Total: 10 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels in actionability and workflow clarity, providing executable code, concrete CLI commands, and a thorough validation flow with error recovery. However, it is significantly too long and verbose, explaining motivational context Claude doesn't need, repeating patterns across languages/frameworks, and inlining content that should be in separate referenced files. Trimming redundancy and splitting into focused sub-documents would dramatically improve token efficiency.

Suggestions

- Remove the 'Why Custom KPIs Are Essential' section entirely — Claude doesn't need motivation, just instructions on how to configure KPIs.
- Move framework-specific integrations (Next.js, FastAPI), edge cases (streaming, error handling, non-OpenAI providers), and the KPI formula reference into separate referenced files to reduce the main skill to ~150 lines.
- Eliminate duplicate code examples — show one language (e.g., TypeScript) inline and reference a companion file for the Python equivalent, rather than showing both everywhere.
- Remove the 'Quick Reference' section at the end, which duplicates the Quick Start section almost verbatim.
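The 'wrap your LLM client' step the skill centers on follows a standard decorator/proxy pattern. Below is a minimal, generic sketch in Python; every name in it (`monitor_llm_call`, the `report` callable, the stub `complete` function) is hypothetical and is not Olakai's actual SDK API, which this review does not reproduce.

```python
import functools
import time

def monitor_llm_call(report):
    """Wrap an LLM-call function so every call reports latency and outcome.

    `report` is any callable accepting a dict of metrics; a real
    integration would forward these to a monitoring backend instead
    of, as in this sketch, appending them to a list.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "error"
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                report({
                    "call": fn.__name__,
                    "latency_ms": (time.perf_counter() - start) * 1000,
                    "status": status,
                })
        return wrapper
    return decorator

# Usage with a stand-in for a real LLM client method:
events = []

@monitor_llm_call(report=events.append)
def complete(prompt):
    return f"echo: {prompt}"

complete("hello")
```

The point of the pattern is that the call site does not change: the wrapped `complete` behaves exactly like the original, while metrics are emitted as a side effect.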

Dimension scores

- Conciseness (1/3): The skill is extremely verbose at ~450+ lines. It explains concepts Claude already knows (what KPIs are, why monitoring matters, what streaming is), includes a 'Why Custom KPIs Are Essential' motivational section, repeats the same patterns across TypeScript/Python multiple times, and provides framework-specific examples (Next.js, FastAPI) that add bulk without unique instructional value. The 'Quick Reference' at the end essentially duplicates earlier content.
- Actionability (3/3): The skill provides fully executable, copy-paste ready code examples in both TypeScript and Python, concrete CLI commands for every configuration step, and specific JSON output examples showing correct vs incorrect states. Every step has concrete, runnable guidance.
- Workflow Clarity (3/3): The multi-step integration process is clearly sequenced (Steps 1-5), and the 'Test-Validate-Iterate Cycle' section provides an explicit validation flow with a decision tree for debugging. The feedback loop (trigger → fetch → validate → fix → retry) is well-defined with specific commands at each checkpoint.
- Progressive Disclosure (2/3): The skill references external documentation (https://app.olakai.ai/llms.txt) but dumps all content—quick start, detailed guide, framework integrations, edge cases, KPI formulas, quick reference—into a single monolithic file. Framework-specific integrations, edge cases, and the KPI formula reference could easily be split into separate referenced files to reduce cognitive load.

Total: 9 / 12 (Passed)
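The trigger → fetch → validate → fix → retry loop described under Workflow Clarity can be sketched generically. All names here are placeholders supplied for illustration, not Olakai CLI commands or SDK calls:

```python
def run_validation_cycle(trigger, fetch, validate, fix, max_attempts=3):
    """Generic test-validate-iterate loop: trigger a monitored call,
    fetch what the backend recorded, validate the record, and apply
    a fix before retrying. Returns True once validation passes."""
    for _ in range(max_attempts):
        trigger()                  # e.g. make one monitored LLM call
        record = fetch()           # e.g. pull the latest logged event
        problems = validate(record)
        if not problems:
            return True
        fix(problems)              # adjust configuration, then retry
    return False

# Usage with toy stand-ins: the "backend" starts misconfigured and
# the fix step repairs it, so the second attempt passes.
state = {"configured": False}
log = []

ok = run_validation_cycle(
    trigger=lambda: log.append({"kpi": state["configured"]}),
    fetch=lambda: log[-1],
    validate=lambda rec: [] if rec["kpi"] else ["kpi missing"],
    fix=lambda probs: state.update(configured=True),
)
```

In a real integration, each callback would map to one of the skill's concrete commands at that checkpoint; the structure of the loop is what the review is crediting.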

Validation: 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 8 / 11 checks passed

Criteria results

- skill_md_line_count (Warning): SKILL.md is long (661 lines); consider splitting into references/ and linking.
- metadata_version (Warning): 'metadata.version' is missing.
- metadata_field (Warning): 'metadata' should map string keys to string values.

Total: 8 / 11 (Passed)
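The two metadata warnings suggest a fix along these lines in the SKILL.md frontmatter. This is a sketch under the assumption that the spec expects a `metadata` mapping of string keys to string values; the version number and extra key shown are placeholders, not values from the actual skill.

```yaml
---
name: integrate
metadata:
  version: "1.0.0"      # addresses: 'metadata.version' is missing
  maintainer: "olakai"  # placeholder; every value quoted as a string,
                        # per the metadata_field warning
---
```

Quoting each value keeps YAML from parsing entries like `version: 1.0` as numbers, which is the usual cause of the string-keys-to-string-values warning.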

Repository: andrewyng/context-hub (Reviewed)

