ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
Overall score: 31% — Does it follow best practices?

Impact: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/clickhouse-io/SKILL.md`

Quality
Discovery
32% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies ClickHouse as the target domain, which provides some distinctiveness, but it reads more like a topic list than an actionable skill description. It lacks specific concrete actions and natural trigger-term variations, and it is critically missing a 'Use when...' clause to guide skill selection.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about ClickHouse queries, table design, MergeTree engines, or analytical database optimization.'
- List specific concrete actions such as 'Design MergeTree table schemas, optimize analytical queries, configure materialized views, troubleshoot ClickHouse performance issues.'
- Include natural trigger term variations users might say: 'ClickHouse', 'CH', 'columnar database', 'OLAP queries', 'MergeTree', '.clickhouse', 'analytical database'.
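Taken together, the suggestions above might yield a description like the following. This is a hypothetical sketch: the skill name and exact wording are illustrative, not the actual SKILL.md contents.

```yaml
# Hypothetical SKILL.md frontmatter sketch; name and wording are illustrative.
name: clickhouse-io
description: >
  Design ClickHouse MergeTree table schemas, optimize analytical queries,
  configure materialized views, and troubleshoot ClickHouse performance issues.
  Use when the user asks about ClickHouse, columnar or OLAP databases,
  MergeTree engines, materialized views, or analytical query optimization.
```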
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (ClickHouse database) and some general action areas (query optimization, analytics, data engineering best practices), but doesn't list specific concrete actions like 'write MergeTree table definitions' or 'optimize GROUP BY queries'. | 2 / 3 |
| Completeness | Describes what the skill covers at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak, so this scores 1. | 1 / 3 |
| Trigger Term Quality | Includes 'ClickHouse', 'query optimization', 'analytics', and 'data engineering', which are relevant keywords, but misses common user variations like 'columnar database', 'OLAP', 'MergeTree', 'materialized views', or 'analytical queries'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'ClickHouse' is a distinct technology that provides some niche specificity, but the broad terms 'analytics', 'data engineering best practices', and 'query optimization' could overlap with general SQL, database, or data engineering skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation
29% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides excellent executable code examples across many ClickHouse use cases, making it highly actionable. However, it is excessively long and monolithic, explaining concepts Claude already knows (what ClickHouse is, what column-oriented storage means), and lacks any workflow sequencing or validation steps for multi-step operations like ETL pipelines or schema migrations. The content would benefit greatly from being split into focused reference files with a concise overview.
Suggestions
- Remove the Overview section explaining what ClickHouse is and its key features — Claude already knows this. Cut the 'When to Activate' list or reduce it to 2-3 key triggers.
- Split into multiple files: keep SKILL.md as a concise overview with links to separate files like SCHEMA_PATTERNS.md, QUERY_OPTIMIZATION.md, DATA_PIPELINES.md, and ANALYTICS_QUERIES.md.
- Add explicit validation/verification workflows for multi-step operations — e.g., after bulk inserts verify row counts, after creating materialized views verify data flows correctly, after ETL runs compare source/target counts.
- Remove generic analytics query patterns (retention, funnel, cohort) that are standard SQL with minimal ClickHouse-specific value, or move them to a separate EXAMPLES.md file.
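As an illustration of the bulk-insert verification suggestion above, a validation step might look like the following sketch. The table names (`events`, `staging_events`) are hypothetical, not taken from the skill.

```sql
-- Hypothetical table names; adjust to the actual schema.
-- 1. Capture the target row count before the bulk insert.
SELECT count() FROM events;

-- 2. Run the bulk insert from the staging table.
INSERT INTO events SELECT * FROM staging_events;

-- 3. Verify: the target should have grown by exactly the staged row count.
SELECT
    (SELECT count() FROM events)         AS target_rows,
    (SELECT count() FROM staging_events) AS staged_rows;
```

A similar compare-counts query after each ETL run would close the validation gap the review calls out.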
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The overview section explains what ClickHouse is and lists basic features (column-oriented storage, parallel execution, etc.) that Claude already knows. The 'When to Activate' section is verbose. The document is ~350 lines with significant redundancy (e.g., querying materialized views shown twice with near-identical SQL). Many patterns are generic analytics queries (retention, funnel, cohort) that don't add ClickHouse-specific value beyond standard SQL. | 1 / 3 |
| Actionability | The skill provides fully executable SQL and TypeScript code throughout — CREATE TABLE statements, INSERT patterns, monitoring queries, and ETL examples are all copy-paste ready with concrete column names, types, and realistic data patterns. | 3 / 3 |
| Workflow Clarity | There are no sequenced multi-step workflows with validation checkpoints. The ETL pattern is a loose code sketch without error handling or validation. The bulk insert pattern has no verification step. For a skill involving data pipeline operations and schema migrations, the absence of validation/feedback loops is a significant gap. | 1 / 3 |
| Progressive Disclosure | This is a monolithic wall of content (~350 lines) with no references to external files. Table design, query optimization, insertion patterns, materialized views, monitoring, analytics queries, pipeline patterns, and best practices are all inlined. Content like common analytics queries and CDC patterns should be split into separate reference files. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 — Passed | |
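For the frontmatter_unknown_keys warning, one fix is to move non-spec keys under a metadata block, as the warning suggests. The sketch below is hypothetical: the offending keys (`author`, `tags`) are illustrative examples, not the keys actually flagged.

```yaml
# Before: unknown top-level keys trigger the warning (hypothetical keys).
# author: jane
# tags: [clickhouse, olap]

# After: keep spec keys at the top level, move the rest under metadata.
name: clickhouse-io
description: ClickHouse database patterns and query optimization.
metadata:
  author: jane
  tags: [clickhouse, olap]
```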