
databricks-spark-declarative-pipelines

Creates, configures, and updates Databricks Lakeflow Spark Declarative Pipelines (SDP/LDP) using serverless compute. Handles data ingestion with streaming tables, materialized views, CDC, SCD Type 2, and Auto Loader ingestion patterns. Use when building data pipelines, working with Delta Live Tables, ingesting streaming data, implementing change data capture, or when the user mentions SDP, LDP, DLT, Lakeflow pipelines, streaming tables, or bronze/silver/gold medallion architectures.
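As an illustration of the patterns the description names — and only as a hedged sketch, since the table names, volume path, and columns below are hypothetical rather than taken from the skill itself — a minimal Lakeflow Declarative Pipelines SQL definition of a bronze streaming table fed by Auto Loader plus a gold materialized view might look like:

```sql
-- Bronze: incrementally ingest raw JSON files with Auto Loader.
-- The volume path and column names are placeholders.
CREATE OR REFRESH STREAMING TABLE bronze_orders
AS SELECT
  *,
  _metadata.file_path AS source_file,
  current_timestamp() AS ingested_at
FROM STREAM read_files(
  '/Volumes/main/raw/orders/',
  format => 'json'
);

-- Gold: a materialized view that aggregates downstream of the stream.
CREATE OR REFRESH MATERIALIZED VIEW gold_daily_order_totals
AS SELECT
  order_date,
  count(*)    AS order_count,
  sum(amount) AS total_amount
FROM bronze_orders
GROUP BY order_date;
```

Both statements are declarative: the pipeline engine, not the author, manages checkpoints, incremental refresh, and dependency ordering between the two tables.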


Quality: 92%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Passed
No known issues.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It provides specific concrete actions, comprehensive trigger terms covering both acronyms and full names, an explicit 'Use when...' clause, and is highly distinctive in its Databricks Lakeflow niche. The description is concise yet thorough, using proper third-person voice throughout.

Dimension scores:

Specificity: 3 / 3
Lists multiple specific, concrete actions: creates/configures/updates pipelines; handles data ingestion with streaming tables, materialized views, CDC, SCD Type 2, and Auto Loader ingestion patterns. Very detailed and actionable.

Completeness: 3 / 3
Clearly answers both 'what' (creates/configures/updates Databricks Lakeflow pipelines with specific patterns) and 'when' (an explicit 'Use when...' clause listing multiple trigger scenarios, including building data pipelines, working with DLT, ingesting streaming data, etc.).

Trigger Term Quality: 3 / 3
Excellent coverage of natural terms users would say: 'data pipelines', 'Delta Live Tables', 'streaming data', 'change data capture', 'SDP', 'LDP', 'DLT', 'Lakeflow pipelines', 'streaming tables', 'bronze/silver/gold medallion architectures'. Covers both acronyms and full names.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive, with a clear niche in Databricks Lakeflow/DLT pipelines. The specific technology references (SDP, LDP, DLT, Auto Loader, medallion architecture, serverless compute) make it very unlikely to conflict with other skills.

Total: 12 / 12

Passed

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill that excels at progressive disclosure and workflow clarity. It provides concrete code examples, clear decision trees, and thorough validation steps with feedback loops. The main weakness is moderate verbosity—some sections are redundant (task routing appears in multiple forms) and a few general best-practice statements could be trimmed to respect token budget.

Suggestions

Consolidate the task-based routing: the 'Required Checklist' table and the 'Task-Based Routing' section largely duplicate each other—merge them into a single reference point.

Trim general advice Claude already knows, such as the gold layer guidance about 'It's easier to aggregate further in queries than to recover lost dimensions' and the explanation of what medallion layers do conceptually.

Dimension scores:

Conciseness: 2 / 3
The skill is quite long (300+ lines) with some redundancy: workflow routing is explained multiple times, the task-based routing tables duplicate the checklist table, and the medallion architecture guidance includes some general advice Claude would already know (e.g., 'It's easier to aggregate further in queries than to recover lost dimensions'). However, most content is domain-specific and earns its place.

Actionability: 3 / 3
The skill provides concrete, executable SQL and Python examples, specific CLI commands with exact flags, JSON configuration examples, and precise tool invocations (e.g., `get_table_stats_and_schema` with parameters). The legacy-to-modern mapping table and the common-issues table give copy-paste-ready solutions.

Workflow Clarity: 3 / 3
The skill has an excellent workflow structure: a clear decision tree for choosing workflows (A/B/C), a required checklist before writing code, and a detailed three-step post-run validation process with explicit checkpoints (check execution status → validate output data → debug data issues with upstream tracing). The feedback loop for fixing and re-running is explicit.

Progressive Disclosure: 3 / 3
Content is well structured, with a clear overview in the main file and one-level-deep references to specific guides (sql/1-syntax-basics.md, python/2-ingestion.md, etc.). The task-based routing tables make navigation easy, and the references are clearly signaled with descriptive labels. Advanced configuration is appropriately deferred to a separate file.

Total: 11 / 12

Passed
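The Actionability row above credits the skill with concrete, executable SQL for CDC and SCD Type 2. For readers unfamiliar with that pattern, a hedged sketch using Delta Live Tables' APPLY CHANGES API (table, key, and column names here are hypothetical, not drawn from the skill's own examples):

```sql
-- Silver: apply CDC events from the bronze stream, keeping full
-- row history as SCD Type 2 (validity intervals per key).
CREATE OR REFRESH STREAMING TABLE silver_customers;

APPLY CHANGES INTO silver_customers
FROM STREAM(bronze_customers)
KEYS (customer_id)
APPLY AS DELETE WHEN op = 'DELETE'
SEQUENCE BY event_ts
COLUMNS * EXCEPT (op, event_ts)
STORED AS SCD TYPE 2;
```

The engine uses SEQUENCE BY to order out-of-order events and, with STORED AS SCD TYPE 2, closes the previous version of a row instead of overwriting it, which is what makes the pattern declarative rather than hand-written MERGE logic.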

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
