
databricks-spark-declarative-pipelines

Creates, configures, and updates Databricks Lakeflow Spark Declarative Pipelines (SDP/LDP) using serverless compute. Handles data ingestion with streaming tables, materialized views, CDC, SCD Type 2, and Auto Loader ingestion patterns. Use when building data pipelines, working with Delta Live Tables, ingesting streaming data, implementing change data capture, or when the user mentions SDP, LDP, DLT, Lakeflow pipelines, streaming tables, or bronze/silver/gold medallion architectures.
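The ingestion patterns listed above (a streaming table fed by Auto Loader, refined into a materialized view) can be sketched in SDP SQL. This is a minimal illustration only; the volume path, table names, and columns are assumptions, not code taken from the skill itself.

```sql
-- Bronze layer: incremental file ingestion with Auto Loader (read_files).
-- The volume path and JSON format are illustrative assumptions.
CREATE OR REFRESH STREAMING TABLE bronze_orders
AS SELECT
  *,
  _metadata.file_path AS source_file
FROM STREAM read_files(
  '/Volumes/main/raw/orders/',
  format => 'json'
);

-- Silver layer: a materialized view that filters and types the bronze rows.
CREATE OR REFRESH MATERIALIZED VIEW silver_orders
AS SELECT
  order_id,
  customer_id,
  CAST(amount AS DECIMAL(10, 2)) AS amount
FROM bronze_orders
WHERE order_id IS NOT NULL;
```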

94

Quality: 92% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It provides specific concrete actions, comprehensive trigger terms covering both acronyms and full names, an explicit 'Use when...' clause, and is highly distinctive in its Databricks Lakeflow niche. The description is concise yet thorough, using proper third-person voice throughout.

Dimension / Reasoning / Score

Specificity

Lists multiple specific, concrete actions: creates/configures/updates pipelines, and handles data ingestion with streaming tables, materialized views, CDC, SCD Type 2, and Auto Loader ingestion patterns. Very detailed and actionable.

3 / 3

Completeness

Clearly answers both 'what' (creates/configures/updates Databricks Lakeflow pipelines with specific patterns) and 'when' (explicit 'Use when...' clause listing multiple trigger scenarios including building data pipelines, working with DLT, ingesting streaming data, etc.).

3 / 3

Trigger Term Quality

Excellent coverage of natural terms users would say: 'data pipelines', 'Delta Live Tables', 'streaming data', 'change data capture', 'SDP', 'LDP', 'DLT', 'Lakeflow pipelines', 'streaming tables', 'bronze/silver/gold medallion architectures'. Covers both acronyms and full names.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear niche in Databricks Lakeflow/DLT pipelines. The specific technology references (SDP, LDP, DLT, Auto Loader, medallion architecture, serverless compute) make it very unlikely to conflict with other skills.

3 / 3

Total: 12 / 12

Passed
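As a concrete illustration of the CDC and SCD Type 2 handling the description advertises, SDP SQL expresses change capture with APPLY CHANGES INTO. The feed, key, operation, and sequencing columns below are assumed names for the sketch, not taken from the skill.

```sql
-- Target streaming table that accumulates SCD Type 2 history.
CREATE OR REFRESH STREAMING TABLE customers_history;

-- Apply the change feed: rows are keyed by customer_id, ordered by
-- sequence_num, deletes are honored, and full history is retained.
APPLY CHANGES INTO customers_history
FROM STREAM(cdc_customer_feed)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'DELETE'
SEQUENCE BY sequence_num
STORED AS SCD TYPE 2;
```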

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill that serves as an effective routing hub for Databricks SDP pipeline development. Its strengths are the clear workflow decision tree, comprehensive post-run validation with feedback loops, and excellent progressive disclosure to language-specific reference files. The main weakness is moderate verbosity—some sections are repetitive (task routing tables echo the checklist), and the document could be more concise without losing clarity.

Suggestions

Consolidate the 'Required Checklist' table and 'Task-Based Routing' section to eliminate duplication of the same reference file links appearing in both places.

Dimension / Reasoning / Score

Conciseness

The skill is fairly comprehensive but includes some redundancy—the task-based routing tables repeat information already covered in the checklist and workflow sections. The legacy API table and quick reference are efficient, but the overall document could be tightened by ~20-30% without losing information. Some sections like 'Names' explaining SDP=LDP are borderline unnecessary.

2 / 3

Actionability

The skill provides concrete, executable code examples (SQL and Python), specific CLI commands with arguments, JSON configuration examples, and precise tool invocations like `get_table_stats_and_schema`. The decision tables for language selection and workflow choice are immediately actionable.

3 / 3

Workflow Clarity

The skill has excellent workflow structure: a clear decision tree for choosing workflows (A/B/C), a required checklist before writing code, and a thorough 3-step post-run validation process with explicit feedback loops (trace upstream, fix, re-upload, re-run). The validation section includes specific checks for empty tables, wrong counts, and debugging steps.

3 / 3

Progressive Disclosure

The skill is an exemplary hub document with well-organized one-level-deep references to language-specific guides (sql/1-5, python/1-5), general guides (1-4), and external documentation. References are clearly signaled with descriptive labels and organized by task in tables. The main file provides enough context to route correctly without requiring reading all sub-files.

3 / 3

Total: 11 / 12

Passed
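The post-run validation the review praises (checking for empty tables and wrong counts before declaring success) boils down to simple queries against the refreshed tables. A sketch with assumed table names:

```sql
-- Sanity check after a pipeline update: did each layer receive rows?
SELECT
  (SELECT COUNT(*) FROM bronze_orders) AS bronze_rows,
  (SELECT COUNT(*) FROM silver_orders) AS silver_rows;

-- A large gap between layers suggests an over-aggressive filter or a
-- broken join upstream; trace it, fix the definition, and re-run.
```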

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository
databricks-solutions/ai-dev-kit
Reviewed
