
databricks-core-workflow-a

Execute Databricks primary workflow: Delta Lake ETL pipelines. Use when building data ingestion pipelines, implementing medallion architecture, or creating Delta Lake transformations. Trigger with phrases like "databricks ETL", "delta lake pipeline", "medallion architecture", "databricks data pipeline", "bronze silver gold".

Score: 80

Quality: 77% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-core-workflow-a/SKILL.md
```

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with strong trigger terms and clear 'when to use' guidance. Its main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., schema enforcement, merge operations, table creation). The explicit trigger phrases section is a notable strength for skill selection.

Suggestions

Add more specific concrete actions to the capability description, e.g., 'create Delta tables, implement merge/upsert operations, enforce schemas, optimize with Z-ordering' to improve specificity.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Databricks, Delta Lake ETL pipelines) and mentions some actions like 'data ingestion pipelines', 'medallion architecture', and 'Delta Lake transformations', but doesn't list multiple concrete specific actions (e.g., no mention of specific operations like reading from sources, writing to Delta tables, schema enforcement, merge/upsert operations). | 2 / 3 |
| Completeness | Clearly answers both 'what' (Execute Databricks Delta Lake ETL pipelines) and 'when' (explicit 'Use when' clause with triggers, plus a 'Trigger with phrases like' section providing additional explicit guidance). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'databricks ETL', 'delta lake pipeline', 'medallion architecture', 'databricks data pipeline', 'bronze silver gold'. These are terms users would naturally use when requesting this type of work. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche around Databricks and Delta Lake specifically. Terms like 'medallion architecture', 'bronze silver gold', and 'delta lake' are very specific to this domain and unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **11 / 12** |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with production-quality code examples covering the full Databricks Delta Lake ETL workflow. Its main weaknesses are the lack of explicit validation checkpoints integrated into the workflow steps (e.g., verifying Bronze data before proceeding to Silver) and the monolithic structure, which could benefit from splitting the DLT alternative and job scheduling into separate referenced files. The error-handling table is a nice touch, but the skill could be tighter overall.

Suggestions

Integrate validation checkpoints directly into the workflow (e.g., after Step 1, add a verification query to confirm Bronze ingestion succeeded before proceeding to Silver; after Step 2, verify MERGE results).

Split the DLT pipeline (Step 5) and job scheduling (Step 6) into separate referenced files since they represent alternative/supplementary approaches, reducing the main skill's token footprint.

Move the 'Quick Pipeline Validation' SQL from the Examples section into the workflow as an explicit final validation step with a feedback loop (e.g., 'If row counts don't flow as expected, check X').
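The first and third suggestions amount to the same idea: a row-count checkpoint between layers with a feedback message when counts don't flow as expected. A minimal sketch in plain Python, assuming Silver deduplicates Bronze and Gold aggregates Silver (both assumptions are illustrative, not taken from the reviewed skill; in practice the counts would come from Spark queries):

```python
def check_medallion_counts(bronze: int, silver: int, gold: int) -> list[str]:
    """Return human-readable issues if row counts don't flow as expected.

    Assumes Silver filters/dedupes Bronze (so silver <= bronze) and Gold
    aggregates Silver (so gold <= silver). Adjust for your own pipeline.
    """
    issues = []
    if bronze == 0:
        issues.append("Bronze is empty: check the ingestion source path and schema.")
    if silver > bronze:
        issues.append("Silver has more rows than Bronze: check the MERGE condition for duplicates.")
    if silver and gold > silver:
        issues.append("Gold has more rows than Silver: check the aggregation grouping keys.")
    return issues

# A healthy Bronze -> Silver -> Gold flow produces no issues.
print(check_medallion_counts(1000, 950, 40))  # → []
```

Wiring a call like this after Step 1 (and again after the Step 2 MERGE) would give the workflow the explicit "if counts don't flow, check X" feedback loop the review asks for.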

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good code examples, but includes some unnecessary explanatory text (e.g., 'Auto Loader handles schema inference, evolution, and scales to millions of files') and the architecture ASCII diagram, while nice, adds tokens for something Claude can infer. The overall length (~200 lines of code) is substantial but largely justified by the multi-step pipeline. | 2 / 3 |
| Actionability | Every step includes fully executable, copy-paste-ready PySpark/SQL code with realistic table names, proper imports, and complete configurations. The code covers streaming ingestion, MERGE upserts, DLT declarations, and job scheduling with the Databricks SDK, all concrete and specific. | 3 / 3 |
| Workflow Clarity | The 6-step sequence is clearly laid out with logical dependencies (Bronze → Silver → Gold → Maintenance → DLT → Schedule), but there are no explicit validation checkpoints between steps. The 'Quick Pipeline Validation' SQL is buried in Examples rather than integrated into the workflow. For a multi-step ETL pipeline involving data transformations and MERGE operations, missing inline validation/verification steps caps this at 2. | 2 / 3 |
| Progressive Disclosure | The skill has good section structure and links to external Databricks docs, but the content is monolithic: the DLT pipeline (Step 5) is essentially an alternative approach that could be a separate reference file, and the job scheduling (Step 6) could also be split out. The inline content is quite long for a single SKILL.md without offloading detailed sections. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed
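The Workflow Clarity critique above boils down to steps that run in sequence without gates between them. One way to sketch the fix is a tiny orchestrator that refuses to continue past a failed check. Everything here is a hypothetical stand-in (the step names, the in-memory `state` dict, the lambdas); in the real skill each action would be a Spark job and each check a validation query:

```python
from typing import Callable

def run_with_checkpoints(
    steps: list[tuple[str, Callable[[], None], Callable[[], bool]]],
) -> list[str]:
    """Run (name, action, check) steps in order; stop at the first failed check."""
    completed = []
    for name, action, check in steps:
        action()
        if not check():
            raise RuntimeError(f"Checkpoint failed after step '{name}'; fix before continuing.")
        completed.append(name)
    return completed

# Hypothetical medallion steps; real actions would be Bronze/Silver Spark jobs.
state = {"bronze": 0, "silver": 0}
steps = [
    ("bronze_ingest", lambda: state.update(bronze=100), lambda: state["bronze"] > 0),
    ("silver_merge", lambda: state.update(silver=95), lambda: 0 < state["silver"] <= state["bronze"]),
]
print(run_with_checkpoints(steps))  # → ['bronze_ingest', 'silver_merge']
```

The design point is that the check belongs to the step, so a failed Bronze ingestion halts the pipeline with a named checkpoint instead of silently feeding bad data into the Silver MERGE.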

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Passed
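Both warnings point at the SKILL.md frontmatter. A sketch of a shape that would avoid them, assuming the skill spec recognizes `name`, `description`, `allowed-tools`, and a `metadata` map (the reviewed file's actual frontmatter isn't shown, so every key and value below is illustrative):

```yaml
---
name: databricks-core-workflow-a
description: Execute Databricks Delta Lake ETL pipelines ...
# The first warning suggests an entry here isn't a recognized tool name;
# keep this list to tools the runtime actually exposes.
allowed-tools: Bash, Read, Write
# Unknown top-level keys trigger the second warning; nest custom fields
# under `metadata` instead of adding them at the top level.
metadata:
  pack: databricks-pack
---
```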

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
