
# databricks-core-workflow-a

Execute Databricks primary workflow: Delta Lake ETL pipelines. Use when building data ingestion pipelines, implementing medallion architecture, or creating Delta Lake transformations. Trigger with phrases like "databricks ETL", "delta lake pipeline", "medallion architecture", "databricks data pipeline", "bronze silver gold".

Score: 80

- Quality: 77% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-core-workflow-a/SKILL.md
```

## Quality

### Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with excellent trigger terms and completeness. It clearly identifies both what the skill does and when to use it, with explicit trigger phrases. The main weakness is that the specificity of concrete actions could be improved—it describes the domain well but doesn't enumerate specific operations or capabilities within Delta Lake ETL pipelines.

Suggestions:

- Add more specific concrete actions to improve specificity, e.g., 'Reads from various sources, writes to Delta tables, implements merge/upsert operations, enforces schema evolution, and manages partitioning.'
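To make the suggested actions concrete, here is a minimal pure-Python sketch of merge/upsert semantics, i.e. the behavior a Delta Lake `MERGE INTO` statement implements (matched rows are updated, unmatched rows are inserted). The table rows, key column, and names below are hypothetical illustrations, not part of the reviewed skill:

```python
def merge_upsert(target, updates, key):
    """Upsert semantics: update rows whose key matches, insert the rest.

    Pure-Python sketch of what a Delta Lake MERGE INTO does; `target`
    and `updates` are lists of dict rows, `key` names the join column.
    """
    merged = {row[key]: row for row in target}
    for row in updates:
        # matched -> merge new values over old; not matched -> insert
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

# Hypothetical example: customer rows keyed by "id"
target = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bo"}]
updates = [{"id": 2, "name": "Bob"}, {"id": 3, "name": "Cy"}]
result = merge_upsert(target, updates, "id")
```

On Databricks the same operation would be expressed with `MERGE INTO ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...`; the sketch only illustrates the semantics the description could enumerate.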

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Databricks, Delta Lake ETL pipelines) and mentions some actions like 'data ingestion pipelines', 'medallion architecture', and 'Delta Lake transformations', but doesn't list multiple concrete specific actions (e.g., no mention of specific operations like reading from sources, writing to Delta tables, schema enforcement, merge/upsert operations). | 2 / 3 |
| Completeness | Clearly answers both 'what' (Execute Databricks Delta Lake ETL pipelines) and 'when' (explicit 'Use when' clause with triggers, plus a 'Trigger with phrases like' section providing additional explicit guidance). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms that users would actually say: 'databricks ETL', 'delta lake pipeline', 'medallion architecture', 'databricks data pipeline', 'bronze silver gold'. These cover common variations and natural phrasing well. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with specific niche terms like 'Databricks', 'Delta Lake', 'medallion architecture', and 'bronze silver gold' that are unlikely to conflict with other skills. The combination of platform and pattern is very specific. | 3 / 3 |
| **Total** | | **11 / 12** |

Passed

### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with production-quality executable code covering the full Databricks Delta Lake ETL workflow. Its main weaknesses are the lack of explicit validation checkpoints between pipeline stages (e.g., verifying data quality before proceeding) and the monolithic structure that packs substantial content into a single file. The error handling table and validation query are valuable additions.

Suggestions:

- Add explicit validation checkpoints between steps, e.g., 'Verify bronze ingestion: SELECT COUNT(*) FROM the bronze table; confirm > 0 rows before proceeding to Silver', with a feedback loop for failures.
- Consider splitting the DLT pipeline (Step 5) and job scheduling (Step 6) into separate referenced files to reduce the main SKILL.md size and improve progressive disclosure.
- Add a validation step after the Gold layer replaceWhere overwrite to confirm partition-level data integrity, since this is a destructive operation.
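The checkpoint pattern asked for in the first and last suggestions can be sketched in plain Python. This is a hedged illustration, not code from the reviewed skill: the table and partition names are hypothetical, and on Databricks the counts would come from `spark.sql("SELECT COUNT(*) ...")` rather than being passed in directly:

```python
class ValidationError(Exception):
    """Raised when a pipeline-stage checkpoint fails."""


def check_row_count(table_name, count, minimum=1):
    """Gate between pipeline stages: fail fast if a layer is empty."""
    if count < minimum:
        raise ValidationError(
            f"{table_name}: expected >= {minimum} rows, got {count}"
        )
    return count


def check_partition_integrity(expected, actual):
    """After a destructive replaceWhere overwrite, confirm each
    rewritten partition still holds the expected number of rows."""
    mismatched = {
        p: (expected[p], actual.get(p, 0))
        for p in expected
        if actual.get(p, 0) != expected[p]
    }
    if mismatched:
        raise ValidationError(f"partition mismatch: {mismatched}")


# Hypothetical counts, e.g. results of SELECT COUNT(*) queries
check_row_count("bronze_events", 1200)
check_partition_integrity(
    {"2024-01": 100, "2024-02": 90},
    {"2024-01": 100, "2024-02": 90},
)
```

Wired between steps, a failed check stops the run before the Silver MERGE or Gold overwrite executes, which is the feedback loop the review flags as missing.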

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with executable code examples, but includes some unnecessary explanatory text (e.g., 'Auto Loader handles schema inference, evolution, and scales to millions of files') and the architecture diagram, while helpful, adds tokens for something Claude can infer. The overall length (~200 lines) is substantial but largely justified by the multi-step workflow. | 2 / 3 |
| Actionability | Every step includes fully executable, copy-paste-ready PySpark/SQL code with realistic table names, proper imports, and complete configurations. The error handling table provides specific solutions, and the validation query at the end is directly runnable. | 3 / 3 |
| Workflow Clarity | The six steps are clearly sequenced with a logical Bronze > Silver > Gold progression, and the pipeline validation SQL at the end serves as a verification checkpoint. However, there are no explicit validation/feedback loops between steps (e.g., 'verify bronze row count before proceeding to silver', 'if MERGE fails, check X and retry'), which is important for this kind of multi-step data pipeline with destructive overwrites in the Gold layer. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and sections, but it's a monolithic document with ~200 lines of inline code that could benefit from splitting (e.g., DLT pipeline and job scheduling could be separate reference files). The 'Next Steps' reference to another workflow and external resource links are good, but the main content is dense for a single SKILL.md. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed
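The 'if MERGE fails, check X and retry' feedback loop flagged under Workflow Clarity could take the shape of a small retry wrapper around each stage. This is a sketch under the assumption that each pipeline step is exposed as a zero-argument callable; the flaky stage below is a hypothetical stand-in for a real MERGE:

```python
import time


def run_stage_with_retry(stage, retries=3, delay=0.0):
    """Run one pipeline stage; on failure, back off and retry.

    Sketch of a per-step feedback loop; `stage` is any zero-argument
    callable (e.g. a function wrapping a Silver-layer MERGE).
    """
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return stage()
        except Exception as exc:  # in practice, catch the specific Spark/Delta error
            last_error = exc
            time.sleep(delay * attempt)  # simple linear backoff
    raise RuntimeError(f"stage failed after {retries} attempts") from last_error


# Hypothetical flaky stage: fails twice, then succeeds
calls = {"n": 0}


def flaky_merge():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("MERGE conflict")
    return "ok"


result = run_stage_with_retry(flaky_merge)
```

Combining this with explicit checkpoints between stages would address both Workflow Clarity findings without restructuring the six-step sequence.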

### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (reviewed)

