
databricks-jobs

Develop and deploy Lakeflow Jobs on Databricks. Use when creating data engineering jobs with notebooks, Python wheels, or SQL tasks. Invoke BEFORE starting implementation.

81

Quality: 77% (Does it follow best practices?)

Impact: No eval scenarios have been run.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/databricks-jobs/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description: it clearly identifies its niche (Lakeflow Jobs on Databricks), provides explicit trigger guidance with a 'Use when' clause, and includes distinctive terminology that minimizes conflict risk. The main weakness is that it could enumerate more concrete actions: it mentions 'develop and deploy' but doesn't list specific capabilities such as configuring schedules, setting cluster policies, or managing task dependencies.

Suggestions

Expand the concrete actions beyond 'develop and deploy' to include specifics like 'configure task dependencies, set schedules, define cluster policies, manage job parameters' to improve specificity.

Dimension scores

Specificity: 2 / 3. Names the domain (Lakeflow Jobs on Databricks) and some actions (develop, deploy, creating jobs with notebooks, Python wheels, SQL tasks), but doesn't list multiple concrete actions like configuring schedules, setting dependencies, monitoring runs, etc.

Completeness: 3 / 3. Clearly answers both 'what' (develop and deploy Lakeflow Jobs on Databricks) and 'when' (when creating data engineering jobs with notebooks, Python wheels, or SQL tasks), with an explicit 'Use when' clause and an additional timing directive ('BEFORE starting implementation').

Trigger Term Quality: 3 / 3. Includes strong natural keywords users would say: 'Lakeflow Jobs', 'Databricks', 'data engineering jobs', 'notebooks', 'Python wheels', 'SQL tasks'. These cover the main terms a user working in this domain would naturally use.

Distinctiveness / Conflict Risk: 3 / 3. Highly distinctive, with specific triggers like 'Lakeflow Jobs', 'Databricks', and the combination of 'Python wheels' and 'SQL tasks' in a job context. Unlikely to conflict with generic coding or data skills.

Total: 11 / 12. Passed.

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with concrete YAML configurations and executable commands that cover the key aspects of Lakeflow Jobs development. Its main weaknesses are the lack of error recovery/validation feedback loops in the workflow, some unnecessary content (CLI install instructions duplicated from parent skill, basic Spark code), and the embedded CLAUDE.md template which adds bulk without teaching job-specific skills.

Suggestions

Add error recovery guidance to the Development Workflow: what to check when validation fails, common deployment errors, and how to interpret failed run status output.

Remove the CLI installation instructions from the CLAUDE.md template since they belong in the parent databricks-core skill, and trim the template to reduce token usage.

Remove or significantly shorten the notebook code section—Claude already knows basic Spark read/write/SQL operations; only the dbutils.widgets.get() pattern is novel and worth keeping.
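The dbutils.widgets.get() pattern the last suggestion singles out as worth keeping can be captured in a few lines. A minimal sketch, assuming a Databricks notebook where `dbutils` is injected by the runtime; the `get_param` helper and its `widgets` argument are hypothetical, added here so the fallback behavior can be exercised outside a notebook:

```python
def get_param(widgets, name, default=None):
    """Read a job parameter from notebook widgets, falling back to a
    default when the widget is missing (e.g. when running locally).

    In a real Databricks notebook the call would be:
        table = get_param(dbutils.widgets, "table", "events")
    where job parameters are passed via base_parameters in the job config.
    """
    try:
        return widgets.get(name)
    except Exception:
        # dbutils raises when the widget is not defined; use the default.
        return default
```

Keeping only this pattern (and dropping the basic Spark read/write examples) would address the conciseness concern while preserving the one genuinely job-specific idiom.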

Dimension scores

Conciseness: 2 / 3. Generally efficient, but includes some unnecessary content: the full CLAUDE.md/AGENTS.md template (boilerplate), CLI installation instructions (which belong in the parent skill), and a project structure diagram that could be more compact. The notebook code section explains basic Spark operations Claude already knows.

Actionability: 3 / 3. Provides fully executable YAML configurations, concrete bash commands, and copy-paste ready code examples throughout. The scaffolding command, job configuration, scheduling, and deployment workflow are all specific and immediately usable.

Workflow Clarity: 2 / 3. The Development Workflow section provides a clear 4-step sequence (validate → deploy → run → check status), but lacks explicit validation checkpoints and error-recovery feedback loops. There's no guidance on what to do if validation fails, deployment errors occur, or run status shows failure.

Progressive Disclosure: 2 / 3. References the parent 'databricks-core' skill appropriately and includes external documentation links, but the skill itself is somewhat monolithic: the CLAUDE.md template and the detailed YAML examples for scheduling, multi-task jobs, and parameters could potentially be split into referenced files. However, without bundle files, the inline approach is reasonable for the content volume.

Total: 9 / 12. Passed.
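The error-recovery loop the Workflow Clarity review asks for can be sketched as a thin wrapper around the 4-step sequence. This is a hedged illustration, not the skill's actual workflow: the step list assumes Databricks CLI v2 bundle commands (`databricks bundle validate` / `deploy` / `run`), "my_job" is a placeholder job key, and the injectable `run_step` callable is an assumption added to make the stop-on-failure behavior testable:

```python
import subprocess

# Assumed Databricks CLI v2 bundle workflow; "my_job" is a placeholder job key.
STEPS = [
    "databricks bundle validate",
    "databricks bundle deploy",
    "databricks bundle run my_job",
]

def run_workflow(steps, run_step=None):
    """Run each CLI step in order, stopping at the first non-zero exit
    so the failing step can be inspected and fixed before retrying."""
    if run_step is None:
        run_step = lambda cmd: subprocess.run(cmd.split()).returncode
    for step in steps:
        if run_step(step) != 0:
            # Stop here: fix the reported error, then re-run from this step.
            return f"failed: {step}"
    return "ok"
```

Guidance of this shape in the skill (halt on the failing step, surface its output, retry from there) would close the feedback-loop gap noted above.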

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria results

frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11. Passed.

Repository: databricks/databricks-agent-skills (Reviewed)
