databricks-jobs

Develop and deploy Lakeflow Jobs on Databricks. Use when creating data engineering jobs with notebooks, Python wheels, or SQL tasks. Invoke BEFORE starting implementation.

Quality: 77% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory. Review suggested before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/saas-tracker/template/.agents/skills/databricks-jobs/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description: it clearly identifies its niche (Lakeflow Jobs on Databricks), provides explicit trigger guidance with a 'Use when' clause, and includes distinctive terminology that minimizes conflict risk. The main weakness is limited specificity of concrete actions: it mentions 'develop and deploy' but doesn't enumerate capabilities such as configuring schedules, setting cluster policies, or managing task dependencies.

Suggestions

Expand the concrete actions beyond 'develop and deploy' to include specifics like 'configure task dependencies, set schedules, define cluster policies, manage job parameters' to improve specificity.

Dimension | Reasoning | Score

Specificity (2/3): Names the domain (Lakeflow Jobs on Databricks) and some actions (develop, deploy, creating jobs with notebooks, Python wheels, SQL tasks), but doesn't list multiple concrete actions like configuring schedules, setting dependencies, monitoring runs, etc.

Completeness (3/3): Clearly answers both 'what' (develop and deploy Lakeflow Jobs on Databricks) and 'when' (when creating data engineering jobs with notebooks, Python wheels, or SQL tasks), with an explicit 'Use when' clause and an additional timing directive ('BEFORE starting implementation').

Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'Lakeflow Jobs', 'Databricks', 'data engineering jobs', 'notebooks', 'Python wheels', 'SQL tasks'. These cover the main terms a user working in this domain would naturally use.

Distinctiveness / Conflict Risk (3/3): Highly distinctive with specific triggers like 'Lakeflow Jobs', 'Databricks', and the combination of 'Python wheels' and 'SQL tasks' in a job context. Unlikely to conflict with generic coding or data skills.

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with concrete YAML and CLI examples that cover the full lifecycle of Lakeflow Jobs development. Its main weaknesses are the lack of error-handling/validation feedback loops in the workflow, some verbosity in sections that explain concepts Claude already knows (basic Spark operations, project structure), and a monolithic structure that could benefit from splitting advanced topics into separate files.
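The 'concrete YAML and CLI examples' the review credits might look like this minimal Lakeflow job definition in a Databricks Asset Bundle (a sketch only; the job name, notebook path, cron expression, and cluster settings are hypothetical, not taken from the skill itself):

```yaml
# databricks.yml fragment: one job with a single notebook task
resources:
  jobs:
    nightly_ingest:
      name: nightly-ingest
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ../src/ingest.py
          new_cluster:
            spark_version: 15.4.x-scala2.12
            node_type_id: i3.xlarge
            num_workers: 2
      schedule:
        quartz_cron_expression: "0 0 2 * * ?"
        timezone_id: UTC
```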

Suggestions

Add explicit error recovery steps to the Development Workflow section (e.g., 'If validate fails, check for YAML syntax errors and missing variable references; fix and re-validate before deploying').
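The suggested error-recovery step could be sketched as a small wrapper around the bundle CLI calls. This is a sketch under assumptions: the `databricks bundle validate/deploy/run` subcommands follow the Databricks CLI bundle workflow, and the job key `my_job` is hypothetical.

```shell
#!/usr/bin/env sh
# Run a workflow step; on failure, print a recovery hint and stop
# instead of continuing to the next (potentially destructive) step.
run_step() {
  hint=$1; shift
  if ! "$@"; then
    echo "Step failed: $*" >&2
    echo "Recovery hint: $hint" >&2
    return 1
  fi
}

# Example workflow (requires the Databricks CLI; commands are assumptions):
#   run_step "check databricks.yml for YAML syntax errors and unresolved variable references, then re-validate" \
#     databricks bundle validate -t dev || exit 1
#   run_step "inspect the deploy log; fix and re-validate before retrying" \
#     databricks bundle deploy -t dev || exit 1
#   run_step "open the run URL and check task-level error messages" \
#     databricks bundle run my_job -t dev || exit 1
run_step "no recovery needed" true && echo "demo step ok"
```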

Remove or significantly trim the notebook code section — Claude already knows basic Spark read/write/SQL patterns; instead, focus only on Databricks-specific patterns like dbutils.widgets.get().
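A Databricks-specific pattern of the kind the suggestion favors might look like this, hedged so it also runs off-platform (`dbutils.widgets.get` exists only inside a Databricks notebook; the parameter name `run_date` is hypothetical):

```python
def get_job_param(name: str, default: str) -> str:
    """Read a job parameter via Databricks widgets, with a local fallback."""
    try:
        return dbutils.widgets.get(name)  # defined only on Databricks
    except NameError:
        # Not running on Databricks (e.g. local tests): use the default.
        return default

run_date = get_job_param("run_date", "1970-01-01")
```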

Consider splitting scheduling, multi-task dependencies, and task type details into a separate reference file to improve progressive disclosure and reduce the main skill's token footprint.

Dimension | Reasoning | Score

Conciseness (2/3): Generally efficient, but includes some unnecessary content like the full CLAUDE.md/AGENTS.md template (which is boilerplate), and the project structure diagram is somewhat redundant given the YAML examples that follow. The notebook code section explains basic Spark operations Claude already knows.

Actionability (3/3): Provides fully executable YAML configurations, bash commands, and Python code. The scaffolding command is copy-paste ready with all flags explained, task configurations are complete and valid, and the development workflow has specific CLI commands.

Workflow Clarity (2/3): The Development Workflow section lists validate → deploy → run → check steps, which is good. However, there are no explicit validation checkpoints or feedback loops for error recovery (e.g., what to do if validation fails, how to interpret run-status errors, no retry guidance). For a deployment workflow involving potentially destructive operations, this caps the score at 2.

Progressive Disclosure (2/3): The skill references a parent 'databricks-core' skill and links to external documentation, which is good. However, the content is fairly long and monolithic: sections like scheduling options, multi-task dependencies, and notebook code patterns could be split into referenced files. No bundle files are provided to support progressive disclosure.

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.
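The warning above could be resolved with a frontmatter shape like this (a sketch; it assumes the skill spec accepts a metadata map, and `owner` stands in for whatever unknown key actually triggered the warning):

```yaml
---
name: databricks-jobs
description: Develop and deploy Lakeflow Jobs on Databricks. Use when creating data engineering jobs with notebooks, Python wheels, or SQL tasks. Invoke BEFORE starting implementation.
metadata:
  owner: data-platform-team  # formerly a top-level unknown key
---
```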

Total: 10 / 11 (Passed)

Repository: databricks/devhub (Reviewed)
