databricks-jobs

Use this skill proactively for ANY Databricks Jobs task - creating, listing, running, updating, or deleting jobs. Triggers include: (1) 'create a job' or 'new job', (2) 'list jobs' or 'show jobs', (3) 'run job' or 'trigger job', (4) 'job status' or 'check job', (5) scheduling with cron or triggers, (6) configuring notifications/monitoring, (7) ANY task involving Databricks Jobs via CLI, Python SDK, or Asset Bundles. ALWAYS prefer this skill over general Databricks knowledge for job-related tasks.

89

- Quality: 86% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Security by Snyk: Passed (no known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its scope (Databricks Jobs management), lists concrete actions, and provides extensive explicit trigger terms. The explicit instruction to prefer this skill over general Databricks knowledge helps with disambiguation. However, it uses second-person imperative voice ('Use this skill') rather than third-person descriptive voice, and reads more like an instruction to Claude than a neutral description of capabilities.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions (creating, listing, running, updating, and deleting jobs; scheduling with cron/triggers; configuring notifications/monitoring) and specifies tools (CLI, Python SDK, Asset Bundles). | 3 / 3 |
| Completeness | Clearly answers both 'what' (creating, listing, running, updating, and deleting Databricks Jobs; scheduling; notifications) and 'when', with explicit numbered trigger conditions and a clear 'Use this skill proactively for ANY Databricks Jobs task' directive. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'create a job', 'new job', 'list jobs', 'show jobs', 'run job', 'trigger job', 'job status', 'check job', plus domain terms like cron, notifications, and Databricks Jobs. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to Databricks Jobs specifically, with explicit differentiation from 'general Databricks knowledge' and distinct triggers tied to job-related operations. Unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill with excellent progressive disclosure and strong actionability across multiple interfaces. The main weaknesses are minor verbosity (summary tables that partially duplicate referenced content, an unnecessary overview sentence) and the lack of explicit validation/error-recovery steps in workflows. Overall, it serves as an effective reference for Databricks Jobs management.

Suggestions

Add a validation checkpoint to the Asset Bundle workflow (e.g., 'If validate fails, fix YAML errors before deploying') and consider adding error-recovery guidance for SDK job creation failures.

Remove or shorten the opening overview sentence ('Databricks Jobs orchestrate data workflows...') since Claude already knows this, and consider whether the full task-type and trigger-type summary tables are needed given they link to dedicated reference files.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient but includes some content that could be tightened. The summary tables for task types and trigger types duplicate information that's already in the referenced files. The overview sentence explaining what Databricks Jobs are is unnecessary for Claude. However, most content is practical and reference-oriented. | 2 / 3 |
| Actionability | Provides fully executable code across three interfaces (Python SDK, CLI, Asset Bundles) with copy-paste ready examples. Common operations are concrete with specific commands and parameters, and the quick start examples are complete and runnable. | 3 / 3 |
| Workflow Clarity | Multi-task DAG dependencies are clearly explained with run_if conditions, and Asset Bundle operations show a logical sequence (validate → deploy → run). However, there are no explicit validation checkpoints or feedback loops for error recovery when creating/updating jobs, and the bundle workflow lacks a 'verify deployment succeeded' step before running. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a clear reference table at the top pointing to one-level-deep files (task-types.md, triggers-schedules.md, notifications-monitoring.md, examples.md). Summary tables with anchor links to specific sections in reference files enable easy navigation. The main file serves as a well-organized overview without being monolithic. | 3 / 3 |
| **Total** | | **10 / 12** |

Passed
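The Asset Bundle workflow scored above (validate → deploy → run) operates on a databricks.yml along these lines. This is a minimal sketch: the bundle name, job key, and notebook path are placeholders, not anything taken from the reviewed skill.

```yaml
# Minimal Asset Bundle sketch; all names and paths are placeholders.
bundle:
  name: demo_bundle

resources:
  jobs:
    nightly_etl:
      name: nightly-etl
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: /Workspace/etl/main
```

Running `databricks bundle validate` against a file like this catches YAML and schema errors before `deploy`, which is exactly the checkpoint the suggestions above recommend making explicit.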

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
