
databricks-local-dev-loop

Configure Databricks local development with Databricks Connect, Asset Bundles, and IDE. Use when setting up a local dev environment, configuring test workflows, or establishing a fast iteration cycle with Databricks. Trigger with phrases like "databricks dev setup", "databricks local", "databricks IDE", "develop with databricks", "databricks connect".

80

Quality: 77% (does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-local-dev-loop/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger terms and completeness. Its main weakness is specificity: it could list more concrete actions beyond 'configure' and 'setting up'. The description effectively carves out a distinct niche and provides clear guidance on when to use it.

Suggestions

Add more specific concrete actions, e.g., 'Install and configure Databricks Connect, set up authentication profiles, create Asset Bundle project structure, configure IDE extensions and cluster connections.'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Databricks local development) and some specific tools (Databricks Connect, Asset Bundles, IDE), but doesn't list concrete actions beyond 'configure' and 'setting up'. It lacks detail on what specific tasks are performed (e.g., install dependencies, configure authentication, set up cluster connections). | 2 / 3 |
| Completeness | Clearly answers both 'what' (configure Databricks local development with Connect, Asset Bundles, and IDE) and 'when' (setting up local dev environment, configuring test workflows, establishing fast iteration cycle) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Includes a strong set of natural trigger terms that users would actually say: 'databricks dev setup', 'databricks local', 'databricks IDE', 'develop with databricks', 'databricks connect'. Also includes relevant terms like 'local dev environment', 'test workflows', and 'fast iteration cycle'. | 3 / 3 |
| Distinctiveness Conflict Risk | Highly distinctive with a clear niche around Databricks local development setup specifically. The combination of 'Databricks Connect', 'Asset Bundles', and IDE configuration creates a very specific scope unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, highly actionable skill with excellent concrete examples covering the full Databricks local dev setup workflow. Its main weaknesses are length and verbosity (the inline YAML configs, test fixtures, and VS Code settings could be offloaded to bundle files) and the lack of explicit validation gates between critical setup steps. The error handling table is a nice touch, but it would be more effective integrated into the workflow as checkpoints.

Suggestions

Add explicit validation checkpoints after critical steps (e.g., 'Verify Connect works before proceeding to Step 4' with a gate condition), and frame 'databricks-connect test' as a required pass/fail gate rather than just a command.

Move the detailed YAML configurations (databricks.yml, daily_etl.yml) and VS Code settings into bundle reference files, keeping only a minimal example inline with pointers to the full configs.

Trim the project structure tree to essential files only—Claude can infer standard Python project conventions like __init__.py files and common directory names.
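As a hypothetical illustration of the offloading suggestion above, a trimmed `databricks.yml` could keep only the bundle skeleton inline and pull job definitions like `daily_etl.yml` in via `include` (the bundle name, resource path, and workspace host below are placeholders, not taken from the skill):

```yaml
# Minimal Asset Bundle config; detailed job YAML lives in referenced files.
bundle:
  name: daily_etl          # placeholder bundle name

include:
  - resources/*.yml        # e.g. resources/daily_etl.yml job definition

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<your-workspace>.cloud.databricks.com
```

The skill's full configs would then be bundle reference files that this stub points to.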

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good code examples, but includes some unnecessary explanation (e.g., 'Databricks Connect lets you run PySpark code locally while executing on a remote Databricks cluster, giving you IDE debugging, fast iteration, and proper test isolation' and inline comments like '# Raw ingestion' in the project tree). The project structure section is quite detailed for something Claude could infer, and some comments are redundant. | 2 / 3 |
| Actionability | Excellent actionability with fully executable code blocks throughout: bash install commands, Python fixtures, YAML configurations, VS Code settings, and test examples. Every step has copy-paste ready code with specific version numbers and concrete commands like 'databricks bundle validate' and 'pytest tests/unit/ -v'. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (Steps 1-7), and the dev workflow commands in Step 6 include validation ('databricks bundle validate'). However, there's no explicit validation checkpoint after critical steps like installation (Step 2's 'databricks-connect test' is included but not framed as a gate), and no feedback loop for error recovery during the multi-step setup process. The error handling table is helpful but separate from the workflow. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and headers, and references external resources and a next-steps skill ('databricks-sdk-patterns'). However, the skill is quite long (~200+ lines) with substantial inline content (full YAML configs, multiple Python files, VS Code settings) that could be split into referenced files. No bundle files are provided to offload this content. | 2 / 3 |
| Total | | 9 / 12 |

Passed
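The validation gates the Workflow Clarity row asks for could be sketched as a small shell helper. The `gate` function below is hypothetical; only the `databricks-connect test` and `databricks bundle validate` commands come from the skill under review:

```shell
# gate: run a check and abort the setup flow if it fails (assumed pattern).
gate() {
  if "$@"; then
    echo "gate passed: $*"
  else
    echo "gate FAILED: $* -- fix this before continuing" >&2
    return 1
  fi
}

# In the skill's workflow these would become required pass/fail gates, e.g.:
#   gate databricks-connect test      # Step 2: Connect must work before Step 4
#   gate databricks bundle validate   # Step 6: bundle must validate before deploy
```

Framing each checkpoint this way turns the existing commands into explicit gate conditions rather than optional steps.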

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed
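A hypothetical sketch of how the two warnings above might be resolved in the SKILL.md frontmatter; the tool list and the `pack` key are illustrative, since the skill's actual frontmatter is not shown here:

```yaml
---
name: databricks-local-dev-loop
description: Configure Databricks local development with Databricks Connect, Asset Bundles, and IDE.
# allowed_tools_field: keep only standard tool names here
allowed-tools: Bash, Read, Write
# frontmatter_unknown_keys: move custom top-level keys under metadata
metadata:
  pack: databricks-pack   # illustrative custom key
---
```

This keeps custom keys available to tooling without tripping the unknown-key validation.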

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
