Create a minimal working Databricks example with cluster and notebook. Use when starting a new Databricks project, testing your setup, or learning basic Databricks patterns. Trigger with phrases like "databricks hello world", "databricks example", "databricks quick start", "first databricks notebook", "create cluster".
Score: 80

- Quality: 77% — Does it follow best practices?
- Impact: Pending — No eval scenarios have been run
- Validation: Passed — No known issues

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-hello-world/SKILL.md
```

Quality
Discovery — 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms and clear completeness. Its main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., configuring cluster settings, writing sample PySpark code, connecting to a workspace). The explicit trigger phrases and use-when clause make it highly functional for skill selection.
Suggestions
Expand the capability description with more specific actions, e.g., 'Creates a Databricks cluster configuration, writes a sample PySpark notebook, and sets up workspace connectivity' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks) and some actions ('create a minimal working example with cluster and notebook'), but doesn't list multiple specific concrete actions beyond creating a cluster and notebook. It's more of a high-level summary than a detailed capability list. | 2 / 3 |
| Completeness | Clearly answers both 'what' (create a minimal working Databricks example with cluster and notebook) and 'when' (starting a new Databricks project, testing setup, learning basic patterns), with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger terms that users would actually say: 'databricks hello world', 'databricks example', 'databricks quick start', 'first databricks notebook', 'create cluster'. These cover common variations of how users would phrase this request. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche focused on Databricks hello-world/quickstart scenarios. The specific trigger terms like 'databricks hello world' and 'first databricks notebook' are unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable Databricks hello-world skill with executable code for every step and good error handling coverage. Its main weaknesses are the lack of explicit validation checkpoints between steps (e.g., verifying cluster is running before uploading notebook) and the monolithic length that could benefit from splitting auxiliary content into referenced files. The hardcoded cluster_id in Step 3 instead of programmatic chaining from Step 1 is a notable gap.
Suggestions
Add explicit validation checkpoints between steps, e.g., verify cluster state is RUNNING before proceeding to notebook upload, and verify notebook exists before submitting the job run.
Chain the cluster_id programmatically from Step 1 into Step 3 instead of hardcoding a placeholder value, to make the workflow truly end-to-end executable.
Consider moving the error handling table, node type discovery examples, and SQL warehouse section into a separate reference file to reduce the main skill's length.
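The first two suggestions above amount to adding explicit wait-and-verify checkpoints and threading the `cluster_id` returned by Step 1 into later steps. A minimal sketch of such a checkpoint helper, assuming the skill uses the Databricks SDK for Python; the helper itself is generic, and the SDK wiring in the usage note is illustrative rather than taken from the skill:

```python
import time


def wait_for_state(get_state, target="RUNNING",
                   failed=("ERROR", "TERMINATED"),
                   timeout_s=600, poll_s=10):
    """Poll get_state() until it returns `target`, hits a failure state, or times out.

    Serves as an explicit validation checkpoint between workflow steps,
    e.g. after cluster creation and before notebook upload.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        state = get_state()
        if state == target:
            return state
        if state in failed:
            # Fail fast instead of submitting work to a dead cluster
            raise RuntimeError(f"cluster entered terminal state {state}")
        if time.monotonic() >= deadline:
            raise TimeoutError(f"cluster still {state} after {timeout_s}s")
        time.sleep(poll_s)
```

With the real SDK this might be wired as `wait_for_state(lambda: w.clusters.get(cluster.cluster_id).state.value)`, reusing the `cluster_id` object returned by Step 1 instead of the hardcoded placeholder the review flags in Step 3.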
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with executable code examples, but it's quite long for a 'hello world' skill. The inline comment `# POST /api/2.0/clusters/create` and some explanatory comments like `# Serverless warehouses start in seconds and cost ~$0.07/DBU` add minor bloat. Providing both CLI and SDK approaches for cluster creation is useful but adds length. | 2 / 3 |
| Actionability | Every step includes fully executable, copy-paste-ready code with both CLI and Python SDK examples. Commands include expected outputs, specific parameter values, and concrete notebook source code. The error handling table provides specific error codes with actionable solutions. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced (create cluster → upload notebook → run job → create warehouse → verify), but there are no explicit validation checkpoints between steps. Step 3 hardcodes a placeholder cluster_id rather than showing how to chain from Step 1's output. There's no feedback loop for cluster creation failures or notebook upload verification before running the job. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but it's a long monolithic file (~150+ lines of code) that could benefit from splitting. The error handling table, examples section, and SQL warehouse step could be separate reference files. The prerequisite reference to `databricks-install-auth` and next step to `databricks-local-dev-loop` show good cross-referencing. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation — 81% (9 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
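Both warnings point at the SKILL.md frontmatter. A sketch of what a cleaned-up frontmatter might look like, assuming the common SKILL.md convention where only keys such as `name`, `description`, and `allowed-tools` are recognized at the top level; the `metadata` nesting and the `pack` key shown here are illustrative, not taken from the actual file:

```yaml
---
name: databricks-hello-world
description: Create a minimal working Databricks example with cluster and notebook.
# Clear allowed_tools_field: stick to well-known tool names only
allowed-tools: Bash, Read, Write
# Clear frontmatter_unknown_keys: move custom keys under metadata (or remove them)
metadata:
  pack: saas-packs/databricks-pack
---
```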