Create a minimal working Databricks example with cluster and notebook. Use when starting a new Databricks project, testing your setup, or learning basic Databricks patterns. Trigger with phrases like "databricks hello world", "databricks example", "databricks quick start", "first databricks notebook", "create cluster".
- Overall: 80
- Best practices: 77%
- Impact: Pending (no eval scenarios have been run)
- Issues: Passed (no known issues)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-hello-world/SKILL.md`

Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms and clear 'what/when' guidance. Its main weakness is that the capability description could be more specific about what concrete actions are performed beyond creating a cluster and notebook (e.g., configuring the cluster, writing sample code, running a job). Overall it's effective for skill selection.
Suggestions
Add more specific concrete actions to the capability description, e.g., 'Creates a Databricks cluster configuration, sets up a sample notebook with PySpark code, and runs a basic job' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks) and some actions ('Create a minimal working example with cluster and notebook'), but doesn't list multiple specific concrete actions beyond creating a cluster and notebook. It's more of a high-level summary than a detailed capability list. | 2 / 3 |
| Completeness | Clearly answers both 'what' (create a minimal working Databricks example with cluster and notebook) and 'when' (starting a new Databricks project, testing setup, learning basic patterns), with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes excellent natural trigger terms that users would actually say: 'databricks hello world', 'databricks example', 'databricks quick start', 'first databricks notebook', 'create cluster'. These cover common variations of how users would phrase this request. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Databricks hello world / quick start examples. The specific trigger terms like 'databricks hello world' and 'first databricks notebook' are unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable Databricks hello world skill with excellent executable code examples covering both CLI and SDK approaches. Its main weaknesses are length (it covers more than a minimal hello world needs, including SQL warehouses and Delta Lake) and missing explicit validation checkpoints between steps. The hardcoded cluster_id in Step 3 that doesn't programmatically connect to Step 1's output is a workflow gap.
Suggestions
Add explicit validation checkpoints between steps, e.g., 'Verify cluster state is RUNNING before proceeding' with a concrete check command or code snippet.
Pass the cluster_id from Step 1 programmatically into Step 3 instead of using a hardcoded placeholder, to avoid user confusion and errors.
Consider trimming Step 4 (SQL warehouse) and the Examples section into a separate referenced file to reduce the main skill's token footprint for a 'hello world' use case.
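The first two suggestions can be sketched together: an explicit checkpoint that polls cluster state before proceeding, with the cluster ID flowing from creation into later steps. The `wait_for_state` helper below is hypothetical (it is not part of the skill), and the commented Databricks SDK wiring assumes a configured workspace; `run_notebook` is a placeholder for whatever Step 3 actually invokes.

```python
import time


def wait_for_state(get_state, target="RUNNING", timeout_s=600, poll_s=10):
    """Poll get_state() until it returns `target`; fail fast on terminal states."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state == target:
            return state
        if state in ("TERMINATED", "ERROR"):
            raise RuntimeError(f"cluster reached terminal state {state}")
        time.sleep(poll_s)
    raise TimeoutError(f"cluster not {target} after {timeout_s}s")


# Hypothetical wiring with the Databricks Python SDK (requires a workspace):
#   from databricks.sdk import WorkspaceClient
#   w = WorkspaceClient()
#   cluster = w.clusters.create(...).result()          # Step 1 output
#   wait_for_state(lambda: w.clusters.get(cluster.cluster_id).state.value)
#   run_notebook(cluster_id=cluster.cluster_id)        # Step 3 uses the real ID
```

Threading the `cluster_id` through in code, rather than asking the user to paste it into a placeholder, removes the workflow gap the review flags.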
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but it's quite long for a 'hello world' skill. Some sections like the SQL warehouse (Step 4) and the Examples section add scope beyond what a minimal hello world needs. The error handling table and resources are useful but add bulk. | 2 / 3 |
| Actionability | Excellent actionability: every step has fully executable, copy-paste ready code with both CLI and Python SDK options. Commands include expected outputs, specific parameter values, and real API endpoints. The error handling table provides concrete solutions for common failures. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced and numbered, but validation checkpoints are mostly implicit. Step 3 checks run state and Step 5 provides CLI verification, but there's no explicit 'verify cluster is RUNNING before proceeding to Step 2' checkpoint, and no feedback loop for handling cluster creation failures before moving on. The hardcoded cluster_id in Step 3 could cause confusion. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections, but it's monolithic: all content is inline in a single file with no bundle files to offload detail. The error handling table, examples section, and SQL warehouse step could be separated into referenced files. References to external docs and next steps are good, but the skill itself is dense. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
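Both warnings concern frontmatter hygiene. A minimal sketch of the fix, assuming a standard SKILL.md frontmatter layout; the field names come from the warnings themselves, but the concrete values below are hypothetical and not taken from the actual file:

```yaml
---
name: databricks-hello-world
description: Create a minimal working Databricks example with cluster and notebook.
# keep allowed-tools limited to names the host agent actually recognizes
allowed-tools: Bash, Read, Write
# move any non-standard top-level keys under metadata
metadata:
  pack: saas-packs/databricks-pack
---
```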