Migrate Databricks workloads from classic compute to serverless compute. Scans code for serverless compatibility issues, provides concrete fixes for the serverless Spark Connect architecture, and guides the full migration to serverless environments. Use for classic-to-serverless migrations, serverless code compatibility checks, or writing new serverless-compatible notebooks and jobs. Not for classic DBR version upgrades or cluster configuration changes within classic compute.
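To ground that description: the serverless Spark Connect architecture does not expose the driver-side `spark.sparkContext`, so RDD-based code is a typical pattern such a compatibility scan would flag. The snippet below is an illustrative sketch of that class of fix under those assumptions, not an excerpt from the skill itself.

```python
# Illustrative sketch, not taken from the skill under review.
# On serverless compute (Spark Connect), spark.sparkContext and the
# RDD API are unavailable, so this classic-compute pattern fails:
#
#   rdd = spark.sparkContext.parallelize([(1, "a"), (2, "b")])
#   df = rdd.toDF(["id", "value"])
#
# Serverless-compatible equivalent built on the DataFrame API:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in notebooks
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()
```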
Overall score: 89

Quality: 88%. Does it follow best practices? Passed; no known issues.
Impact: — (no eval scenarios have been run).
Quality
Discovery
100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its scope, provides specific actions, includes natural trigger terms, and explicitly states both when to use it and when not to use it. The inclusion of exclusion criteria is a notable strength that helps disambiguate from potentially similar Databricks-related skills. The description is concise yet comprehensive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: scanning code for compatibility issues, providing concrete fixes for Spark Connect architecture, guiding full migration to serverless environments. Also specifies what it does NOT do (classic DBR upgrades, cluster config changes). | 3 / 3 |
| Completeness | Clearly answers both 'what' (scans code, provides fixes, guides migration) and 'when' ('Use for classic-to-serverless migrations, serverless code compatibility checks, or writing new serverless-compatible notebooks and jobs'). Also includes explicit exclusions for additional clarity. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'Databricks', 'classic compute', 'serverless compute', 'serverless compatibility', 'Spark Connect', 'migration', 'serverless-compatible notebooks and jobs'. These are terms a user working on this specific task would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a very specific niche: Databricks classic-to-serverless migration. The explicit exclusions ('Not for classic DBR version upgrades or cluster configuration changes within classic compute') further reduce conflict risk with related Databricks skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
77%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality, comprehensive migration skill with exceptional actionability — nearly every incompatible pattern has a concrete, executable fix. The workflow is well-structured with clear phases, validation checkpoints, and stopping conditions. The main weakness is length: the skill tries to be both an overview and a detailed reference, which makes it quite long despite the existence of reference files that could absorb more of the inline detail.
Suggestions
Move the detailed pattern tables (Categories A-G) and extensive code examples to the referenced files (e.g., compatibility-checks.md, code-patterns.md) and keep only the most critical 5-10 patterns inline, reducing the main SKILL.md to a true overview with quick-reference essentials.
Provide the bundle reference files so the progressive disclosure structure can be fully realized — currently the five referenced guides don't exist, which means Claude would hit dead links.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is very long (~600+ lines) and contains extensive tables and code examples that are valuable but could be tightened. Some sections, like the failure reporting protocol and the migration deliverables table, add significant length. However, most content is domain-specific knowledge Claude wouldn't have (serverless-specific patterns, exact error messages, environment version mappings), so the verbosity is largely justified. It loses a point for some redundancy between the decision tree, the analysis tables, and the quick fixes section. | 2 / 3 |
| Actionability | Excellent actionability throughout. Every incompatible pattern has a concrete before/after code fix. The streaming fixes, RDD replacements, DBFS path migrations, and job config transformations are all copy-paste ready with executable Python code. The environment spec JSON, A/B comparison code, and catalog parameterization pattern are all immediately usable. | 3 / 3 |
| Workflow Clarity | The 4-step migration lifecycle (Ingest → Analyze → Test → Validate) is clearly sequenced with explicit validation checkpoints. The two-branch testing strategy includes a decision tree for what goes to production vs. test-only. The A/B comparison step provides explicit validation code (a hedged sketch of this kind of check follows this table). Stopping conditions are clearly defined. The failure reporting protocol provides a feedback loop for irrecoverable failures. | 3 / 3 |
| Progressive Disclosure | The skill references five detailed reference guides (compatibility-checks.md, streaming-migration.md, networking-and-security.md, code-patterns.md, configuration-guide.md) with clear navigation links, which is good structure. However, no bundle files were provided, so these references cannot be verified. The main SKILL.md itself is quite long and includes substantial inline detail (full pattern tables, extensive code examples) that could arguably be pushed to the reference files, making the overview leaner. | 2 / 3 |
| Total | | 10 / 12 Passed |
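The A/B comparison credited in the Workflow Clarity row is not reproduced in this review, so the following is a minimal sketch of what such a check could look like, assuming the classic and serverless runs write their outputs to two Unity Catalog tables. The table names and session setup are assumptions for illustration, not the skill's actual code.

```python
# Hypothetical A/B validation: diff the output of a job run on classic
# compute against the same job run on serverless compute.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed table names; substitute the real migration-test outputs.
classic_df = spark.table("main.migration_test.results_classic")
serverless_df = spark.table("main.migration_test.results_serverless")

# exceptAll keeps duplicate rows significant, so the diff is row-for-row.
only_in_classic = classic_df.exceptAll(serverless_df)
only_in_serverless = serverless_df.exceptAll(classic_df)

if only_in_classic.isEmpty() and only_in_serverless.isEmpty():
    print("A/B check passed: outputs match row-for-row")
else:
    raise AssertionError(
        f"Mismatch: {only_in_classic.count()} rows only in classic, "
        f"{only_in_serverless.count()} rows only in serverless"
    )
```

Using `exceptAll` rather than `subtract` matters here because `subtract` deduplicates, which would hide row-count drift between the two runs.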
Validation
81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (648 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |