Implement Databricks reference architecture with best-practice project layout. Use when designing new Databricks projects, reviewing architecture, or establishing standards for Databricks applications. Trigger with phrases like "databricks architecture", "databricks best practices", "databricks project structure", "how to organize databricks", "databricks layout".
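For concreteness, a best-practice project layout of the kind this skill describes might look like the sketch below. Directory and file names are illustrative assumptions; only src/ingestion/bronze_raw_events.py is mentioned elsewhere in this review:

```
databricks-project/
├── databricks.yml          # Asset Bundle configuration (targets, variables)
├── resources/              # job, pipeline, and cluster definitions (YAML)
├── src/
│   ├── ingestion/
│   │   └── bronze_raw_events.py
│   ├── transformations/    # silver/gold medallion logic
│   └── maintenance/        # OPTIMIZE / VACUUM scripts
└── tests/
```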
Overall: 80
Quality: 77% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Issues: Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-reference-architecture/SKILL.md

Quality
Discovery
89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with excellent trigger terms and completeness, clearly specifying both when and what. Its main weakness is that the 'what' could be more specific about the concrete actions or artifacts produced (e.g., folder structures, module templates, configuration patterns). Overall it would perform well in skill selection among a large set of skills.
Suggestions
Add more specific concrete actions to the 'what' portion, e.g., 'Generates folder structures, notebook organization, CI/CD pipeline configs, and module templates for Databricks projects.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (Databricks) and mentions 'reference architecture' and 'best-practice project layout', but doesn't list multiple concrete actions like specific deliverables (e.g., folder structures, config files, CI/CD pipelines, notebook organization). | 2 / 3 |
| Completeness | Clearly answers both 'what' (implement Databricks reference architecture with best-practice project layout) and 'when' (designing new projects, reviewing architecture, establishing standards) with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms: 'databricks architecture', 'databricks best practices', 'databricks project structure', 'how to organize databricks', 'databricks layout' — these are phrases users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — the Databricks-specific focus with architecture/layout triggers creates a clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable reference architecture skill with excellent executable code examples covering the full Databricks stack. Its main weaknesses are the lack of validation checkpoints between workflow steps (important for infrastructure setup) and the monolithic structure that inlines all content rather than leveraging progressive disclosure. Some token savings could be achieved by trimming the ASCII diagram and prerequisites.
Suggestions
Add explicit validation checkpoints between steps, e.g., 'Verify catalog creation: SHOW SCHEMAS IN prod_catalog' after Step 1, and 'databricks bundle validate' after Step 2.
Split detailed code examples (pipeline code, maintenance scripts, job YAML) into referenced bundle files and keep only concise summaries in SKILL.md.
Remove or condense the prerequisites section—Claude already understands medallion architecture and CLI tooling concepts.
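The first suggestion above can be sketched as a small checkpoint runner. This is a minimal illustration, assuming a Databricks CLI on PATH; the command list and catalog name are placeholders, not part of the reviewed skill:

```python
import subprocess

# Checkpoints to run between setup steps; each is a CLI command that must
# exit 0 before the workflow proceeds. The databricks command here is
# illustrative (e.g., `databricks bundle validate` after Step 2).
CHECKPOINTS = [
    ["databricks", "bundle", "validate"],
]

def run_checkpoints(commands):
    """Run each checkpoint command; return (ok, first_failing_command)."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, cmd
    return True, None
```

A Step 1 check such as `SHOW SCHEMAS IN prod_catalog` would run in a SQL warehouse or notebook rather than through this runner.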
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary verbosity—the ASCII architecture diagram, while visually appealing, is large and the information could be conveyed more compactly. The prerequisites section explains things Claude already knows. However, most content is substantive code/config examples that earn their place. | 2 / 3 |
| Actionability | Excellent actionability with fully executable SQL, YAML, and Python code throughout. The Unity Catalog setup, Asset Bundle configuration, medallion pipeline code, and maintenance scripts are all copy-paste ready with specific values and realistic patterns. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (Steps 1-5), but there are no explicit validation checkpoints between steps. For a multi-step architecture setup involving catalog creation, permissions, and pipeline deployment, there should be verification steps (e.g., confirm catalog exists, validate bundle config before deploying, test pipeline output). The error handling table is helpful but reactive rather than integrated into the workflow. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic document with all details inline—the full pipeline code, maintenance scripts, job configs, and SQL are all embedded rather than split into referenced files. The project structure suggests separate files exist (e.g., src/ingestion/bronze_raw_events.py) but the skill inlines their content rather than referencing them. External links to Databricks docs are provided but internal progressive disclosure is lacking. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
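The progressive-disclosure point can be illustrated with a hypothetical slimmed-down SKILL.md step that references bundle files instead of inlining them (the heading text and maintenance path are illustrative; only the bronze file path appears in this review):

```markdown
## Step 3: Medallion pipeline

Bronze ingestion lives in [src/ingestion/bronze_raw_events.py](src/ingestion/bronze_raw_events.py);
read it when you need the full ingestion configuration. Maintenance
scripts (OPTIMIZE / VACUUM) live under [src/maintenance/](src/maintenance/).
```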
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
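The two warnings could be addressed with frontmatter along these lines. This is a sketch, assuming a spec that recognizes `allowed-tools` and a `metadata` block; the tool names and metadata keys shown are illustrative:

```yaml
name: databricks-reference-architecture
description: Implement Databricks reference architecture with best-practice project layout.
allowed-tools: Read, Write, Bash   # keep only tool names the spec recognizes
metadata:
  author: example-team             # move unknown top-level keys under metadata
```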