Databricks SQL (DBSQL) advanced features and SQL warehouse capabilities. This skill MUST be invoked when the user mentions: "DBSQL", "Databricks SQL", "SQL warehouse", "SQL scripting", "stored procedure", "CALL procedure", "materialized view", "CREATE MATERIALIZED VIEW", "pipe syntax", "|>", "geospatial", "H3", "ST_", "spatial SQL", "collation", "COLLATE", "ai_query", "ai_classify", "ai_extract", "ai_gen", "AI function", "http_request", "remote_query", "read_files", "Lakehouse Federation", "recursive CTE", "WITH RECURSIVE", "multi-statement transaction", "temp table", "temporary view", "pipe operator". SHOULD also invoke when the user asks about SQL best practices, data modeling patterns, or advanced SQL features on Databricks.
Overall score: 90
Impact: Pending. No eval scenarios have been run. Advisory: suggest reviewing before use.
Quality: 88%. Does it follow best practices?
Discovery: 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description excels at trigger term coverage and completeness, providing an exhaustive list of when the skill should be invoked. Its main weakness is that the 'what does this do' portion is vague—it says 'advanced features and SQL warehouse capabilities' without listing concrete actions the skill performs (e.g., 'writes stored procedures, creates materialized views, builds geospatial queries'). The description is heavily weighted toward selection triggers rather than capability description.
Suggestions
- Replace the vague opening 'advanced features and SQL warehouse capabilities' with specific concrete actions like 'Writes stored procedures, creates materialized views, builds geospatial queries, configures Lakehouse Federation, and uses AI SQL functions on Databricks.'
- Consider condensing the trigger term list slightly by grouping related terms (e.g., 'AI functions (ai_query, ai_classify, ai_extract, ai_gen)') to improve readability while retaining coverage.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (Databricks SQL) and mentions 'advanced features and SQL warehouse capabilities' but doesn't list concrete actions like 'create materialized views, execute stored procedures, run geospatial queries.' The trigger terms list specific features, but the opening statement is vague about what actions the skill performs. | 2 / 3 |
| Completeness | Clearly answers both 'what' (Databricks SQL advanced features and SQL warehouse capabilities) and 'when' with an extensive explicit 'MUST be invoked when' clause and a supplementary 'SHOULD also invoke when' clause. The when-triggers are exceptionally detailed. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would actually say, including abbreviations ('DBSQL'), full names ('Databricks SQL'), specific syntax ('|>', 'ST_', 'WITH RECURSIVE'), function names ('ai_query', 'ai_classify'), and conceptual terms ('materialized view', 'Lakehouse Federation'). Very comprehensive keyword coverage. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with Databricks-specific terminology ('DBSQL', 'SQL warehouse', 'Lakehouse Federation', 'ai_query') that clearly separates it from generic SQL skills. The specific function names and Databricks-branded terms make accidental conflicts with other skills very unlikely. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
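The trigger-term coverage the table praises amounts to keyword-based selection. As a rough illustration only, the matcher below is hypothetical (the skill's actual selection mechanism is not specified here), and the term list is a small subset of the description's triggers:

```python
# Hypothetical sketch of keyword-based skill selection over a trigger list.
# TRIGGERS is a subset of the terms in the skill description above.
TRIGGERS = [
    "dbsql", "databricks sql", "sql warehouse", "materialized view",
    "with recursive", "|>", "ai_query", "lakehouse federation",
]

def should_invoke(user_text: str) -> bool:
    """Return True if any trigger term appears in the user's request."""
    text = user_text.lower()
    return any(term in text for term in TRIGGERS)

print(should_invoke("How do I create a materialized view in DBSQL?"))  # True
print(should_invoke("Sort a Python list of tuples"))                   # False
```

Distinctive terms like 'DBSQL' and 'ai_query' make false positives unlikely even under matching this naive, which is why the distinctiveness row scores 3 / 3.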
Implementation: 87%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality skill file that excels at conciseness, actionability, and progressive disclosure. The quick reference table, executable examples, and clear navigation to sub-files make it very effective. The main weakness is the lack of explicit validation steps and feedback loops in multi-step workflows, though the Key Guidelines section partially compensates by mentioning MCP tools for testing.
Suggestions
- Add explicit validation checkpoints to multi-step workflows (e.g., after creating a stored procedure, show how to verify it exists and test it with a sample CALL before deploying)
- Integrate the MCP tool validation guidance from Key Guidelines directly into the workflow examples (e.g., 'Run `execute_sql` to verify the materialized view was created successfully before scheduling refreshes')
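The verify-then-proceed pattern from the first suggestion can be sketched as follows. This is a minimal local sketch, not the skill's own workflow: `execute_sql` is a mock standing in for the MCP tool, and the procedure DDL is illustrative rather than a tested Databricks statement:

```python
# Sketch of a validation checkpoint after creating a stored procedure.
# execute_sql is a MOCK standing in for a real warehouse call via the MCP tool.

CREATE_PROC = """
CREATE OR REPLACE PROCEDURE main.demo.double_it(x INT)
LANGUAGE SQL
AS BEGIN
  SELECT x * 2;
END
"""

def execute_sql(statement: str) -> list:
    # Mock: a real implementation would submit the statement to a SQL warehouse.
    if statement.strip().upper().startswith("CALL"):
        return [(84,)]  # simulate the procedure's result for x = 42
    return []

def deploy_with_checkpoint() -> bool:
    execute_sql(CREATE_PROC)                            # 1. create the procedure
    rows = execute_sql("CALL main.demo.double_it(42)")  # 2. smoke-test with a sample CALL
    return rows == [(84,)]                              # 3. proceed only if the test passes

print(deploy_with_checkpoint())  # True
```

Weaving a checkpoint like step 2 into each workflow example would close the gap the Workflow Clarity row identifies below.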
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is highly efficient: no unnecessary explanations of what SQL is, what Databricks is, or how basic concepts work. Every section jumps straight to syntax and executable examples. The quick reference table is an excellent use of space. | 3 / 3 |
| Actionability | Every feature is demonstrated with complete, copy-paste-ready SQL examples including realistic table references (catalog.schema.table), proper syntax, and invocation patterns. Examples cover error handling, parameters, and real-world use cases like upserts, hierarchy traversal, and API calls. | 3 / 3 |
| Workflow Clarity | While individual examples are clear and well-structured, there are no explicit validation checkpoints or feedback loops. For operations like stored procedures with error handling or materialized view creation, there's no 'verify then proceed' pattern. The Key Guidelines mention using MCP tools to test SQL, but this isn't woven into the workflows themselves. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure structure: a quick reference table links to detailed files, common patterns provide enough to get started, and the Reference Files table clearly signals when to read each sub-file with specific trigger conditions. All references are one level deep and well-signaled. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
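For the materialized-view case in the Workflow Clarity row, a checkpoint could look like the sketch below. Again `execute_sql` is a local mock, and the `information_schema.views` probe is an assumption about where the view would be listed, not a verified Databricks detail:

```python
# Sketch: verify a materialized view exists before scheduling refreshes.
# execute_sql is a MOCK; the information_schema query is an assumed probe.

def execute_sql(statement: str) -> list:
    # Mock warehouse: pretend the view was created and is listed.
    if "information_schema.views" in statement:
        return [("daily_sales",)]
    return []

def create_mv_with_checkpoint(catalog: str, schema: str, name: str) -> bool:
    execute_sql(f"""
        CREATE MATERIALIZED VIEW {catalog}.{schema}.{name} AS
        SELECT order_date, sum(amount) AS total
        FROM {catalog}.{schema}.orders
        GROUP BY order_date
    """)
    rows = execute_sql(
        f"SELECT table_name FROM {catalog}.information_schema.views "
        f"WHERE table_schema = '{schema}' AND table_name = '{name}'"
    )
    return any(r[0] == name for r in rows)  # proceed only if the view is listed

print(create_mv_with_checkpoint("main", "demo", "daily_sales"))  # True
```

The same create-probe-proceed shape applies to the other multi-step workflows the suggestions mention.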
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Version: b4071a0