
databricks-ai-functions

Use Databricks built-in AI Functions (ai_classify, ai_extract, ai_summarize, ai_mask, ai_translate, ai_fix_grammar, ai_gen, ai_analyze_sentiment, ai_similarity, ai_parse_document, ai_query, ai_forecast) to add AI capabilities directly to SQL and PySpark pipelines without managing model endpoints. Also covers document parsing and building custom RAG pipelines (parse → chunk → index → query).
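For context, a minimal example of the style of usage the skill covers. The query is a sketch; the table and column names (`support_tickets`, `body`) are illustrative, while `ai_classify` and `ai_summarize` are real Databricks SQL functions:

```sql
-- Classify and summarize rows directly in SQL; no model endpoint to manage.
-- Table and column names are illustrative.
SELECT
  ticket_id,
  ai_classify(body, ARRAY('billing', 'bug', 'feature_request')) AS category,
  ai_summarize(body, 50) AS summary
FROM support_tickets;
```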

Score: 82

Quality: 77% — Does it follow best practices?

Impact: Pending — No eval scenarios have been run

Security (by Snyk): Passed — No known issues


Quality

Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity, listing all 12 Databricks AI functions by name and covering concrete use cases including RAG pipeline construction. The main weakness is the absence of an explicit 'Use when...' clause, which means Claude must infer when to select this skill rather than having clear trigger guidance. The Databricks-specific context makes it highly distinctive.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Databricks AI Functions, adding AI capabilities to SQL or PySpark queries, or building RAG pipelines in Databricks.'
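Applied to this skill's metadata, the suggestion might look like the following. This is a sketch only; the exact frontmatter fields depend on the skill format, and the truncated function list is illustrative:

```markdown
---
name: databricks-ai-functions
description: >
  Use Databricks built-in AI Functions (ai_classify, ai_extract,
  ai_summarize, and others) to add AI capabilities directly to SQL and
  PySpark pipelines without managing model endpoints. Use when the user
  asks about Databricks AI Functions, adding AI to SQL or PySpark
  queries, or building RAG pipelines in Databricks.
---
```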

Specificity — 3 / 3

The description lists 12 specific AI functions by name (ai_classify, ai_extract, ai_summarize, etc.) and describes concrete use cases like adding AI to SQL/PySpark pipelines, document parsing, and building custom RAG pipelines with a clear workflow (parse → chunk → index → query).

Completeness — 2 / 3

The 'what' is thoroughly covered with specific functions and use cases, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. The 'when' is only implied by the capabilities listed.

Trigger Term Quality — 3 / 3

Includes highly natural trigger terms users would say: 'Databricks', 'AI Functions', specific function names like 'ai_classify' and 'ai_summarize', 'SQL', 'PySpark', 'RAG pipelines', 'document parsing', and 'model endpoints'. These cover a wide range of natural user queries about Databricks AI capabilities.

Distinctiveness / Conflict Risk — 3 / 3

Highly distinctive due to the specific Databricks platform context, named AI functions, and the SQL/PySpark pipeline focus. This is unlikely to conflict with generic AI, SQL, or document processing skills because of the clear Databricks niche.

Total: 11 / 12 — Passed

Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill that excels in actionability with executable code examples across multiple patterns and in progressive disclosure with clear references to detailed sub-files. The main weaknesses are slight verbosity in the overview section and missing explicit validation/error-recovery steps in the multi-step workflows, particularly for batch document processing and ai_query error handling.

Suggestions

Add explicit validation checkpoints to Pattern 3 (document ingestion) and Pattern 5 (ai_query), such as checking row counts after filtering, logging parse errors, and handling the ai_query error struct.
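A validation checkpoint of the kind this suggestion describes could look like the following in Databricks SQL. This is a sketch under assumptions: it presumes a `parsed_documents` table with `path` and `parse_error` columns, as the review's description of Pattern 3 implies:

```sql
-- Checkpoint after document parsing: count failures before continuing,
-- rather than silently filtering them out.
-- Table and column names are illustrative.
SELECT
  COUNT(*) AS total_docs,
  COUNT_IF(parse_error IS NOT NULL) AS failed_docs
FROM parsed_documents;

-- Surface the failures for logging or triage:
SELECT path, parse_error
FROM parsed_documents
WHERE parse_error IS NOT NULL;
```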

Trim the overview paragraph to remove phrases Claude doesn't need (e.g., 'no model endpoint setup, no API keys, no boilerplate') and the analogy to UPPER()/LENGTH().

Conciseness — 2 / 3

The overview section explaining what AI Functions are and the three categories is somewhat verbose for Claude (e.g., 'no model endpoint setup, no API keys, no boilerplate' and 'as naturally as UPPER() or LENGTH()'). However, the function selection tables and decision logic add genuine value. The content could be tightened but isn't egregiously padded.

Actionability — 3 / 3

Every pattern includes fully executable SQL and PySpark code that is copy-paste ready. The quick start, six common patterns, and troubleshooting table all provide concrete, specific guidance with real function calls, parameters, and output handling.

Workflow Clarity — 2 / 3

The patterns show clear sequences (e.g., parse → filter → enrich in Pattern 3), but there are no explicit validation checkpoints or error recovery feedback loops. Pattern 3 filters on parse_error but doesn't guide what to do when errors occur. Pattern 5 uses failOnError but doesn't show how to handle the error struct. For batch/destructive operations this caps at 2.

Progressive Disclosure — 3 / 3

Excellent structure: the SKILL.md provides a concise overview, function selection guidance, quick start, and common patterns, then clearly signals four one-level-deep reference files with descriptive summaries of what each contains. Navigation is easy and references are well-organized.

Total: 10 / 12 — Passed

Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
