
databricks-sdk-patterns

Apply production-ready Databricks SDK patterns for Python and REST API. Use when implementing Databricks integrations, refactoring SDK usage, or establishing team coding standards for Databricks. Trigger with phrases like "databricks SDK patterns", "databricks best practices", "databricks code patterns", "idiomatic databricks".

80

Quality

77%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-sdk-patterns/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description with explicit 'Use when' and 'Trigger with' clauses, strong distinctiveness due to the Databricks niche, and good trigger term coverage. Its main weakness is that the core capabilities are described at a pattern/practice level rather than listing specific concrete actions (e.g., authentication setup, job submission, cluster management), which makes the 'what it does' slightly abstract.

Suggestions

Add 2-3 specific concrete actions to improve specificity, e.g., 'configure authentication, manage workspace clients, handle pagination, submit jobs' rather than the abstract 'apply production-ready patterns'.
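To make one of those suggested concrete actions tangible, "handle pagination" in the Databricks REST style might look like the following sketch. The `.items` / `.next_page_token` attribute names are an assumption about a common token-paginated endpoint shape, not something taken from the skill itself; exact field names vary by endpoint.

```python
from types import SimpleNamespace

def paginate(list_fn, *, page_size=100, **kwargs):
    """Yield items across pages from a token-paginated list endpoint.

    Assumes the endpoint returns objects exposing `.items` and
    `.next_page_token` -- a common Databricks REST shape; adjust the
    attribute names per API.
    """
    token = None
    while True:
        page = list_fn(page_token=token, max_results=page_size, **kwargs)
        yield from (page.items or [])
        token = getattr(page, "next_page_token", None)
        if not token:
            break

# Usage against a stand-in endpoint (real code would wrap an SDK/REST call):
_pages = {
    None: SimpleNamespace(items=[1, 2], next_page_token="t1"),
    "t1": SimpleNamespace(items=[3], next_page_token=None),
}

def _fake_list(page_token=None, max_results=100):
    return _pages[page_token]

all_items = list(paginate(_fake_list, page_size=2))  # [1, 2, 3]
```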

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Databricks SDK, Python, REST API) and mentions some actions ('implementing integrations', 'refactoring SDK usage', 'establishing team coding standards'), but the core capability 'Apply production-ready patterns' is somewhat vague: it doesn't list specific concrete actions like 'configure authentication', 'manage clusters', or 'submit jobs'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (apply production-ready Databricks SDK patterns for Python and REST API) and 'when' (implementing integrations, refactoring SDK usage, establishing coding standards) with explicit trigger phrases listed. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'databricks SDK patterns', 'databricks best practices', 'databricks code patterns', 'idiomatic databricks', plus mentions of 'Databricks integrations' and 'REST API'. These cover a good range of phrases a user would naturally say when seeking this kind of help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive: 'Databricks SDK' is a very specific niche. The trigger terms are all Databricks-specific and unlikely to conflict with general Python, API, or other cloud platform skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, actionable skill with production-ready, executable Databricks SDK patterns covering key use cases. Its main weaknesses are moderate verbosity (the error table duplicates code information, some explanatory text is unnecessary) and the lack of explicit validation checkpoints between steps. The content would benefit from splitting longer patterns into referenced files to improve progressive disclosure.

Suggestions

Remove the Error Handling table or the detailed exception handling in the code—having both is redundant and wastes tokens.

Add a validation step after Step 1 (e.g., 'Verify connection: `w.current_user.me()`') to establish a checkpoint before proceeding to more complex patterns.

Consider moving Steps 3-4 (cluster lifecycle and job builder) into a separate referenced file to reduce the main skill's length and improve progressive disclosure.
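The second and third suggestions above could take roughly this shape: a fail-fast connection checkpoint, and a context manager that guarantees cluster cleanup. The sketch below exercises both against stand-in fakes; `create_and_wait` and `permanent_delete` follow the databricks-sdk method names as I understand them, but treat the exact signatures as assumptions to verify against your SDK version.

```python
from contextlib import contextmanager
from types import SimpleNamespace

def verify_connection(w):
    """Checkpoint after client setup: fail fast if auth is broken.

    `w` is assumed to be a databricks.sdk.WorkspaceClient (or anything
    exposing current_user.me()).
    """
    return w.current_user.me().user_name

@contextmanager
def managed_cluster(w, **spec):
    """Create a cluster, yield its id, and always delete it on exit,
    even if the body raises."""
    cluster = w.clusters.create_and_wait(**spec)
    try:
        yield cluster.cluster_id
    finally:
        w.clusters.permanent_delete(cluster_id=cluster.cluster_id)

# Usage against stand-in fakes (real code would pass a WorkspaceClient):
class _FakeClusters:
    def __init__(self):
        self.deleted = []
    def create_and_wait(self, **spec):
        return SimpleNamespace(cluster_id="c-1")
    def permanent_delete(self, cluster_id):
        self.deleted.append(cluster_id)

_w = SimpleNamespace(
    clusters=_FakeClusters(),
    current_user=SimpleNamespace(me=lambda: SimpleNamespace(user_name="alice")),
)

user = verify_connection(_w)          # "alice"
with managed_cluster(_w, spark_version="x") as cid:
    pass                              # do work on cluster `cid`
deleted = _w.clusters.deleted         # ["c-1"]: cleanup always runs
```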

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with real executable code, but includes some unnecessary commentary (e.g., 'Each WorkspaceClient holds an HTTP session and re-authenticates') and the Overview section restates what the content already shows. The error handling table partially duplicates information already in the code. Some tightening is possible. | 2 / 3 |
| Actionability | Every pattern includes fully executable, copy-paste-ready Python code with real SDK imports, typed dataclasses, and concrete usage examples. The code uses actual SDK exception classes and real API shapes, and includes usage snippets after each pattern. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced, but they are independent patterns rather than a connected workflow. The cluster lifecycle context manager includes cleanup validation, but there are no explicit validation checkpoints between steps (e.g., verifying the client works before proceeding to error handling patterns). For the managed_cluster pattern involving destructive operations (cluster creation/deletion), the feedback loop is minimal. | 2 / 3 |
| Progressive Disclosure | The content is well-structured, with clear sections, a reference to external resources, and a next-steps pointer. However, at ~200 lines of inline code, some patterns (like the full job builder or error handling) could be split into referenced files. The skill is somewhat monolithic for its length. | 2 / 3 |
| Total | | 9 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

