
databricks-sdk-patterns

Apply production-ready Databricks SDK patterns for Python and REST API. Use when implementing Databricks integrations, refactoring SDK usage, or establishing team coding standards for Databricks. Trigger with phrases like "databricks SDK patterns", "databricks best practices", "databricks code patterns", "idiomatic databricks".

80

Quality

77%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-sdk-patterns/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description with strong trigger terms and clear 'when to use' guidance. Its main weakness is that the capabilities described are somewhat high-level—it says 'apply production-ready patterns' but doesn't enumerate what specific patterns or actions are covered (e.g., authentication, job management, cluster configuration). Overall it's a solid description that would perform well in skill selection.

Suggestions

Add 2-3 concrete actions to improve specificity, e.g., 'Covers authentication, workspace client setup, job submission, cluster management, and error handling patterns.'

Dimension / Reasoning / Score

Specificity

It names the domain (Databricks SDK, Python, REST API) and mentions some actions ('implementing integrations', 'refactoring SDK usage', 'establishing team coding standards'), but these are fairly high-level and don't list concrete specific actions like 'configure clusters', 'manage jobs', 'authenticate with tokens', etc.

2 / 3

Completeness

Clearly answers both 'what' (apply production-ready Databricks SDK patterns for Python and REST API) and 'when' (implementing integrations, refactoring SDK usage, establishing coding standards) with explicit trigger phrases provided.

3 / 3

Trigger Term Quality

Includes explicit trigger phrases like 'databricks SDK patterns', 'databricks best practices', 'databricks code patterns', 'idiomatic databricks', plus natural terms like 'Databricks integrations' and 'REST API'. These are terms users would naturally use when seeking this kind of help.

3 / 3

Distinctiveness / Conflict Risk

The skill is clearly scoped to Databricks SDK patterns specifically, with distinct trigger terms that are unlikely to conflict with general Python, general API, or other cloud platform skills.

3 / 3

Total: 11 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, highly actionable skill with production-ready Databricks SDK patterns featuring real imports, typed exceptions, and executable code. Its main weaknesses are the monolithic structure (all patterns inline rather than split across files) and the lack of explicit validation checkpoints between steps. The content could be tightened slightly by removing explanatory sentences that Claude wouldn't need.

Suggestions

Add validation checkpoints such as 'verify client connectivity with w.current_user.me() before proceeding' between steps, especially before cluster creation or job submission.

Consider splitting the detailed patterns (error handling, job builder, cluster lifecycle) into separate referenced files, keeping SKILL.md as a concise overview with quick-start examples and links to each pattern.
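The connectivity checkpoint named in the first suggestion can be sketched generically. With the real SDK the call would be `w.current_user.me()`, which raises a `DatabricksError` subclass when auth or networking is broken; here a hypothetical `StubClient` stands in so the pattern is runnable without a workspace:

```python
from dataclasses import dataclass


@dataclass
class StubCurrentUser:
    """Stand-in for WorkspaceClient.current_user; the real call hits the API."""
    user_name: str = "dev@example.com"

    def me(self):
        return self


class StubClient:
    """Hypothetical placeholder for databricks.sdk.WorkspaceClient."""
    current_user = StubCurrentUser()


def verify_connectivity(client) -> str:
    """Fail fast before expensive operations (cluster create, job submit).

    With the real SDK this would be `client.current_user.me()`, so a bad
    token or unreachable host surfaces here instead of mid-workflow.
    """
    me = client.current_user.me()
    return me.user_name


user = verify_connectivity(StubClient())
print(f"connected as {user}")  # proceed to cluster creation only after this succeeds
```

Placing this call at the top of each step keeps the failure mode cheap: a misconfigured profile fails in milliseconds rather than after a cluster has started billing.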

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient with real executable code, but includes some unnecessary commentary (e.g., 'Each WorkspaceClient holds an HTTP session and re-authenticates' is somewhat explanatory). The Overview section restates what the steps already show. The Result dataclass pattern is quite verbose for what it accomplishes, though it does serve a real purpose.

2 / 3
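For readers unfamiliar with the Result-wrapper style the review refers to, a minimal sketch (hypothetical names, not the skill's actual code) looks like the following; a real implementation would catch the SDK's typed `DatabricksError` subclasses rather than bare `Exception`:

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass
class Result(Generic[T]):
    """Wrap an SDK call's outcome so callers branch on .ok instead of try/except."""
    ok: bool
    value: Optional[T] = None
    error: Optional[str] = None

    @classmethod
    def success(cls, value: T) -> "Result[T]":
        return cls(ok=True, value=value)

    @classmethod
    def failure(cls, error: str) -> "Result[T]":
        return cls(ok=False, error=error)


def safe_call(fn, *args, **kwargs) -> Result:
    """Run a call and capture failure; swap Exception for typed SDK errors in real code."""
    try:
        return Result.success(fn(*args, **kwargs))
    except Exception as exc:  # placeholder for DatabricksError subclasses
        return Result.failure(str(exc))


r = safe_call(lambda: 42)
print(r.ok, r.value)  # True 42
```

The review's verbosity point stands: the wrapper earns its keep only when many call sites share the same branch-on-failure logic.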

Actionability

Every pattern includes fully executable, copy-paste-ready Python code with real SDK imports, typed dataclasses, and concrete usage examples. The code uses actual SDK exception classes, real API shapes, and specific method calls rather than pseudocode.

3 / 3

Workflow Clarity

Steps are clearly numbered and sequenced, but they are independent patterns rather than a connected workflow. There are no explicit validation checkpoints between steps (e.g., no 'verify client connectivity before proceeding' or 'validate job definition before submitting'). The cluster lifecycle context manager has implicit cleanup but no error recovery feedback loop for the job submission within it.

2 / 3
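The cluster-lifecycle context manager with implicit cleanup can be sketched as follows. `FakeClusters` is a hypothetical stand-in for the SDK's clusters API; the real teardown call would be something like `w.clusters.permanently_delete(...)`:

```python
from contextlib import contextmanager


class FakeClusters:
    """Stand-in for WorkspaceClient.clusters so the sketch runs offline."""

    def __init__(self):
        self.active = set()

    def create(self, cluster_name: str) -> str:
        cluster_id = f"id-{cluster_name}"
        self.active.add(cluster_id)
        return cluster_id

    def permanently_delete(self, cluster_id: str) -> None:
        self.active.discard(cluster_id)


@contextmanager
def ephemeral_cluster(clusters, name: str):
    """Guarantee teardown even when the job submitted inside the block fails."""
    cluster_id = clusters.create(name)
    try:
        yield cluster_id
    finally:
        clusters.permanently_delete(cluster_id)


clusters = FakeClusters()
try:
    with ephemeral_cluster(clusters, "etl") as cid:
        raise RuntimeError("job failed")  # simulated job error inside the block
except RuntimeError:
    pass
print(clusters.active)  # set() - cluster cleaned up despite the failure
```

As the review notes, the `finally` block covers cleanup but not recovery: retrying or surfacing the job error would still need explicit handling inside the `with` block.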

Progressive Disclosure

The content is well-structured with clear sections and a useful error handling reference table, but it's quite long (~200+ lines of code) with all patterns inline. The patterns for error handling, job building, and cluster management could be split into separate reference files. References to external resources and next steps are present but the skill itself is monolithic.

2 / 3

Total: 9 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository
jeremylongshore/claude-code-plugins-plus-skills
Reviewed

