
databricks-rate-limits

Implement Databricks API rate limiting, backoff, and idempotency patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Databricks. Trigger with phrases like "databricks rate limit", "databricks throttling", "databricks 429", "databricks retry", "databricks backoff".
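As context for the patterns the description names, here is a minimal sketch of retry-with-exponential-backoff-and-jitter for rate-limited calls. The `RateLimitError` class and the decorator name are illustrative stand-ins, not part of the Databricks SDK; in practice you would catch the SDK's own 429/throttling error.

```python
import random
import time
from functools import wraps


class RateLimitError(Exception):
    """Illustrative stand-in for an HTTP 429 (rate limit) error."""


def with_backoff(max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry the wrapped call on RateLimitError with exponential backoff and full jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise
                    # Full jitter: sleep a random fraction of the capped exponential delay.
                    delay = min(max_delay, base_delay * 2 ** attempt)
                    time.sleep(random.uniform(0, delay))
        return wrapper
    return decorator
```

Full jitter (random sleep up to the exponential cap) spreads retries from many clients apart, which matters more than the exact base delay.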


Quality: 82% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its scope (Databricks API rate limiting and retry patterns), provides explicit 'Use when' guidance, and includes highly specific trigger phrases. It follows third-person voice, is concise without being vague, and would be easily distinguishable from other skills in a large collection.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'rate limiting', 'backoff', and 'idempotency patterns' for the Databricks API. These are distinct, well-defined technical capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (implement rate limiting, backoff, and idempotency patterns) and 'when' (handling rate limit errors, implementing retry logic, optimizing API request throughput), with explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms, including 'databricks rate limit', 'databricks throttling', 'databricks 429', 'databricks retry', and 'databricks backoff': exactly what users would say when encountering these issues. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive; scoped specifically to Databricks API rate limiting and retry patterns. The combination of 'Databricks' with 'rate limit/throttling/429/retry/backoff' creates a clear niche unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with excellent executable code examples covering the full spectrum of Databricks rate limiting patterns. Its main weaknesses are moderate verbosity (some sections could be tightened) and missing validation/dry-run checkpoints for destructive batch operations like cluster deletion. The content would benefit from splitting detailed implementations into a reference file while keeping the SKILL.md as a concise overview.
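The RateLimiter class mentioned here is not reproduced on this page. As an illustration of what a client-side limiter along those lines typically looks like, the sketch below is a token bucket; the class and method names are assumptions, not the skill's actual code.

```python
import threading
import time


class RateLimiter:
    """Token bucket: allows `rate` calls per second with bursts up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill tokens in proportion to elapsed time, capped at capacity.
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)
```

Calling `limiter.acquire()` before each API request throttles the client proactively, so backoff on 429s becomes the fallback rather than the steady state.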

Suggestions

Add a dry-run/validation step before destructive batch operations like cluster cleanup (e.g., list targets first, confirm count, then delete) to improve workflow safety.

Consider moving the RateLimiter class and batch processing utilities into a separate reference file, keeping SKILL.md as a concise overview with links to detailed implementations.
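The dry-run suggestion above could take a shape like the following sketch. The `plan_cleanup`/`run_cleanup` names, the `is_stale` predicate, and the `delete_fn` callback are all hypothetical; in a real script `delete_fn` would wrap the Databricks SDK's cluster-delete call.

```python
def plan_cleanup(clusters, is_stale):
    """Dry-run step: return the deletion targets without deleting anything."""
    return [c for c in clusters if is_stale(c)]


def run_cleanup(clusters, is_stale, delete_fn, confirmed=False):
    """Delete only after the caller has reviewed the plan and passed confirmed=True."""
    targets = plan_cleanup(clusters, is_stale)
    print(f"Would delete {len(targets)} cluster(s): {[c['cluster_id'] for c in targets]}")
    if not confirmed:
        return targets  # dry run: report, do not delete
    for c in targets:
        delete_fn(c["cluster_id"])
    return targets
```

Separating the plan from the execution gives the operator a checkpoint: list the targets, confirm the count, then re-run with `confirmed=True`.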

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient, with executable code and useful tables, but includes some unnecessary elements: the 'Prerequisites' section (Claude knows what databricks-sdk is), the 'Understanding of async patterns' note, and the 'Output' section, which just restates what was already shown. The rate limit tiers table is valuable domain knowledge, but some print statements and comments could be trimmed. | 2 / 3 |
| Actionability | Excellent actionability: every section provides fully executable, copy-paste-ready Python code with proper imports, concrete examples, and real Databricks SDK usage patterns. The decorator, RateLimiter class, batch processor, and idempotent submission are all complete and immediately usable. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced, from understanding limits through backoff, rate limiting, batch processing, and idempotency. However, for bulk or destructive operations such as the cluster cleanup example, there is no validation checkpoint (e.g., dry-run first and confirm the count before deleting). The batch_run_jobs function also lacks proper error propagation from futures; the missing feedback loops for batch operations cap this at 2. | 2 / 3 |
| Progressive Disclosure | The content is well structured, with clear sections and a useful error handling table, but it is quite long (~180 lines of substantive content) and could benefit from splitting the detailed examples and the RateLimiter class into separate reference files. The 'Next Steps' reference to databricks-security-basics is good, but the main file tries to cover everything inline. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed
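On the missing error propagation from futures noted under Workflow Clarity: calling `future.result()` re-raises any exception raised inside the worker, so a batch runner can surface per-item failures instead of silently dropping them. The sketch below is illustrative and is not the skill's actual `batch_run_jobs` function.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def batch_run(items, worker, max_workers=4):
    """Run worker over items concurrently; collect results and propagate per-item errors."""
    results, errors = {}, {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(worker, item): item for item in items}
        for future in as_completed(futures):
            item = futures[future]
            try:
                # result() re-raises any exception raised inside the worker.
                results[item] = future.result()
            except Exception as exc:
                errors[item] = exc
    return results, errors
```

Returning the errors alongside the results gives the caller the feedback loop the review asks for: it can retry, log, or abort per failed item.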

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

