
databricks-spark-structured-streaming

Comprehensive guide to Spark Structured Streaming for production workloads. Use when building streaming pipelines, working with Kafka ingestion, implementing Real-Time Mode (RTM), configuring triggers (processingTime, availableNow), handling stateful operations with watermarks, optimizing checkpoints, performing stream-stream or stream-static joins, writing to multiple sinks, or tuning streaming cost and performance.
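The trigger and watermark features named in the description can be sketched as follows. This is a hedged illustration only, not code from the skill itself: the broker address, topic name, and Unity Catalog volume paths are placeholders, and it assumes a Databricks or Spark 3.3+ runtime where `trigger(availableNow=True)` is available.

```python
# Hypothetical sketch of a Kafka source with a watermark and the two
# trigger modes mentioned above; all names and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Watermark bounds state retention for late-arriving records.
    .withWatermark("timestamp", "10 minutes")
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/Volumes/main/default/ckpt/events")
    # processingTime runs a micro-batch on a fixed cadence;
    # .trigger(availableNow=True) instead drains the backlog and stops.
    .trigger(processingTime="30 seconds")
    .start("/Volumes/main/default/tables/events")
)
```

The checkpoint location points at a Unity Catalog volume rather than DBFS, in line with the production checklist the review quotes later.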

95

Quality: 93% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its scope (Spark Structured Streaming for production workloads), lists numerous specific capabilities, and provides an explicit 'Use when...' clause with rich, natural trigger terms. It uses proper third-person voice and is both comprehensive and distinctive, making it easy for Claude to select appropriately from a large skill set.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: building streaming pipelines, Kafka ingestion, implementing Real-Time Mode, configuring triggers (with specific types), handling stateful operations with watermarks, optimizing checkpoints, performing stream-stream/stream-static joins, writing to multiple sinks, and tuning cost/performance. | 3 / 3 |
| Completeness | Clearly answers both 'what' (a comprehensive guide to Spark Structured Streaming for production workloads) and 'when' (an explicit 'Use when...' clause listing numerous specific trigger scenarios). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a user would say: 'streaming pipelines', 'Kafka', 'Real-Time Mode', 'RTM', 'triggers', 'processingTime', 'availableNow', 'watermarks', 'checkpoints', 'stream-stream joins', 'stream-static joins', 'sinks', 'streaming cost and performance'. These are terms practitioners naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Spark Structured Streaming specifically. The domain-specific terminology (Kafka, watermarks, checkpoints, processingTime, availableNow, RTM) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured skill that excels at progressive disclosure and conciseness, serving as an effective hub document for Spark Structured Streaming. The quick-start code is immediately actionable and the production checklist adds concrete value. The main weakness is the lack of explicit workflow sequencing with validation steps for setting up or modifying streaming pipelines.

Suggestions

Consider adding a brief numbered workflow for setting up a new streaming pipeline (e.g., 1. Define schema, 2. Configure source, 3. Test with trigger(once=True), 4. Validate output, 5. Switch to production trigger) to improve workflow clarity.
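The suggested workflow could be sketched roughly as below. This is a hypothetical illustration of the suggestion, not content from the skill: schema fields, topic names, and paths are invented, and `trigger(once=True)` is used because the suggestion names it (on newer Spark runtimes, `trigger(availableNow=True)` is the preferred equivalent).

```python
# Hedged sketch of the proposed 5-step setup workflow; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# 1. Define the schema explicitly instead of relying on inference.
schema = StructType([
    StructField("id", StringType()),
    StructField("event_time", TimestampType()),
])

# 2. Configure the source.
src = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# 3. Test with a one-shot trigger so the run drains available data and stops.
q = (
    src.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/ckpt/smoke-test")
    .trigger(once=True)
    .start("/tmp/tables/smoke-test")
)
q.awaitTermination()

# 4. Validate the output before going live.
spark.read.format("delta").load("/tmp/tables/smoke-test").show()

# 5. Switch to the production trigger, e.g. .trigger(processingTime="1 minute").
```

Each step gives a natural checkpoint to validate before proceeding, which is exactly the validate-then-proceed sequencing the Workflow Clarity dimension flags as missing.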

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is lean and efficient. It provides a quick-start code example without explaining what Kafka or Spark Streaming is, assumes Claude's competence, and uses tables for navigation rather than verbose prose. Every section earns its place. | 3 / 3 |
| Actionability | The quick-start example is fully executable, copy-paste ready Python code showing a complete Kafka-to-Delta pipeline. The production checklist provides specific, concrete guidance (e.g., 'UC volumes, not DBFS', 'fixed-size cluster, no autoscaling'). | 3 / 3 |
| Workflow Clarity | The production checklist provides validation checkpoints, but the skill itself doesn't define a clear multi-step workflow with sequencing and feedback loops. For a streaming pipeline skill involving potentially destructive operations (checkpoints, merges), there's no explicit validate-then-proceed sequence in the main content. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure with a concise overview, quick-start code, and well-organized tables pointing to one-level-deep references for each detailed topic (kafka-streaming.md, stream-stream-joins.md, etc.). Navigation is clear and well-signaled. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: databricks-solutions/ai-dev-kit (Reviewed)
