
spark-optimization

Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or scaling data processing pipelines.

Quality: 71% (Does it follow best practices?)

Impact: 77%, 1.28x (Average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/data-engineering/skills/spark-optimization/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that concisely covers specific Spark optimization techniques, includes natural trigger terms users would employ, and clearly delineates both what the skill does and when to use it. The description is distinctive enough to avoid conflicts with other data engineering or general performance tuning skills.

Specificity: 3 / 3
Lists multiple specific concrete actions: partitioning, caching, shuffle optimization, and memory tuning. These are distinct, well-defined Spark optimization techniques.

Completeness: 3 / 3
Clearly answers both what ('Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning') and when ('Use when improving Spark performance, debugging slow jobs, or scaling data processing pipelines') with explicit trigger guidance.

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'Spark', 'partitioning', 'caching', 'shuffle', 'memory tuning', 'performance', 'slow jobs', 'data processing pipelines'. These cover common terms a user would use when seeking Spark optimization help.

Distinctiveness / Conflict Risk: 3 / 3
Clearly scoped to Apache Spark optimization specifically, with domain-specific triggers like 'Spark', 'shuffle optimization', 'partitioning' that are unlikely to conflict with general data processing or other big data skills.

Total: 12 / 12

Passed

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability with comprehensive, executable code examples covering all major Spark optimization areas. However, it is far too verbose for a SKILL.md—it reads more like a complete tutorial or reference guide than a concise skill file. The lack of progressive disclosure (everything crammed into one file) and missing diagnostic workflow (no clear 'diagnose → identify bottleneck → apply fix → verify' sequence) significantly reduce its effectiveness as a skill.

Suggestions

Reduce the SKILL.md to a concise overview with a diagnostic workflow (identify bottleneck → choose pattern → apply → verify), and move detailed patterns into separate files (e.g., JOINS.md, CACHING.md, MEMORY.md) linked from the overview.

Remove the 'Core Concepts' section entirely—Claude already understands Spark's execution model, what shuffles are, and basic performance factors.

Add an explicit diagnostic/verification workflow: e.g., '1. Check Spark UI for skew/spills 2. Identify bottleneck category 3. Apply relevant pattern 4. Re-run and compare stage metrics to verify improvement'.

Remove inline comments that restate obvious things (e.g., '# Cache in memory (MEMORY_AND_DISK is default)', '# Fast compression') and the storage levels explanation list.
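The diagnostic workflow these suggestions call for can be sketched in plain Python. The helpers below are hypothetical illustrations, not the skill's own code (its check_partition_skew may differ): one flags skewed partitions from a list of per-partition sizes (in PySpark, gathered with something like df.rdd.glom().map(len).collect()), and the other compares stage metrics read off the Spark UI before and after a fix.

```python
def check_partition_skew(partition_sizes, ratio_threshold=4.0):
    """Flag skew when the largest partition dwarfs the median one."""
    if not partition_sizes:
        return False
    ordered = sorted(partition_sizes)
    median = ordered[len(ordered) // 2]
    # Guard against empty partitions so the ratio is always defined
    return ordered[-1] / max(median, 1) >= ratio_threshold

def improvement_report(before, after):
    """Fractional improvement per metric (e.g. shuffle bytes, stage seconds)."""
    return {
        name: round(1 - after[name] / before[name], 2)
        for name in before
        if name in after and before[name]
    }
```

A run of the workflow would then read: collect partition sizes, call check_partition_skew, apply the relevant pattern if it fires, and confirm with improvement_report that shuffle volume or stage time actually dropped.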

Conciseness: 1 / 3
The skill is extremely verbose at over 350 lines. It explains basic Spark concepts Claude already knows (execution model, what shuffles are, storage level definitions), includes redundant tables of concepts, and has extensive inline comments that restate obvious things. The 'Core Concepts' section with the execution model diagram and key performance factors table adds little value for Claude.

Actionability: 3 / 3
The skill provides fully executable Python code throughout, with concrete configurations, specific function implementations (salt_join, calculate_partitions, check_partition_skew), and copy-paste ready spark-submit configurations. Every pattern includes working code examples.

Workflow Clarity: 2 / 3
While individual patterns are clear, there is no overarching workflow for diagnosing and fixing Spark performance issues. The patterns are presented as isolated techniques without a clear decision tree or sequence. There are no explicit validation checkpoints; after applying optimizations, there is no 'verify improvement by checking X metric' step.

Progressive Disclosure: 1 / 3
This is a monolithic wall of content with everything inline. Seven detailed patterns, a configuration cheat sheet, best practices, and resources are all in one file. The patterns (join optimization, caching, memory tuning, shuffle optimization, etc.) could each be separate reference files linked from a concise overview. External links at the bottom don't count as progressive disclosure of the skill's own content.

Total: 7 / 12

Passed
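The salt_join helper cited under Actionability belongs to the skill itself; as a hedged illustration of the underlying technique only, here is the salting idea in plain Python. Rows with a hot key on the large side get a random salt suffix, and the small side is replicated once per salt so every bucket still finds its match; in Spark this is typically done with concat and explode on DataFrame columns.

```python
import random

def salt_keys(rows, hot_keys, num_salts=8, seed=0):
    """Spread rows with skewed keys across num_salts buckets."""
    rng = random.Random(seed)
    return [
        (f"{key}#{rng.randrange(num_salts)}", value) if key in hot_keys
        else (key, value)
        for key, value in rows
    ]

def replicate_small_side(rows, hot_keys, num_salts=8):
    """Duplicate small-side rows once per salt bucket of each hot key."""
    out = []
    for key, value in rows:
        if key in hot_keys:
            out.extend((f"{key}#{i}", value) for i in range(num_salts))
        else:
            out.append((key, value))
    return out
```

Joining the salted large side against the replicated small side preserves the join result while spreading the hot key's rows over num_salts tasks instead of one.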

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: Dicklesworthstone/pi_agent_rust (reviewed)
