Spark Job Creator - Auto-activating skill for Data Pipelines. Triggers on: spark job creator, spark job creator Part of the Data Pipelines skill category.
Overall score: 33

- Quality: 3% (does it follow best practices?)
- Impact: 82% (0.98x average score across 3 eval scenarios)
- Passed: no known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./planned-skills/generated/11-data-pipelines/spark-job-creator/SKILL.md`

Quality
Discovery: 7%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is severely underdeveloped, essentially just restating the skill name without explaining capabilities or usage triggers. It provides no actionable information for Claude to determine when to select this skill over others. The redundant trigger terms and missing concrete actions make this description nearly useless for skill selection.
Suggestions
- Add specific, concrete actions like 'Creates PySpark job configurations, generates Spark submit scripts, configures executor and memory settings, builds data transformation pipelines'
- Add a 'Use when...' clause with natural trigger terms: 'Use when the user mentions Spark jobs, PySpark, distributed data processing, Spark submit, or needs to create ETL pipelines for big data'
- Remove the redundant trigger term and expand with variations users would naturally say: 'spark script', 'PySpark code', 'Spark configuration', 'big data job'
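To make the suggestions above concrete, the rewritten description could be assembled mechanically: state a summary, list concrete actions, then append a 'Use when...' clause built from deduplicated trigger terms. The sketch below is illustrative only; the summary, actions, and trigger terms are assumptions, not text from the skill itself.

```python
def build_description(summary, actions, triggers):
    """Compose a skill description: concrete actions plus a 'Use when...' clause.

    Trigger terms are deduplicated case-insensitively so redundant entries
    like 'spark job creator, spark job creator' collapse to one.
    """
    seen, unique = set(), []
    for term in triggers:
        key = term.lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append(term)
    return (f"{summary} {', '.join(actions)}. "
            f"Use when the user mentions {', '.join(unique)}.")


# Hypothetical inputs for this skill:
desc = build_description(
    "Creates and configures Apache Spark batch jobs.",
    ["Generates PySpark scripts", "builds spark-submit commands",
     "tunes executor and memory settings"],
    ["Spark jobs", "PySpark", "spark jobs", "ETL pipeline", "big data job"],
)
print(desc)
```

Note that the duplicate "spark jobs" trigger is dropped, directly addressing the redundancy called out in the Trigger Term Quality row below.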
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only states 'Spark Job Creator' without describing any concrete actions. There are no specific capabilities listed like 'creates Spark jobs', 'configures executors', or 'generates PySpark scripts'. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and provides no explicit 'when to use' guidance. The 'Triggers on' field just repeats the skill name rather than providing meaningful trigger scenarios. | 1 / 3 |
| Trigger Term Quality | The trigger terms are redundant ('spark job creator, spark job creator') and lack natural variations users would say like 'PySpark', 'Spark script', 'data processing job', 'ETL pipeline', or 'distributed computing'. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'Spark' is somewhat specific to Apache Spark, the generic 'Data Pipelines' category and lack of detail could cause overlap with other ETL, data processing, or pipeline-related skills. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation: 0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is entirely meta-descriptive boilerplate with no actual instructional value. It describes what a Spark job creator skill would do without providing any concrete guidance, code examples, or actionable steps for creating Spark jobs. The content fails on all dimensions as it contains no executable information.
Suggestions
- Add concrete, executable PySpark code examples showing how to create a basic Spark job (e.g., SparkSession initialization, DataFrame operations, job submission)
- Include a clear workflow with numbered steps: environment setup, job configuration, testing locally, deployment to cluster, and validation
- Provide specific configuration examples (spark-submit commands, cluster configs, resource allocation settings) rather than vague claims about 'production-ready configurations'
- Reference external files for advanced topics (e.g., STREAMING.md for Spark Streaming, OPTIMIZATION.md for performance tuning) to enable progressive disclosure
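As one sketch of the "specific configuration examples" the suggestions ask for, a skill could show how a spark-submit invocation is assembled from explicit resource settings rather than claiming 'production-ready configurations'. Every flag value below is an illustrative default, not a recommendation from the skill under review; only standard spark-submit flags (`--master`, `--deploy-mode`, `--driver-memory`, `--executor-memory`, `--executor-cores`, `--num-executors`, `--conf`) are used.

```python
def build_spark_submit(app_path, master="yarn", deploy_mode="cluster",
                       driver_memory="2g", executor_memory="4g",
                       executor_cores=2, num_executors=10, extra_conf=None):
    """Assemble a spark-submit command line from explicit resource settings.

    Returns the command as an argument list (suitable for subprocess.run).
    All defaults are illustrative; tune them to the actual cluster.
    """
    cmd = [
        "spark-submit",
        "--master", master,
        "--deploy-mode", deploy_mode,
        "--driver-memory", driver_memory,
        "--executor-memory", executor_memory,
        "--executor-cores", str(executor_cores),
        "--num-executors", str(num_executors),
    ]
    # Arbitrary Spark properties go through repeated --conf key=value flags.
    for key, value in (extra_conf or {}).items():
        cmd += ["--conf", f"{key}={value}"]
    cmd.append(app_path)
    return cmd


# Hypothetical job path and shuffle setting, shown for illustration:
cmd = build_spark_submit("jobs/etl.py",
                         extra_conf={"spark.sql.shuffle.partitions": "200"})
print(" ".join(cmd))
```

Emitting the command as a list (rather than one string) avoids shell-quoting bugs when the job is launched programmatically.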
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic boilerplate that provides no actual value. Phrases like 'provides automated assistance' and 'follows industry best practices' are vague filler that Claude already understands conceptually. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code examples, no specific commands, no actual instructions for creating Spark jobs. The content only describes what the skill claims to do without showing how to do anything. | 1 / 3 |
| Workflow Clarity | No workflow is defined. Despite claiming to provide 'step-by-step guidance', there are no actual steps, sequences, or validation checkpoints for creating Spark jobs. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of meta-description with no references to detailed materials, no links to examples, and no structured navigation to actual implementation guidance. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
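The two warnings above amount to simple set checks against the spec. A minimal sketch of such a validator follows; the known key names and tool names are assumptions for illustration, not the actual validation spec these checks run against.

```python
# Assumed-for-illustration allowlists; the real spec may differ.
KNOWN_KEYS = {"name", "description", "allowed-tools", "metadata"}
KNOWN_TOOLS = {"Read", "Write", "Bash", "Grep"}


def validate_frontmatter(fm):
    """Return warning strings mirroring the two checks flagged above.

    fm is the parsed SKILL.md frontmatter as a dict.
    """
    warnings = []
    unknown = sorted(set(fm) - KNOWN_KEYS)
    if unknown:
        warnings.append(f"Unknown frontmatter key(s) found: {unknown}")
    unusual = [t for t in fm.get("allowed-tools", []) if t not in KNOWN_TOOLS]
    if unusual:
        warnings.append(f"'allowed-tools' contains unusual tool name(s): {unusual}")
    return warnings
```

Moving unrecognized keys under `metadata`, as the warning suggests, clears the first check because `metadata` itself is a known key.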
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.