neo4j-spark-skill

Use when reading from or writing to Neo4j with Apache Spark or Databricks using the Neo4j Connector for Apache Spark (org.neo4j:neo4j-connector-apache-spark). Covers SparkSession setup, DataFrame reads via labels/Cypher/relationship scan, DataFrame writes with SaveMode, node.keys for MERGE, relationship write mapping, partition and batch tuning, PySpark and Scala examples, Databricks cluster config, Databricks secrets for credentials, Delta Lake to Neo4j pipelines. Does NOT handle Cypher authoring — use neo4j-cypher-skill. Does NOT handle the Python bolt driver — use neo4j-driver-python-skill. Does NOT handle GDS algorithms — use neo4j-gds-skill.
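The operations listed in this description map to a small connector API surface. As a minimal, hedged sketch of a label-based read (the option keys follow the connector's documented names; the URL, credentials, and label below are placeholders, not values from this page):

```python
# Build the options a label-based read passes to
# spark.read.format("org.neo4j.spark.DataSource"). All values are placeholders.

def neo4j_read_options(url: str, user: str, password: str, labels: str) -> dict:
    return {
        "url": url,
        "authentication.basic.username": user,
        "authentication.basic.password": password,
        # "labels" selects a node scan; ":Person" reads every node with that label.
        "labels": labels,
    }

opts = neo4j_read_options("neo4j://localhost:7687", "neo4j", "secret", ":Person")

# With a running cluster and the connector JAR on the classpath:
# df = spark.read.format("org.neo4j.spark.DataSource").options(**opts).load()
```

For Cypher-based reads, the connector accepts a `query` option containing the Cypher statement in place of `labels`.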

Score: 90 (1.10x)

Quality: 88% (Does it follow best practices?)

Impact: 92% (1.10x). Average score across 3 eval scenarios.

Security (by Snyk): Passed. No known issues.

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It opens with a clear 'Use when' trigger clause, enumerates a comprehensive list of specific capabilities, includes abundant natural trigger terms, and explicitly delineates boundaries with related skills via 'Does NOT handle' clauses. This description would allow Claude to confidently and accurately select this skill from a large pool of Neo4j-related or Spark-related skills.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists numerous specific, concrete actions: SparkSession setup, DataFrame reads via labels/Cypher/relationship scan, DataFrame writes with SaveMode, node.keys for MERGE, relationship write mapping, partition and batch tuning, PySpark and Scala examples, Databricks cluster config, Databricks secrets for credentials, Delta Lake to Neo4j pipelines. | 3 / 3 |
| Completeness | Clearly answers both 'what' (covers SparkSession setup, DataFrame reads/writes, partition tuning, etc.) and 'when' (explicit 'Use when reading from or writing to Neo4j with Apache Spark or Databricks'). Also explicitly states what it does NOT handle, with cross-references to other skills, further clarifying when to use it. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: Neo4j, Apache Spark, Databricks, SparkSession, DataFrame, PySpark, Scala, Delta Lake, MERGE, Cypher, SaveMode, neo4j-connector-apache-spark. These are the exact terms a user working in this domain would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Extremely distinctive, with a clear niche (Neo4j + Spark/Databricks connector). The explicit 'Does NOT handle' clauses referencing neo4j-cypher-skill, neo4j-driver-python-skill, and neo4j-gds-skill create sharp boundaries that minimize conflict risk with related skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed
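The 'node.keys for MERGE' capability the table credits above can be sketched as follows. This is an illustrative sketch assuming the connector's documented option names; the label and key column are placeholders:

```python
# Sketch of write options for a MERGE-style node upsert via "node.keys".
# Placeholder label and key columns; not taken from the reviewed skill.

def neo4j_node_write_options(labels: str, key_columns: list) -> dict:
    return {
        "labels": labels,
        # node.keys names the DataFrame columns used to MERGE (match-or-create)
        # nodes instead of creating duplicates on every run.
        "node.keys": ",".join(key_columns),
    }

opts = neo4j_node_write_options(":Person", ["id"])

# Usage with SaveMode Overwrite (MERGE semantics), given a DataFrame `df`:
# (df.write.format("org.neo4j.spark.DataSource")
#    .mode("Overwrite").options(**opts).save())
```

A uniqueness constraint on the key property should exist before relying on MERGE semantics, as the review's workflow notes also point out.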

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with excellent executable examples covering all major use cases (reads, writes, Databricks setup, Delta Lake pipelines). Workflow clarity is good, with explicit sequencing and a comprehensive checklist. The main weakness is length: the file functions more as a complete reference than as an overview with progressive disclosure, and the referenced bundle files do not exist.
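The nodes-before-relationships ordering the review refers to matters because a relationship write must match endpoint nodes that already exist. A minimal sketch of that sequencing (table and step names are hypothetical):

```python
# Sketch of the "write nodes first, then relationships" ordering for a
# Delta Lake -> Neo4j pipeline. Table names are illustrative only.

def pipeline_steps(node_tables: list, rel_tables: list) -> list:
    # Each node table is loaded and written before any relationship table,
    # so relationship writes can match their endpoint nodes.
    return ([f"write-nodes:{t}" for t in node_tables]
            + [f"write-rels:{t}" for t in rel_tables])

steps = pipeline_steps(["people", "companies"], ["works_at"])

# Each step would read its Delta table, e.g. spark.read.table("people"),
# then write via the connector. Per the review, the skill also warns to use
# df.coalesce(1) on relationship writes to avoid deadlocks from lock contention.
```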

Suggestions

- Move the version matrix, full configuration options table, and common errors table into referenced files (e.g., references/config.md, references/troubleshooting.md) to slim down the main SKILL.md.
- Provide the referenced bundle files (references/read-patterns.md and references/write-patterns.md), or remove the broken references.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient, with good use of tables and code blocks, but includes some redundancy (PySpark and Scala examples for nearly identical operations; version-matrix details that could live in a reference file). The content is long (~300 lines), and some sections, such as the full config options table, could be offloaded to references. | 2 / 3 |
| Actionability | Excellent actionability throughout: every operation has fully executable, copy-paste-ready code examples in both PySpark and Scala. Specific Maven coordinates, exact option strings, concrete DataFrame examples with sample data, and precise Databricks setup steps are all provided. | 3 / 3 |
| Workflow Clarity | Multi-step workflows are clearly sequenced (e.g., Delta Lake to Neo4j pipeline: write nodes first, then relationships). Validation checkpoints are present via the checklist, a common-errors table with fixes, and explicit warnings (coalesce(1) for deadlocks, uniqueness constraints before MERGE). The 'nodes before relationships' ordering is explicitly called out. | 3 / 3 |
| Progressive Disclosure | The skill references two external files (references/read-patterns.md and references/write-patterns.md), which is good structure, but no bundle files are provided, so these references are broken. The main file itself is quite long and could benefit from moving the version matrix, full config options table, and common errors into reference files to keep the SKILL.md as a leaner overview. | 2 / 3 |
| Total | | 10 / 12 |

Passed
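The relationship write mapping the review credits involves a cluster of related options. A hedged sketch, assuming the connector's keys-based save strategy; the relationship type, labels, and column-to-property mappings below are placeholders:

```python
# Sketch of relationship-write options using the "keys" save strategy.
# Option names follow the connector's documented pattern; values are placeholders.

def neo4j_rel_write_options(rel_type: str,
                            src_label: str, src_keys: str,
                            dst_label: str, dst_keys: str) -> dict:
    return {
        "relationship": rel_type,
        # "keys" matches source/target nodes by key columns rather than
        # serializing whole rows into a single relationship.
        "relationship.save.strategy": "keys",
        "relationship.source.labels": src_label,
        "relationship.source.node.keys": src_keys,   # "dfColumn:nodeProperty"
        "relationship.target.labels": dst_label,
        "relationship.target.node.keys": dst_keys,
    }

opts = neo4j_rel_write_options("WORKS_AT",
                               ":Person", "person_id:id",
                               ":Company", "company_id:id")
```

On Databricks, credentials for these writes would typically come from a secret scope (e.g. `dbutils.secrets.get(scope=..., key=...)`) rather than being hard-coded.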

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata. | Warning |
| Total | | 10 / 11 |

Passed
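The single warning concerns unknown frontmatter keys. A hedged sketch of the usual fix, with entirely hypothetical key names (the actual offending keys are not shown on this page): move unrecognized top-level keys under a `metadata` block.

```yaml
---
name: neo4j-spark-skill
description: Use when reading from or writing to Neo4j with Apache Spark ...
metadata:
  # formerly top-level keys the validator did not recognize (hypothetical)
  maturity: stable
---
```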

Repository: neo4j-contrib/neo4j-skills (Reviewed)
