**Skill description:**

> Execute comprehensive platform migrations to Databricks from legacy systems. Use when migrating from on-premises Hadoop, other cloud platforms, or legacy data warehouses to Databricks. Trigger with phrases like "migrate to databricks", "hadoop migration", "snowflake to databricks", "legacy migration", "data warehouse migration".
**Overall score: 85**

- Quality: 83% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Issues: Passed (no known issues)
## Quality

### Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured skill description with strong trigger terms, explicit 'Use when' guidance, and clear distinctiveness. Its main weakness is the lack of specific concrete actions—'execute comprehensive platform migrations' is somewhat vague about what the skill actually does step-by-step. Adding specific migration actions (e.g., schema conversion, ETL pipeline translation, data validation) would elevate the specificity.
**Suggestions**

- Replace 'execute comprehensive platform migrations' with specific concrete actions such as 'convert ETL pipelines, migrate table schemas, translate SQL dialects, validate data integrity, and configure Databricks workspaces.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (platform migrations to Databricks) and mentions source systems (Hadoop, cloud platforms, legacy data warehouses), but lacks specific concrete actions beyond 'execute comprehensive platform migrations.' It doesn't list discrete steps like 'convert ETL pipelines, migrate schemas, transfer data, validate outputs.' | 2 / 3 |
| Completeness | Clearly answers both 'what' (execute comprehensive platform migrations to Databricks from legacy systems) and 'when' (explicit 'Use when' clause specifying migration scenarios, plus a 'Trigger with phrases' section listing concrete trigger terms). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including 'migrate to databricks', 'hadoop migration', 'snowflake to databricks', 'legacy migration', 'data warehouse migration'. These are phrases users would naturally say when requesting this type of work. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Databricks-specific migrations from named source platforms. The combination of 'Databricks' + 'migration' + specific source systems makes it unlikely to conflict with general data engineering or other platform skills. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
### Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable migration skill with executable code for each step and good workflow sequencing including validation and rollback procedures. Its main weakness is that it's a long monolithic document that would benefit from splitting detailed source-specific migration patterns into separate files. The content is mostly efficient but could trim some explanatory text and soft prerequisites.
**Suggestions**

- Split source-specific migration patterns (Snowflake, Redshift, JDBC/Oracle/Teradata) into separate reference files and link to them from the main SKILL.md to improve progressive disclosure.
- Remove soft prerequisites like 'Stakeholder alignment on migration timeline' that aren't actionable for Claude, and trim the Overview paragraph that duplicates the migration patterns table.
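A bundle layout along these lines would support that split (the file names are illustrative, not taken from the skill itself):

```
databricks-migration/
├── SKILL.md                    # core workflow, cutover plan, error handling
└── references/
    ├── snowflake-patterns.md   # Snowflake-specific migration patterns
    ├── redshift-patterns.md
    ├── jdbc-sources.md         # Oracle, Teradata, generic JDBC
    └── etl-conversion.md
```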
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary verbosity—the Overview section restates what the migration patterns table already shows, the Prerequisites list includes soft items like 'stakeholder alignment' that Claude doesn't need, and some code comments are explanatory rather than instructive. However, it's not egregiously padded and most content earns its place. | 2 / 3 |
| Actionability | The skill provides fully executable Python/SQL code for each migration step—discovery, schema conversion, data migration with multiple methods (SYNC, DEEP CLONE, CTAS, JDBC), Snowflake/Redshift-specific patterns, and bulk migration scripts. Code is copy-paste ready with concrete examples and specific function signatures. | 3 / 3 |
| Workflow Clarity | The 6-step workflow is clearly sequenced from discovery through cutover, with explicit validation at each stage (row count matching in Step 3, validation queries in Step 6). The cutover plan includes rollback procedures for each step, and the migrate_table function has built-in validation with status reporting. Error handling table covers common failure modes with solutions. | 3 / 3 |
| Progressive Disclosure | The skill is a monolithic document at ~250 lines with no bundle files to offload detailed content. The migration patterns for Snowflake, Redshift, JDBC, and ETL conversion could each be separate reference files. The Resources section links to external docs but there's no internal file structure for progressive discovery. | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
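The row-count validation credited to `migrate_table` above can be illustrated with a minimal sketch. The function names and report shape here are assumptions for illustration, not the skill's actual code:

```python
# Minimal sketch of the per-table row-count validation pattern the review
# describes. Names (validate_migration, summarize) are illustrative, not
# the skill's actual migrate_table implementation.

def validate_migration(source_count: int, target_count: int, table: str) -> dict:
    """Compare row counts and return a status report for one migrated table."""
    matched = source_count == target_count
    return {
        "table": table,
        "source_rows": source_count,
        "target_rows": target_count,
        "status": "OK" if matched else "MISMATCH",
        "delta": target_count - source_count,
    }

def summarize(reports: list[dict]) -> dict:
    """Aggregate per-table reports into a pass/fail gate for cutover."""
    failed = [r["table"] for r in reports if r["status"] != "OK"]
    return {"passed": not failed, "failed_tables": failed}
```

In practice the two counts would come from `SELECT COUNT(*)` against the source system and the migrated Delta table; gating cutover on the aggregated summary rather than per-table logs keeps the rollback decision explicit.
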
### Validation: 81% (9 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 Passed** |
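Both warnings point at the frontmatter. A minimal sketch of a cleaned-up SKILL.md header, assuming the validator's hint about a `metadata` block; the tool names and key values shown are illustrative, and the exact accepted tool list depends on the host agent:

```yaml
---
name: databricks-migration
description: Execute comprehensive platform migrations to Databricks from legacy systems. ...
# Keep only tool names the host agent actually recognizes:
allowed-tools: Bash, Read, Write
# Unknown top-level keys moved under metadata, per the validator's suggestion:
metadata:
  version: "1.0"
---
```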
Revision: 3a2d27d