Deep-dive data profiling for a specific table. Use when the user asks to profile a table, wants statistics about a dataset, asks about data quality, or needs to understand a table's structure and content. Requires a table name.
- Quality: 73% — Does it follow best practices?
- Impact: Pending — No eval scenarios have been run
- Validation: Passed — No known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/profiling-tables/SKILL.md`

Quality
Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with a clear 'Use when...' clause and good trigger term coverage. Its main weakness is that the capability description is somewhat high-level ('deep-dive data profiling') without enumerating the specific profiling actions performed, and it could potentially overlap with broader data analysis skills.
Suggestions

- Add specific concrete actions to improve specificity, e.g., 'Computes column statistics, null rates, value distributions, cardinality, and data type analysis for a specific table.'
- Differentiate more clearly from general data analysis skills by emphasizing that this skill is specifically about profiling and quality assessment rather than querying or transforming data.
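The concrete actions named in the suggested description rewrite (null rates, cardinality, column statistics) reduce to a small set of queries. As a minimal sketch, assuming an in-memory SQLite database with a hypothetical `users` table, per-column null rate and cardinality could be computed like this:

```python
import sqlite3

def profile_columns(conn, table):
    """Per-column null rate and cardinality (identifiers assumed trusted)."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    profile = {}
    for col in cols:
        # `col IS NULL` evaluates to 0/1 in SQLite, so SUM counts the nulls;
        # COUNT(DISTINCT ...) ignores NULLs and gives the cardinality.
        nulls, distinct = conn.execute(
            f"SELECT SUM({col} IS NULL), COUNT(DISTINCT {col}) FROM {table}"
        ).fetchone()
        profile[col] = {
            "null_rate": (nulls or 0) / total if total else 0.0,
            "cardinality": distinct,
        }
    return profile

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "a@x.com"), (2, None), (3, "a@x.com"), (4, None)])
print(profile_columns(conn, "users"))
```

The table name and columns here are illustrative, not taken from the skill itself; the point is that each suggested description term corresponds to a single cheap aggregate query.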
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain ('data profiling') and mentions some actions like 'statistics about a dataset' and 'understand a table's structure and content', but doesn't list specific concrete actions like computing null counts, distribution analysis, cardinality checks, or schema inspection. | 2 / 3 |
| Completeness | Clearly answers both what ('Deep-dive data profiling for a specific table') and when ('Use when the user asks to profile a table, wants statistics about a dataset, asks about data quality, or needs to understand a table's structure and content'). Also includes a prerequisite ('Requires a table name'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms: 'profile a table', 'statistics about a dataset', 'data quality', 'table's structure and content', 'table name'. These are terms users would naturally use when requesting this kind of analysis. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'data profiling' and 'table' are somewhat specific, this could overlap with general data analysis, data exploration, or schema inspection skills. The term 'statistics about a dataset' is broad enough to potentially conflict with analytics-focused skills. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with well-structured SQL examples covering the full profiling workflow. Its main weaknesses are the lack of validation/error-handling steps (e.g., handling very large tables, query failures, or permission issues) and some verbosity in explanatory sections that Claude wouldn't need. The output template is a nice touch but the overall document could be tighter.
Suggestions

- Add validation checkpoints: e.g., check the row count first and adjust the strategy for very large tables (sampling instead of full scans), and handle permission errors gracefully.
- Remove explanatory text Claude already knows, such as the 'This reveals:' bullets under cardinality analysis and the detailed descriptions under each data quality dimension (Completeness, Uniqueness, etc.).
- Consider splitting the per-data-type SQL patterns and the output template into referenced files to keep the main skill leaner.
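The first suggestion above can be sketched concretely. This is not the skill's own code, just one way the recommended checkpoint could look: query the row count up front, fall back to sampling past a hypothetical threshold, and surface failures (missing table, permissions) as data rather than crashing mid-profile:

```python
import sqlite3

ROW_LIMIT = 1_000_000  # hypothetical threshold for switching to sampling

def choose_strategy(conn, table):
    """Check the row count first, then pick full-scan vs. sampled profiling."""
    try:
        rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    except sqlite3.OperationalError as exc:
        # Missing table or insufficient privileges: report it gracefully
        # instead of aborting the whole profiling run.
        return {"table": table, "error": str(exc)}
    strategy = "full_scan" if rows <= ROW_LIMIT else "sample"
    return {"table": table, "rows": rows, "strategy": strategy}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER)")
print(choose_strategy(conn, "events"))   # small table -> full_scan
print(choose_strategy(conn, "missing"))  # error is captured, not raised
```

In a production warehouse the sampling branch would use the engine's native clause (e.g. `TABLESAMPLE`), which SQLite lacks; the checkpoint structure is what matters here.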
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient with concrete SQL examples, but includes some unnecessary explanatory text (e.g., 'This reveals:' bullet points explaining what cardinality analysis shows, and the data quality assessment dimensions that Claude would already understand). The output template section is also somewhat verbose. | 2 / 3 |
| Actionability | Provides fully executable SQL queries for each step, with specific patterns for different data types (numeric, string, date). The queries are copy-paste ready with clear placeholder conventions (`<table>`, `<schema>`, `column_name`). | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced (1-7) with a logical progression from metadata to quality assessment to output. However, there are no validation checkpoints or feedback loops — no guidance on what to do if queries fail, if the table is too large for certain operations, or how to handle errors in the profiling process. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and sub-sections, but it's a fairly long monolithic document (~120 lines). The data quality assessment dimensions and output template could be split into referenced files, and the per-data-type SQL patterns could live in a reference file. | 2 / 3 |
| Total | | 9 / 12 — Passed |
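The Actionability row credits the skill's per-data-type patterns with clear placeholder conventions. As a hedged illustration of what filling such a placeholder query looks like (the `orders`/`amount` names are hypothetical, not from the skill), here is the numeric-column pattern exercised end to end:

```python
import sqlite3

# Numeric-column profile in the skill's placeholder style, with Python
# format fields standing in for the skill's <table> / column_name slots.
NUMERIC_PROFILE = """
SELECT COUNT(*)     AS total_rows,
       COUNT({col}) AS non_null,
       MIN({col})   AS min_value,
       MAX({col})   AS max_value,
       AVG({col})   AS avg_value
FROM {table}
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (30.0,), (None,)])
row = conn.execute(NUMERIC_PROFILE.format(col="amount", table="orders")).fetchone()
print(row)  # (3, 2, 10.0, 30.0, 20.0) — COUNT(col) and AVG skip the NULL
```

Comparing `total_rows` with `non_null` gives the null count for free, which is why one query per column covers several of the quality dimensions the skill assesses.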
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Validation for skill structure: no warnings or errors.