Process use when you need to archive historical database records to reduce primary database size. This skill automates moving old data to archive tables or cold storage (S3, Azure Blob, GCS). Trigger with phrases like "archive old database records", "implement data retention policy", "move historical data to cold storage", or "reduce database size with archival".
Impact: Pending — no eval scenarios have been run.
Quality: Passed — no known issues.
Discovery
100% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly defines its purpose (archiving historical database records), specifies concrete actions (moving data to archive tables, cold storage on S3/Azure Blob/GCS), and provides explicit trigger phrases. The description is concise, uses third-person voice correctly, and occupies a clear niche that minimizes conflict with other skills. The only minor issue is the slightly awkward opening 'Process use when' phrasing, but it doesn't materially impact functionality.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: archiving historical database records, moving old data to archive tables or cold storage (S3, Azure Blob, GCS), reducing primary database size, and implementing data retention policies. | 3 / 3 |
| Completeness | Clearly answers both 'what' (automates moving old data to archive tables or cold storage) and 'when' (explicit trigger phrases provided, plus a 'Use when' equivalent in the opening sentence). Both dimensions are well-covered. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger phrases users would actually say: 'archive old database records', 'implement data retention policy', 'move historical data to cold storage', 'reduce database size with archival'. These cover multiple natural variations of how a user might phrase this need. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche: database record archival and data retention. The triggers are distinct and unlikely to conflict with general database management, backup, or migration skills due to the focus on archival and cold storage specifically. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation
70% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable and well-sequenced skill with concrete SQL commands and clear validation checkpoints throughout the archival workflow. Its main weakness is that it's a long monolithic document (~150 lines of dense content) that would benefit significantly from splitting into separate files for cloud storage, compliance, and retrieval procedures. Some verbosity in the examples and prerequisites sections could be trimmed.
Suggestions
- Split cloud storage archival (step 6), retrieval procedures (step 9), and GDPR compliance into separate referenced files to improve progressive disclosure and reduce the main file's length.
- Trim the examples section to focus on the archival pattern rather than hypothetical performance numbers (e.g., '40% improvement', '80% reduction'), which are speculative and not actionable.
- Remove or condense prerequisites that Claude already knows (e.g., what the AWS CLI and psql are) to improve conciseness.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes some unnecessary verbosity. The examples section describes hypothetical scenarios with specific percentages and numbers that are more illustrative than instructive. The prerequisites section explains things Claude would know (like what CLI tools are). However, the core instructions are reasonably dense with useful SQL commands. | 2 / 3 |
| Actionability | The skill provides concrete, executable SQL commands and shell commands throughout every step. From identifying archival candidates to creating archive tables, performing batch operations, cloud uploads, and vacuuming — each step has copy-paste ready commands. The error handling table provides specific solutions for each failure mode. | 3 / 3 |
| Workflow Clarity | The 10-step workflow is clearly sequenced with explicit validation checkpoints: verifying row counts match before deletion (step 5), wrapping in transactions for atomicity, checking for active foreign key references before archiving (step 3), and running VACUUM after completion (step 8). The feedback loop of verify-then-delete is well-articulated, and batch processing with restart capability addresses error recovery. | 3 / 3 |
| Progressive Disclosure | The skill is a monolithic wall of text with no bundle files and no references to separate documents for detailed topics. The cloud storage archival, GDPR compliance, retrieval procedures, and error handling could all be split into separate referenced files. Everything is inline in a single long document, making it harder to navigate. | 1 / 3 |
| Total | | 9 / 12 — Passed |
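The verify-then-delete checkpoint credited in the table above can be sketched roughly as follows. This is a minimal illustration using Python's built-in `sqlite3` as a stand-in for the skill's actual psql commands; the `events` / `events_archive` tables and column names are hypothetical, not taken from the skill itself:

```python
import sqlite3

def archive_batch(conn, cutoff, batch_size=1000):
    """Copy one batch of old rows into the archive table, verify the
    row count, then delete them -- all within a single transaction."""
    cur = conn.cursor()
    try:
        # Copy the oldest qualifying rows into the archive table.
        cur.execute(
            "INSERT INTO events_archive "
            "SELECT id, created_at FROM events "
            "WHERE created_at < ? ORDER BY created_at LIMIT ?",
            (cutoff, batch_size),
        )
        copied = cur.rowcount
        # Checkpoint: delete exactly as many rows as were just copied.
        # (Production code should match on primary keys rather than
        # re-running the selection query.)
        cur.execute(
            "DELETE FROM events WHERE id IN ("
            "SELECT id FROM events WHERE created_at < ? "
            "ORDER BY created_at LIMIT ?)",
            (cutoff, copied),
        )
        if cur.rowcount != copied:
            raise RuntimeError("row count mismatch -- rolling back batch")
        conn.commit()
        return copied  # 0 means no candidates remain: a safe restart point
    except Exception:
        conn.rollback()
        raise
```

Called in a loop until it returns 0, a function like this gives the batch processing with restart capability that the Workflow Clarity row describes: any failed batch rolls back whole, and rerunning simply resumes from the remaining rows.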
Validation
81% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |
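The report does not show the frontmatter that triggered these two warnings, so the following is only a hypothetical illustration of what typically produces them; the tool and key names here are assumptions, not taken from the skill under review:

```yaml
---
name: database-archival
description: Archive historical database records to reduce primary database size.
allowed-tools: Bash, Read, SqlRunner  # "SqlRunner" is not a standard tool name -> allowed_tools_field warning
author: Jane Doe                      # unknown top-level key -> frontmatter_unknown_keys warning
---
```

Moving the unknown key under a `metadata` block, as the warning message suggests, would clear the second flag.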