Process use when you need to archive historical database records to reduce primary database size. This skill automates moving old data to archive tables or cold storage (S3, Azure Blob, GCS). Trigger with phrases like "archive old database records", "implement data retention policy", "move historical data to cold storage", or "reduce database size with archival".
Overall score: 88%

Quality (does it follow best practices?): Passed — no known issues.
Impact: Pending — no eval scenarios have been run.
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its purpose (archiving historical database records), specifies concrete actions and storage targets (archive tables, S3, Azure Blob, GCS), and provides explicit trigger phrases. The only minor issue is the awkward phrasing at the start ('Process use when you need to') which mixes imperative and second-person voice, though the rest uses appropriate third-person framing.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: archiving historical database records, moving old data to archive tables or cold storage (S3, Azure Blob, GCS), reducing primary database size, and implementing data retention policies. | 3 / 3 |
| Completeness | Clearly answers both 'what' (automates moving old data to archive tables or cold storage) and 'when' (explicit trigger phrases and a 'Use when' equivalent at the start specifying the scenario of archiving historical database records). | 3 / 3 |
| Trigger Term Quality | Includes highly natural trigger phrases users would say: 'archive old database records', 'implement data retention policy', 'move historical data to cold storage', 'reduce database size with archival'. These cover multiple natural variations of how a user might express this need. | 3 / 3 |
| Distinctiveness / Conflict Risk | Targets a very specific niche—database record archival and data retention—with distinct triggers like 'cold storage', 'archive tables', 'S3, Azure Blob, GCS', and 'data retention policy'. Unlikely to conflict with general database or file management skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable skill with excellent workflow clarity and concrete SQL/CLI commands throughout. Its main weakness is length—the content is comprehensive but could be more concise by trimming explanatory context Claude doesn't need and splitting detailed sections (cloud storage, error handling, examples) into referenced files. The validation checkpoints and error recovery guidance are particularly well done.
Suggestions
Trim the prerequisites section to only list required tools/permissions without explanatory context (e.g., remove 'Understanding of data retention requirements and compliance policies').
Move the cloud storage archival details (step 6), error handling table, and examples into separate referenced files to improve progressive disclosure and reduce the main file's token footprint.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly detailed and mostly efficient, but includes some unnecessary context (e.g., explaining what archival candidates are, listing compliance acronyms Claude already knows, the verbose examples section with percentage improvements). The prerequisites section explains things like 'Understanding of data retention requirements' which is not actionable. Could be tightened by ~30%. | 2 / 3 |
| Actionability | Provides concrete, executable SQL commands, CLI commands, and specific techniques throughout. Steps include real SQL queries (CREATE TABLE, INSERT INTO, COPY, VACUUM), shell commands (gzip, aws s3 cp), and specific batch size recommendations. The guidance is copy-paste ready and covers the full pipeline from identification to deletion. | 3 / 3 |
| Workflow Clarity | The 10-step workflow is clearly sequenced with explicit validation checkpoints: verify row counts match before deletion (step 5), wrap in transactions for atomicity, verify no active references before archiving parents (step 3), and batch processing with progress logging for restart capability (step 7). The error handling table provides clear feedback loops for recovery scenarios. | 3 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections (Instructions, Output, Error Handling, Examples, Resources), but it's a monolithic document that could benefit from splitting detailed content into separate files. The cloud storage archival, error handling, and examples sections could be referenced files. At ~120 lines of dense content, this would benefit from a quick-start overview with references to detailed guides. | 2 / 3 |
| Total | | 10 / 12 Passed |
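The validation checkpoints praised above (copy a batch inside a transaction, verify the row counts match, and only then delete the originals) can be sketched as follows. This is a minimal illustration using SQLite; the table and column names (`events`, `events_archive`, `created_at`) are hypothetical placeholders, not taken from the skill, and a production run against Postgres or MySQL would use the skill's actual SQL and batch sizes.

```python
import sqlite3

# Set up a hot table and an archive table with matching schemas (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")
conn.execute("CREATE TABLE events_archive (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events (created_at, payload) VALUES (?, ?)",
    [("2020-01-01", f"old-{i}") for i in range(250)] + [("2025-01-01", "recent")],
)

CUTOFF = "2024-01-01"
BATCH = 100  # small batches bound lock time and allow restarts mid-run

while True:
    with conn:  # each batch is one transaction: copy + verify + delete succeed or fail together
        ids = [row[0] for row in conn.execute(
            "SELECT id FROM events WHERE created_at < ? LIMIT ?", (CUTOFF, BATCH)
        )]
        if not ids:
            break
        marks = ",".join("?" * len(ids))
        conn.execute(
            f"INSERT INTO events_archive SELECT * FROM events WHERE id IN ({marks})", ids
        )
        # Checkpoint: verify the copy landed before deleting the originals.
        copied = conn.execute(
            f"SELECT COUNT(*) FROM events_archive WHERE id IN ({marks})", ids
        ).fetchone()[0]
        assert copied == len(ids), "row count mismatch; aborting this batch"
        conn.execute(f"DELETE FROM events WHERE id IN ({marks})", ids)

archived = conn.execute("SELECT COUNT(*) FROM events_archive").fetchone()[0]
remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(archived, remaining)  # → 250 1
```

The `with conn:` block is what makes the count-then-delete safe: if the assertion fires, the whole batch rolls back and the hot table is untouched, which is the restartability property the workflow-clarity row calls out.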
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
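Both warnings above point at the skill's frontmatter. A hedged sketch of a cleaned-up header follows; the key names, tool names, and metadata fields are illustrative assumptions, so check them against the spec your registry actually validates:

```yaml
---
name: database-archival
description: Archive historical database records to archive tables or cold storage (S3, Azure Blob, GCS).
# allowed_tools_field: keep only tool names the spec recognizes; unusual names trigger the warning.
allowed-tools:
  - Bash
  - Read
# frontmatter_unknown_keys: move custom top-level keys under metadata instead of deleting them.
metadata:
  owner: data-platform-team
---
```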