Detect abnormal access patterns in AWS S3, GCS, and Azure Blob Storage by analyzing CloudTrail Data Events, GCS audit logs, and Azure Storage Analytics. Identifies after-hours bulk downloads, access from new IP addresses, unusual API calls (GetObject spikes), and potential data exfiltration using statistical baselines and time-series anomaly detection.
- Quality: 56%
- Impact: Pending (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/analyzing-cloud-storage-access-patterns/SKILL.md`

## Quality
### Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, highly specific description that clearly articulates what the skill does with concrete actions, specific cloud providers, log sources, and detection techniques. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The trigger terms are excellent and domain-appropriate, and the skill occupies a very distinct niche.
**Suggestions**

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about suspicious cloud storage activity, S3 access anomalies, data exfiltration detection, or unusual download patterns in AWS, GCS, or Azure.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: detecting abnormal access patterns, analyzing CloudTrail Data Events/GCS audit logs/Azure Storage Analytics, identifying after-hours bulk downloads, access from new IP addresses, unusual API calls (GetObject spikes), and potential data exfiltration using statistical baselines and time-series anomaly detection. | 3 / 3 |
| Completeness | The 'what' is thoroughly covered with specific capabilities and techniques, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. The 'when' is only implied by the described capabilities. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural keywords users would say: AWS S3, GCS, Azure Blob Storage, CloudTrail, data exfiltration, bulk downloads, anomaly detection, unusual API calls, GetObject spikes, new IP addresses, after-hours access. These are terms a security analyst would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: cloud storage anomaly detection across specific providers (AWS S3, GCS, Azure Blob). The combination of specific log sources, detection patterns, and cloud storage focus makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
### Implementation: 29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a reasonable high-level outline for S3 access pattern analysis with some useful concrete thresholds, but falls short on actionability by lacking executable analysis code and on workflow clarity by missing validation/verification steps. It also fails to deliver on its multi-cloud promise (GCS, Azure) and wastes tokens on generic prerequisites that Claude doesn't need.
**Suggestions**

- Add executable Python code for the core analysis steps (querying CloudTrail, building baselines, detecting anomalies) rather than just describing them in prose.
- Add validation checkpoints: e.g., verify that CloudTrail data event logging is enabled, confirm baseline data sufficiency before anomaly detection, and validate the output report schema.
- Either add GCS and Azure coverage as referenced in the description (with links to separate files for each cloud provider), or narrow the scope to AWS-only.
- Remove the generic 'When to Use' and 'Prerequisites' sections and replace them with a brief one-line scope statement to save tokens.
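The first two suggestions above could be sketched as a small, self-contained Python example. This is a hypothetical illustration rather than the skill's actual code: it assumes CloudTrail events have already been exported as a list of dicts, and the function names and z-score cutoff are placeholders (only the >100 GetObject floor comes from the skill's stated thresholds).

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

def hourly_getobject_counts(events):
    """Bucket GetObject calls per (date, hour) from exported CloudTrail records."""
    counts = Counter()
    for event in events:
        if event.get("eventName") != "GetObject":
            continue
        ts = datetime.fromisoformat(event["eventTime"].replace("Z", "+00:00"))
        counts[(ts.date(), ts.hour)] += 1
    return counts

def anomalous_buckets(counts, z_threshold=3.0, floor=100):
    """Flag buckets exceeding both an absolute floor and mean + z*stdev of the baseline."""
    values = list(counts.values())
    if len(values) < 2:  # not enough history to form a baseline
        return []
    mu, sigma = mean(values), stdev(values)
    return [
        (bucket, n)
        for bucket, n in counts.items()
        if n > floor and sigma > 0 and (n - mu) / sigma > z_threshold
    ]
```

A real implementation would feed this from `aws cloudtrail lookup-events` or an Athena query over the CloudTrail bucket, and would build the baseline from a trailing window rather than the whole sample.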
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The 'When to Use' and 'Prerequisites' sections contain generic filler that Claude already knows (e.g., 'Familiarity with cloud security concepts', 'Access to a test or lab environment'). The core instructions are reasonably lean but the surrounding content wastes tokens. | 2 / 3 |
| Actionability | The instructions provide some concrete guidance (specific thresholds like >100 GetObject calls, 30-day IP history, time windows) and a CLI command, but lack executable code for the actual analysis logic. Steps 2-5 are descriptive rather than providing copy-paste-ready code or queries. The example JSON event is illustrative but not tied to actionable processing code. | 2 / 3 |
| Workflow Clarity | The steps are listed but lack validation checkpoints, error handling, or feedback loops. For a security analysis workflow involving potentially large datasets and multi-cloud environments (the description mentions GCS and Azure but the content only covers AWS), there are no verification steps, no guidance on what to do when anomalies are ambiguous, and no clear sequence for iterating on findings. | 1 / 3 |
| Progressive Disclosure | The skill references `scripts/agent.py` but provides no link or explanation of where to find it. There are no references to additional documentation for GCS or Azure analysis despite the description promising multi-cloud coverage. The content is a flat document with no navigation to deeper resources. | 1 / 3 |
| Total | | 6 / 12 Passed |
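The 30-day IP history mentioned under Actionability and the missing validation checkpoint flagged under Workflow Clarity can both be expressed as small pure functions. A minimal sketch, assuming events are CloudTrail record dicts; the selector shape mirrors CloudTrail's `get_event_selectors` response, and the function names are hypothetical:

```python
def new_source_ips(events, baseline_ips):
    """Return events whose sourceIPAddress is absent from the 30-day baseline set."""
    return [e for e in events if e.get("sourceIPAddress") not in baseline_ips]

def s3_data_events_enabled(selectors_response):
    """Validation checkpoint: does a get_event_selectors-style response
    enable S3 object-level (data event) logging?"""
    for selector in selectors_response.get("EventSelectors", []):
        for resource in selector.get("DataResources", []):
            if resource.get("Type") == "AWS::S3::Object" and resource.get("Values"):
                return True
    return False
```

Running the checkpoint before any anomaly scoring avoids silently analyzing a trail that never recorded GetObject calls in the first place.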
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

**Validation for skill structure: 10 / 11 Passed**
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |