Expert security auditor for AI Skills and Bundles. Performs non-intrusive static analysis to identify malicious patterns, data leaks, system stability risks, and obfuscated payloads across Windows, macOS, Linux/Unix, and Mobile (Android/iOS).
Overall score: 33%

Evals: Pending. No eval scenarios have been run.
Impact: Critical. Do not install without reviewing.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/audit-skills/SKILL.md

Quality
Discovery
67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong on specificity and distinctiveness, clearly carving out a unique niche: security auditing of AI Skills and Bundles, with concrete actions listed. However, it lacks an explicit 'Use when...' clause, which hurts completeness, and the trigger terms are technical rather than matching natural user language such as 'is this skill safe' or 'check this skill for malware'.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user wants to check if a skill or bundle is safe, scan for malware, or review a skill's security before installing.'
Include more natural user-facing trigger terms such as 'scan', 'safe', 'trust', 'malware', 'vulnerability', 'review skill security', or 'is this skill safe to use'.
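Applied together, these two suggestions might yield frontmatter along the following lines (an illustrative sketch; the exact wording is ours, not taken from the skill):

```yaml
# SKILL.md frontmatter (illustrative sketch)
name: audit-skills
description: >
  Security auditor for AI Skills and Bundles. Performs non-intrusive static
  analysis to find malware, obfuscated payloads, data leaks, and system
  stability risks across Windows, macOS, Linux/Unix, and Mobile (Android/iOS).
  Use when the user wants to scan a skill for malware, check whether a skill
  is safe to trust, or review a skill's security before installing it.
```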
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'non-intrusive static analysis', 'identify malicious patterns', 'data leaks', 'system stability risks', 'obfuscated payloads'. Also specifies the target domain ('AI Skills and Bundles') and platforms covered. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (performs static analysis to identify security issues), but lacks an explicit 'Use when...' clause. The 'when' is only implied by the nature of the skill. Per rubric guidelines, missing 'Use when...' caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'security audit', 'malicious patterns', 'data leaks', 'static analysis', but misses common user-facing trigger terms like 'scan', 'vulnerability', 'safe', 'trust', 'review skill', or 'check for malware'. The terms lean more technical/formal than what users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: security auditing specifically for 'AI Skills and Bundles'. This is a highly distinct domain that is unlikely to conflict with general code review, general security, or other skills. The combination of security auditing + AI Skills/Bundles is quite unique. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation
0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a threat taxonomy reference document than an actionable skill. It lists extensive patterns Claude likely already knows without providing a concrete audit workflow, report format, or executable examples. The content is bloated with inline reference material that should be split into separate files, and the actual audit procedure is never defined.
Suggestions
Replace the vague 3-step workflow with a concrete, sequenced audit procedure: specific file-reading steps, a pattern-matching approach, and an explicit report template with the 0-10 scoring criteria defined.
Move the extensive threat-pattern lists (sections 1-9) into a separate THREAT_PATTERNS.md reference file and keep only a brief summary in the main skill.
Add a complete input/output example showing a real skill snippet being audited and the resulting security report, including how the score is calculated.
Remove the 'When to Use', 'Best Practices', and 'Common Pitfalls' sections, which are largely obvious padding, and replace them with actionable constraints such as specific file paths to check and an exact output format.
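As a starting point, the report template called for above might look something like this (a hypothetical sketch; the section names and the severity-based scoring breakdown are our assumptions, not defined by the skill today):

```markdown
# Security Audit Report: <skill name>

## Verdict
Score: <0-10>  (10 = no findings; subtract per finding by severity,
e.g. Critical -4, High -2, Medium -1, Low -0.5, floored at 0)
Recommendation: SAFE | REVIEW REQUIRED | DO NOT INSTALL

## Findings
| # | Severity | Category | File:Line | Evidence | Explanation |
|---|----------|----------|-----------|----------|-------------|

## Files Examined
- SKILL.md, bundled scripts, referenced resources

## Methodology
Non-intrusive static analysis only; no bundled code was executed.
```

A template like this gives the agent a fixed output contract, which addresses the 'no report format' finding directly.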
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive lists of commands and patterns that Claude already knows. The overview repeats the description, 'When to Use' is padded, and the threat detection section is a massive reference dump that could be in a separate file. The '2-4 sentences is perfect' leftover text is a template artifact. | 1 / 3 |
| Actionability | Despite listing many specific commands and patterns, the skill never provides concrete executable steps for actually performing an audit. The examples are just vague prompts ('Perform a security audit on this skill bundle') with no input/output examples, no actual audit procedure, no code to run, and no report template. | 1 / 3 |
| Workflow Clarity | The three 'steps' (Static Analysis, Platform-Specific Threat Detection, Reporting) are vague labels without actionable sequences. There are no validation checkpoints, no feedback loops for when threats are found, and no clear procedure for how to actually conduct the audit or generate the report with the 0-10 score. | 1 / 3 |
| Progressive Disclosure | The skill is a monolithic wall of text with all threat categories inlined. The massive platform-specific threat detection lists (sections 1-9) should be in separate reference files. There's no clear navigation structure, and the single reference to CATALOG.md is unexplained. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
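The frontmatter_unknown_keys warning is typically resolved by nesting nonstandard keys under metadata, as the check itself suggests. A hypothetical before/after sketch (the author key is invented for illustration; the actual offending key is not named in the report):

```yaml
# Before: a nonstandard top-level key triggers frontmatter_unknown_keys
name: audit-skills
description: "..."
author: example-maintainer   # hypothetical unknown key

# After: the unknown key is moved under metadata
name: audit-skills
description: "..."
metadata:
  author: example-maintainer
```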