Audit automatically audits AI assistant code plugins for security vulnerabilities, best practices, AI assistant.md compliance, and quality standards when the user mentions 'audit plugin', 'security review', or 'best practices check'. Specific to the AI assistant-code-plugins repositor... Use when assessing security or running audits. Trigger with phrases like 'security scan', 'audit', or 'vulnerability'.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill plugin-auditor55
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong completeness with explicit 'Use when' guidance and good trigger terms, but suffers from moderate specificity issues (listing categories rather than concrete actions) and a truncated repository name that reduces clarity. The trigger terms are well-chosen for user discoverability, though some generic security terms could cause conflicts with other audit-related skills.
Suggestions
Replace abstract categories with concrete actions (e.g., 'scans for SQL injection, validates input sanitization, checks authentication patterns' instead of 'security vulnerabilities')
Fix the truncated text 'repositor...' to show the complete repository name for better distinctiveness
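Applying the first suggestion, a rewritten frontmatter description might look like the following sketch. The concrete actions listed are illustrative examples, not the skill's confirmed capabilities, and the repository name is left out because the source truncates it:

```yaml
# Hypothetical rewrite illustrating the suggestion above; the specific
# scan actions are examples, not the skill's documented behavior.
description: >
  Scans AI assistant code plugins for SQL injection, validates input
  sanitization, and checks authentication patterns and AI assistant.md
  compliance. Use when the user asks for a 'security scan', 'audit plugin',
  'security review', or 'vulnerability' check.
```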
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (AI assistant code plugins) and lists several action categories (security vulnerabilities, best practices, compliance, quality standards), but these are somewhat abstract rather than concrete specific actions like 'extract text' or 'fill forms'. | 2 / 3 |
| Completeness | Clearly answers both what (audits AI assistant code plugins for security, best practices, compliance, quality) AND when (explicit 'Use when...' clause with trigger phrases like 'security scan', 'audit', 'vulnerability'). | 3 / 3 |
| Trigger Term Quality | Includes good coverage of natural trigger terms: 'audit plugin', 'security review', 'best practices check', 'security scan', 'audit', 'vulnerability' - these are terms users would naturally say when needing this functionality. | 3 / 3 |
| Distinctiveness / Conflict Risk | Specific to the 'AI assistant-code-plugins repository', which provides some distinctiveness, but generic terms like 'security scan' and 'audit' could overlap with other security-focused skills. The truncated repository name ('repositor...') reduces clarity. | 2 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a generic template with no actual content specific to plugin auditing, security review, or vulnerability scanning. Despite the description promising automated security audits and best practices checks, the body provides zero actionable guidance - no audit checklists, no security patterns to check, no code examples, and no specific workflows for the AI assistant-code-plugins repository context.
Suggestions
Add specific security audit checklist items (e.g., input validation, authentication checks, permission boundaries, data exposure risks) with concrete examples of what to look for
Include executable code or commands for running security scans, with example output showing how to interpret and report findings
Define a clear audit workflow with validation steps: what files to examine, what patterns indicate vulnerabilities, how to document findings, and severity classification criteria
Replace generic placeholder text with actual plugin-specific guidance referencing the AI assistant-code-plugins repository structure and compliance requirements
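The second suggestion can be made concrete with a small sketch. Assuming a Python-based audit step (the pattern list, rule names, and `scan_source` function here are hypothetical illustrations, not part of the skill), a minimal pattern scanner might look like:

```python
import re

# Hypothetical risk patterns illustrating what a concrete audit checklist
# might scan for; a real skill would define its own vetted rule set.
RISK_PATTERNS = {
    "shell-injection": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "dynamic-eval": re.compile(r"\beval\s*\("),
    "hardcoded-secret": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"]\w+['\"]"),
}

def scan_source(text):
    """Return (rule, line_number) pairs for each matched risk pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append((rule, lineno))
    return findings
```

Output like this could then feed the severity classification and reporting steps the suggestions call for.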
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is brief but in a hollow way - it's generic boilerplate that could apply to any skill. It doesn't waste tokens explaining concepts Claude knows, but it also doesn't provide any actual substance worth the tokens it uses. | 2 / 3 |
| Actionability | The content is entirely vague and abstract with no concrete code, commands, or specific guidance. Instructions like 'Invoke this skill when trigger conditions are met' and 'Provide necessary context' give Claude nothing executable to work with for a security audit. | 1 / 3 |
| Workflow Clarity | The 4-step process is generic placeholder text with no actual audit workflow. For a security auditing skill, there should be specific validation checkpoints, what to scan, how to report vulnerabilities - none of which are present. | 1 / 3 |
| Progressive Disclosure | References to external files (errors.md, examples.md) are present and one level deep, but the main content is so empty that the structure serves no purpose. The references use template variables ({baseDir}), suggesting incomplete implementation. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
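The three warnings typically map to frontmatter fixes along these lines. This is a sketch only: the tool names, version value, and key placement are assumptions about this skill's frontmatter, not its actual contents:

```yaml
# Hypothetical frontmatter addressing the three validation warnings.
name: plugin-auditor
description: Audits AI assistant code plugins for security and quality issues.
allowed-tools:        # use only recognized tool names
  - Read
  - Grep
  - Bash
metadata:             # must be a dictionary, not a scalar
  version: "1.0.0"
# any unknown top-level keys moved under metadata instead of the root
```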
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.