This skill should be used when the user asks to "review installed skills", "find duplicates", "detect skill overlaps", "identify skill gaps", "optimize skills", "audit my skills", or "troubleshoot skill conflicts". Supports Gemini, Claude Code, Cursor, Copilot, Windsurf, and custom setups.
A comprehensive skill auditing system for AI coding assistants. Automatically discover, analyze, and optimize skills across any platform.
| Platform | Skill Locations | Config Format |
|---|---|---|
| Gemini/Antigravity | .agent/skills/, MCP servers | SKILL.md + JSON |
| Claude Code | .claude/, plugins, CLAUDE.md | Markdown + JSON |
| Cursor | .cursor/rules/, .cursorrules | Markdown |
| GitHub Copilot | .github/copilot-instructions.md | Markdown |
| Windsurf | .windsurfrules, .codeium/ | Markdown |
| Custom | User-specified paths | Various |
When analyzing Claude Code plugins, always prefer official sources over third-party alternatives.
| Priority | Type | Source | Trust Level |
|---|---|---|---|
| 1. Official | Made by Anthropic | @claude-plugins-official | Highest |
| 2. Endorsed | Third-party in official marketplace | Listed in marketplace.json | High |
| 3. External | Third-party marketplaces | Other @marketplace sources | Medium |
| 4. Custom | User's own plugins | Local/personal repos | User-managed |
Preference Rules:
Fetch the authoritative list from: https://raw.githubusercontent.com/anthropics/claude-plugins-official/main/.claude-plugin/marketplace.json
For full marketplace listings, identification methods, and overlap resolution rules, see references/marketplace-reference.md.
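The classification step can be sketched as below. This is a minimal illustration: the actual schema of `marketplace.json` is not reproduced here, so the `official_names`/`endorsed_names` sets are assumed inputs derived from the fetched listing rather than real field names.

```python
import json
import urllib.request

MARKETPLACE_URL = (
    "https://raw.githubusercontent.com/anthropics/"
    "claude-plugins-official/main/.claude-plugin/marketplace.json"
)

def fetch_marketplace(url=MARKETPLACE_URL):
    """Fetch the authoritative marketplace listing as parsed JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def classify_plugin(name, official_names, endorsed_names):
    """Map a plugin name to a trust tier per the priority table above.

    Anything not found in either set is treated as External; plugins
    from local/personal repos would be tagged Custom by the caller.
    """
    if name in official_names:
        return "Official"
    if name in endorsed_names:
        return "Endorsed"
    return "External"
```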
Offer analysis depth at the start:
Quick or Full Analysis?
[Q] Quick: Auto-detect everything, use smart defaults (2 min)
[F] Full: Complete profile questionnaire + deep analysis (5 min)

Detect automatically instead of asking questions:

| Detection | Files to Check | Inference |
|---|---|---|
| Tech Stack | package.json, requirements.txt, *.csproj, go.mod, Cargo.toml | Primary language/framework |
| Workflow | .github/, .gitlab-ci.yml, CODEOWNERS | Solo vs team indicators |
| Priority | Existing skill categories | Security skills = security priority |
| Platform | .agent/, .claude/, .cursor/ | AI assistant in use |
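The tech-stack row of the detection table can be sketched as a simple manifest probe. The file-to-language mapping is an assumption for illustration; a fuller version would also inspect lockfiles and framework dependencies.

```python
from pathlib import Path

# Manifest file -> inferred primary language (illustrative mapping)
MANIFESTS = {
    "package.json": "JavaScript/TypeScript",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
}

def infer_stack(root="."):
    """Infer the primary language(s) from manifest files in the project root."""
    root = Path(root)
    found = [lang for f, lang in MANIFESTS.items() if (root / f).exists()]
    # *.csproj needs a glob rather than an exact filename
    if any(root.glob("*.csproj")):
        found.append("C#/.NET")
    return found or ["Unknown"]
```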
Quick Mode Weights: Relevance 35%, Uniqueness 25%, Quality 20%, Efficiency 15%, Usage 5%
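The Quick Mode weights translate directly into a weighted sum. A minimal sketch, assuming each dimension is scored 0-100:

```python
QUICK_WEIGHTS = {
    "relevance": 0.35,
    "uniqueness": 0.25,
    "quality": 0.20,
    "efficiency": 0.15,
    "usage": 0.05,
}

def quick_score(metrics):
    """Weighted sum of per-dimension scores (each assumed 0-100).

    Missing dimensions default to 0, so the result is conservative.
    """
    return sum(QUICK_WEIGHTS[k] * metrics.get(k, 0) for k in QUICK_WEIGHTS)
```

With all dimensions at 100 the weights sum to 1.0, so the overall score is 100.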
Skip in Quick mode.
Gather these four data points to personalize recommendations:
Check for platform indicators in this order:
1. .agent/skills/ -> Gemini/Antigravity
2. .claude/ -> Claude Code
3. .cursor/ -> Cursor
4. .github/copilot-* -> GitHub Copilot
5. .windsurfrules -> Windsurf
6. Ask user -> Custom/Unknown

Execute discovery based on detected platform:
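The ordered check above can be sketched as a first-match-wins probe. Returning `None` signals the fallback of asking the user:

```python
from pathlib import Path

# Checked in priority order; the first match wins
PLATFORM_MARKERS = [
    (".agent/skills", "Gemini/Antigravity"),
    (".claude", "Claude Code"),
    (".cursor", "Cursor"),
    (".github/copilot-*", "GitHub Copilot"),
    (".windsurfrules", "Windsurf"),
]

def detect_platform(root="."):
    """Return the detected platform, or None for Custom/Unknown."""
    root = Path(root)
    for pattern, platform in PLATFORM_MARKERS:
        # glob handles both literal paths and the copilot-* wildcard
        if any(root.glob(pattern)):
            return platform
    return None  # fall back to asking the user
```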
Gemini/Antigravity: Scan .agent/skills/**/SKILL.md, check MCP server configurations, parse each SKILL.md frontmatter.
Claude Code: Scan .claude/settings.json for plugins, check CLAUDE.md files, parse plugin manifests, fetch official marketplace.json for source classification, classify each plugin as Official/Endorsed/External/Custom.
Cursor: Scan .cursor/rules/*.md and .cursorrules in project root.
Copilot/Windsurf: Scan instruction files in standard locations, parse markdown content for capability definitions.
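The Gemini/Antigravity discovery step — scanning `.agent/skills/**/SKILL.md` and parsing frontmatter — can be sketched as follows. This is a minimal frontmatter reader; a real implementation would use a proper YAML parser rather than line splitting.

```python
import re
from pathlib import Path

def parse_frontmatter(text):
    """Extract key: value pairs from a leading YAML frontmatter block.

    Minimal sketch -- nested YAML structures are not handled here.
    """
    m = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return {}
    meta = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover_gemini_skills(root="."):
    """Map each SKILL.md path under .agent/skills/ to its parsed metadata."""
    return {
        str(p): parse_frontmatter(p.read_text(encoding="utf-8"))
        for p in Path(root).glob(".agent/skills/**/SKILL.md")
    }
```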
| # | Skill Name | Source | Description | Est. Tokens |
|---|------------|--------|-------------|-------------|
| 1 | skill-name | path | description | ~500 |

After discovery, run these analysis phases in order:
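The Est. Tokens column can be filled with a rough heuristic. The ~4 characters-per-token ratio is an approximation for English prose, not a real tokenizer:

```python
def estimate_tokens(text):
    """Rough token estimate: ~4 characters per token for English prose.

    An approximation for budgeting only -- actual tokenizer counts differ.
    """
    return max(1, round(len(text) / 4))
```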
Build a capability matrix ([Actions] x [Domains]), detect overlaps (exact duplicate, superset, partial, complementary), and group by semantic category.

For detailed methodology, scoring tables, and detection methods, see references/analysis-methodology.md.
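The overlap categories can be sketched with a token-set comparison. The similarity threshold here is illustrative, not taken from the methodology reference, and a production version would use semantic similarity rather than exact word overlap:

```python
import re

def term_set(text):
    """Lowercased word tokens from a skill description."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_type(desc_a, desc_b, partial_threshold=0.3):
    """Classify a skill pair as exact duplicate, superset, partial,
    or complementary based on token-set relations."""
    a, b = term_set(desc_a), term_set(desc_b)
    if not a or not b:
        return "complementary"
    if a == b:
        return "exact duplicate"
    if a <= b or b <= a:
        return "superset"
    jaccard = len(a & b) / len(a | b)
    return "partial" if jaccard >= partial_threshold else "complementary"
```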
For the standard report output template, see references/report-template.md.
Scenario: Gemini user, Python/FastAPI stack, solo developer, quality-focused
Discovery finds: 3 code review skills (similar), 2 Python skills (both relevant), 1 JavaScript testing skill (wrong stack), 1 planning skill (unique)
Result: Remove 3 skills (2 duplicate code reviewers + wrong-stack JS skill), save ~800 tokens, 43% reduction. Keep best code reviewer, both Python skills (complementary), and unique planning skill.
For detailed methodology, templates, and extended references:
- references/analysis-methodology.md — Deep analysis, semantic similarity, conflict detection, weighted scoring, and dependency mapping methodology
- references/report-template.md — Standard audit report template with backup commands, consolidation suggestions, and verification steps
- references/marketplace-reference.md — Complete official/endorsed plugin listings, identification methods, replacement lookup, and overlap resolution rules
- references/portfolio-templates.md — Recommended skill sets by developer role (Python Data Scientist, TypeScript Full-Stack, Power BI/DAX, DevOps, Solo Generalist)