Audit and improve skill collections with an 8-dimension scoring framework, duplication detection, remediation planning, and CI quality gates; use when evaluating skill quality, generating remediation plans, validating report format, or enforcing repository-wide skill artifact conventions.
# Validation for Skill Structure
Supplementary validation checks for skills intended for Tessl registry submission. These checks extend the core 8-dimension framework with agent-agnostic and performance-focused evaluations.
**Use after:** Core 8-dimension evaluation (≥108 points required)
**Target:** 100% Tessl compliance for registry acceptance
Tessl focuses on performance-evaluated, agent-agnostic skills that provide measurable effectiveness improvements. This framework adds three supplementary validation areas:
## 1. Agent-Agnostic Validation

**Purpose:** Ensure skills work across different AI assistant platforms without agent-specific dependencies.
Check `allowed-tools` frontmatter for agent-specific tools:

```yaml
# Bad - Claude Code specific
allowed-tools: [claude-artifact, claude-codebase]

# Good - Universal tools
allowed-tools: [bash, edit, read, write]
```

**Auto-check pattern:** Flag tools containing agent names (`claude-`, `cursor-`, `openai-`, etc.)
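This auto-check can be sketched as a small POSIX shell helper. `check_allowed_tools` is a hypothetical function, not part of the auditor's scripts, and the prefix list is illustrative:

```shell
#!/bin/sh
# Sketch: flag agent-prefixed tool names in an allowed-tools frontmatter line.
# check_allowed_tools is a hypothetical helper; extend the prefix list as needed.
check_allowed_tools() {
  # $1: the frontmatter line, e.g. "allowed-tools: [bash, edit]"
  flagged=$(printf '%s\n' "$1" \
    | grep -oE '(claude|cursor|openai|copilot|gemini)-[a-z-]+' || true)
  if [ -n "$flagged" ]; then
    echo "FAIL: agent-specific tools: $flagged"
    return 1
  fi
  echo "PASS: allowed-tools is agent-agnostic"
}

# Demo; "|| true" keeps the demo going after the expected FAIL
check_allowed_tools 'allowed-tools: [claude-artifact, claude-codebase]' || true
check_allowed_tools 'allowed-tools: [bash, edit, read, write]'
```

A nonzero return code makes the helper easy to wire into a CI step.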
Scan content for agent-specific references:

```
❌ BAD: "Tell Claude to run the command"
❌ BAD: "Use Cursor's autocomplete feature"
❌ BAD: "In OpenAI's interface, click..."

✅ GOOD: "Run the command using your bash tool"
✅ GOOD: "Use your code completion capabilities"
✅ GOOD: "Execute the following workflow"
```

**Auto-check pattern:** `/\b(claude|cursor|openai|copilot|gemini|chatgpt)\b/i`
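Applied to files on disk, the same regex becomes a grep scan. `scan_agent_refs` and the demo file are hypothetical, shown only to illustrate the check:

```shell
#!/bin/sh
# Sketch: scan skill content for agent-specific references using the
# auto-check regex above. scan_agent_refs is a hypothetical helper.
scan_agent_refs() {
  # $1: file or directory to scan; any hit fails the check
  if grep -rniE '\b(claude|cursor|openai|copilot|gemini|chatgpt)\b' "$1"; then
    echo "FAIL: agent-specific references found"
    return 1
  fi
  echo "PASS: no agent-specific references"
}

# Demo against a throwaway file; "|| true" absorbs the expected FAIL
tmp=$(mktemp)
echo "Tell Claude to run the command" > "$tmp"
scan_agent_refs "$tmp" || true
rm -f "$tmp"
```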
Avoid assuming specific agent capabilities:

```
❌ BAD: "Since you can't execute code directly..."
❌ BAD: "Use your web browsing to..."
❌ BAD: "Your image generation will..."

✅ GOOD: "If code execution tools are available..."
✅ GOOD: "When web access is supported..."
✅ GOOD: "For agents with image capabilities..."
```

## 2. Performance Metrics Validation

**Purpose:** Ensure skills define measurable effectiveness improvements that can be evaluated.
Skills must include quantifiable outcomes:

```markdown
## Success Metrics

This skill provides:
- ✅ 85% reduction in configuration errors
- ✅ 3x faster setup time (5 minutes vs 15 minutes)
- ✅ 100% compliance with security standards
```

Show clear improvement examples:
```markdown
## Effectiveness Examples

### Before Using This Skill
- Manual setup takes 30+ commands
- 40% failure rate on first attempt
- Inconsistent configuration across environments

### After Using This Skill
- One-command deployment
- <5% failure rate
- Standardized, reproducible environments
```

Define what "effective use" looks like:
```markdown
## Expected Outcomes

When applied correctly, this skill delivers:
- Time savings: 60-90% reduction in task duration
- Quality improvement: 95%+ adherence to best practices
- Error reduction: <10% incident rate vs 30% baseline
```

## 3. Cross-Platform Compatibility Validation

**Purpose:** Validate that skill instructions work across different development environments and agent platforms.
Verify all referenced tools are widely supported:

```
✅ GOOD: bash, read, write, edit, glob, grep
✅ GOOD: Standard CLI tools (git, npm, docker)
✅ GOOD: Common development commands

❌ BAD: Agent-specific tools
❌ BAD: Proprietary extensions
❌ BAD: Platform-locked features
```

Ensure shell commands work across operating systems:
```sh
# Bad - macOS specific
brew install package

# Good - Cross-platform with options
# Install using your package manager:
# - macOS: brew install package
# - Ubuntu: apt install package
# - Windows: choco install package
```

Use portable path conventions:
```
❌ BAD: /usr/local/bin/tool (Unix-specific)
❌ BAD: C:\Program Files\tool (Windows-specific)

✅ GOOD: Add tool to your PATH
✅ GOOD: $(which tool) or equivalent
```

Avoid assuming specific agent capabilities:
```
❌ BAD: "Use your built-in web scraping"
❌ BAD: "Generate an image with DALL-E"
❌ BAD: "Create a diagram with your drawing tools"

✅ GOOD: "If web scraping tools are available..."
✅ GOOD: "Using image generation capabilities..."
✅ GOOD: "With diagram creation tools..."
```

## Tessl Submission Workflow

When preparing skills for Tessl submission, integrate with the existing skill-quality-auditor workflow:
```sh
# Standard evaluation first
sh skills/skill-quality-auditor/scripts/evaluate.sh <skill-name> --json

# Then apply tessl compliance checks
sh skills/skill-quality-auditor/scripts/tessl-compliance-check.sh <skill-name>
```

**Agent-Agnostic Check:**
```sh
# Check for agent-specific terms
grep -ri "claude\|cursor\|openai\|copilot\|gemini" skills/<skill>/
```

**Tool Compatibility Check:**
```sh
# Extract and validate allowed-tools from the SKILL.md frontmatter
yq --front-matter=extract '.allowed-tools[]?' skills/<skill>/SKILL.md | grep -E "(claude|cursor|openai)-"
```

**Performance Metrics Check:**
```sh
# Look for quantified outcomes (recursive over the skill directory)
grep -rE "[0-9]+(%|x|times|\s(seconds|minutes|hours)|reduction|improvement)" skills/<skill>/
```

This framework supplements, not replaces, the 8-dimension evaluation:
| Check Type | When to Apply | Pass Criteria |
|---|---|---|
| Core 8-Dimension | Always | ≥108 points (A-grade) |
| Agent-Agnostic | Tessl submission | No agent-specific deps |
| Performance Metrics | Tessl submission | Quantified effectiveness |
| Cross-Platform | Tessl submission | Universal compatibility |
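For CI, the supplementary checks can be folded into a single gate that runs after the core evaluation. `tessl_gate` below is a hypothetical helper, and its regexes are abbreviated versions of the automated checks above:

```shell
#!/bin/sh
# Sketch of a combined Tessl gate over one skill directory.
# tessl_gate is hypothetical; wire it into CI after the core evaluation passes.
tessl_gate() {
  dir="$1"; fails=0

  # Agent-agnostic: no agent names anywhere in the skill content
  if grep -rqiE '\b(claude|cursor|openai|copilot|gemini)\b' "$dir"; then
    echo "FAIL agent-agnostic"; fails=$((fails + 1))
  fi

  # Performance metrics: at least one quantified outcome (abbreviated regex)
  if ! grep -rqE '[0-9]+(%|x)' "$dir"; then
    echo "FAIL performance-metrics"; fails=$((fails + 1))
  fi

  if [ "$fails" -eq 0 ]; then
    echo "PASS tessl gate"
  fi
  return "$fails"
}

# Demo with a throwaway skill directory
tmp=$(mktemp -d)
printf 'Setup time drops 3x with this skill.\n' > "$tmp/SKILL.md"
tessl_gate "$tmp"
rm -rf "$tmp"
```

The nonzero return count makes the gate usable directly as a CI step exit status.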
## Related Frameworks

- `framework-skill-judge-dimensions.md` - Core 8-dimension framework
- `framework-quality-standards.md` - A-grade requirements

## Install with Tessl CLI

```sh
npx tessl i pantheon-ai/skill-quality-auditor@0.1.4
```