Generate and analyze AI Bill of Materials (AIBOM) for Python projects using AI/ML components. Identifies AI models, datasets, tools, and frameworks for security and compliance tracking.

Use this skill when:

- User asks to scan for AI components
- User wants to know what AI models a project uses
- User mentions "AI BOM", "AI inventory", or "ML security"
- User is working with Python AI/ML projects (PyTorch, TensorFlow, HuggingFace)
- User needs AI component compliance documentation
Quality: 70%. Does it follow best practices?

Impact: Pending. No eval scenarios have been run.

Validation: Passed. No known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./command_directives/synchronous_remediation/skills/ai-inventory/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly communicates specific capabilities, includes rich trigger terms covering natural user language, and explicitly defines when the skill should be selected. The description occupies a distinct niche (AIBOM for Python AI/ML projects) that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions ('Generate and analyze AI Bill of Materials', 'Identifies AI models, datasets, tools, and frameworks') and specifies the purpose: 'security and compliance tracking'. These are specific, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (generate/analyze AIBOMs; identify AI models, datasets, tools, frameworks) and 'when', via an explicit 'Use this skill when:' clause listing five specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of terms users would naturally say: 'AI BOM', 'AI inventory', 'ML security', 'AI components', 'AI models', 'Python AI/ML projects', and specific framework names like 'PyTorch, TensorFlow, HuggingFace'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: AI Bill of Materials is a very specific domain. The combination of AIBOM generation, Python AI/ML focus, and compliance tracking makes conflict with general code-analysis or documentation skills unlikely. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill has a well-structured multi-phase workflow with good validation checkpoints, but is severely bloated. The core skill (call one MCP tool, validate output, summarize) is buried under extensive report templates, risk assessment frameworks, and compliance documentation that should either be in separate files or dramatically condensed. Much of the content explains concepts Claude already understands (license risk levels, PII concerns, common AI/ML packages).
Suggestions
- Reduce the skill to ~50-70 lines by moving the compliance report template, risk assessment criteria, and use cases into separate referenced files (e.g., COMPLIANCE_TEMPLATE.md, RISK_ASSESSMENT.md).
- Remove explanations of concepts Claude already knows (e.g., what license risk levels mean, what PII is, what prompt injection is) and replace them with terse checklists or tables.
- Show how to extract specific fields from the AIBOM JSON response (e.g., `result['components']`) rather than providing generic fill-in-the-blank templates, to make the analysis phases more actionable; a sketch follows this list.
- Cut the illustrative AI/ML package list in Step 1.2; Claude already knows which packages are AI/ML related.
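To make the third suggestion concrete: a minimal sketch of the kind of field extraction the skill could show, assuming the AIBOM follows a CycloneDX-style JSON layout with a top-level `components` array (the exact field names in Snyk's output are an assumption here, not taken from its documented schema):

```python
import json

# Assumption: CycloneDX-style AIBOM with a top-level "components" array;
# field names below are illustrative.
with open("aibom.json") as f:
    aibom = json.load(f)

for component in aibom.get("components", []):
    name = component.get("name", "<unnamed>")
    # CycloneDX component types include e.g. "machine-learning-model"
    ctype = component.get("type", "library")
    licenses = [
        entry.get("license", {}).get("id", "NOASSERTION")
        for entry in component.get("licenses", [])
    ]
    print(f"{name} ({ctype}): {', '.join(licenses) or 'no license declared'}")
```

Pulling named fields like this is what makes an analysis phase checkable, as opposed to a fill-in-the-blank report template.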
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is excessively verbose at ~200+ lines. It explains concepts Claude already knows (what AI models are, what license risk levels mean, what PII is), includes lengthy report templates that could be summarized in a few lines, and provides extensive error-handling guidance that is largely common sense. The compliance report template and risk assessment sections add significant bulk without proportional value. | 1 / 3 |
| Actionability | The core action is clear (call `mcp_snyk_snyk_aibom` with a path; see the sketch after this table) and the tool invocation syntax is concrete. However, much of the content (risk assessment, compliance reporting, model security analysis) is descriptive guidance rather than executable steps, and the report templates are fill-in-the-blank rather than showing how to extract specific fields from the AIBOM JSON output. | 2 / 3 |
| Workflow Clarity | The five-phase workflow is clearly sequenced with explicit validation checkpoints (verify Python project → check AI indicators → generate AIBOM → validate output before proceeding → analyze → assess → document). Error handling is specified at each phase with clear stop conditions, and Step 2.2 explicitly gates progression to Phase 3. | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The compliance report templates, risk assessment frameworks, and use-case descriptions could easily be split into separate reference documents. Everything is inline, making the skill unnecessarily long for the primary task of generating an AIBOM. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
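For context on that core action: inside an agent, the `mcp_snyk_snyk_aibom` tool is invoked natively, but a hedged sketch of calling an MCP tool by name from a standalone Python client, using the open-source `mcp` SDK, may help. The server launch command and the `path` argument name are assumptions, not taken from Snyk documentation:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def generate_aibom(project_path: str):
    # Assumption: the Snyk MCP server is launched via the Snyk CLI;
    # adjust the command to however the server runs locally.
    server = StdioServerParameters(command="snyk", args=["mcp", "-t", "stdio"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool name as cited in the review; the argument name
            # "path" is an assumption.
            return await session.call_tool(
                "mcp_snyk_snyk_aibom", arguments={"path": project_path}
            )

result = asyncio.run(generate_aibom("./my-ml-project"))
```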
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure: no warnings or errors.