Generate and analyze AI Bill of Materials (AIBOM) for Python projects using AI/ML components. Identifies AI models, datasets, tools, and frameworks for security and compliance tracking.

Use this skill when:
- User asks to scan for AI components
- User wants to know what AI models a project uses
- User mentions "AI BOM", "AI inventory", or "ML security"
- User is working with Python AI/ML projects (PyTorch, TensorFlow, HuggingFace)
- User needs AI component compliance documentation
Evals: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./command_directives/synchronous_remediation/skills/ai-inventory/SKILL.md`

Quality — 63%
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly communicates specific capabilities, includes rich trigger terms users would naturally use, and explicitly defines when the skill should be selected. The description occupies a distinct niche (AIBOM for Python AI/ML projects) that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple concrete actions: 'Generate and analyze AI Bill of Materials', 'Identifies AI models, datasets, tools, and frameworks', and specifies the purpose 'security and compliance tracking'. These are specific, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (generate/analyze AIBOMs, identify AI models/datasets/tools/frameworks) and 'when' with an explicit 'Use this skill when:' clause listing five specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'AI BOM', 'AI inventory', 'ML security', 'AI components', 'AI models', 'Python AI/ML projects', and specific framework names like 'PyTorch, TensorFlow, HuggingFace'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche — AI Bill of Materials is a very specific domain. The combination of AIBOM generation, Python AI/ML focus, and compliance tracking makes it unlikely to conflict with general code analysis or documentation skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill wraps a single MCP tool call (`mcp_snyk_snyk_aibom`) in an excessive amount of general AI governance knowledge that Claude already possesses. The core actionable content (tool invocation with path and optional output file) could be conveyed in ~20 lines. The report templates, risk assessment frameworks, and compliance checklists add significant token cost without proportional value, and should either be drastically condensed or moved to separate reference files.
Suggestions
Reduce the skill to Quick Start + tool parameters + error handling + a brief note on output format. Move report templates, risk assessment guidance, and compliance checklists to separate reference files or remove them entirely since Claude can generate these from general knowledge.
Remove explanations of concepts Claude already knows: license risk levels, PII concerns, prompt injection, model extraction, bias testing, and EU AI Act categories are all general knowledge that waste tokens.
Split the compliance report template and risk assessment framework into separate referenced files (e.g., COMPLIANCE_TEMPLATE.md, RISK_ASSESSMENT.md) to improve progressive disclosure.
Make validation steps concrete — e.g., specify what JSON keys to check for in the AIBOM output rather than just saying 'verify the returned JSON is valid.'
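A concrete validation step could look like the following sketch. It assumes the AIBOM output is CycloneDX-style JSON; the `bomFormat`, `specVersion`, and `components` keys are an assumption about the tool's output format, to be adjusted against what `mcp_snyk_snyk_aibom` actually returns:

```python
import json

# Keys assumed for a CycloneDX-style AIBOM document (hypothetical —
# verify against the tool's real output before adopting).
REQUIRED_KEYS = {"bomFormat", "specVersion", "components"}

def validate_aibom(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the document looks usable."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(doc, dict):
        return ["top-level value is not a JSON object"]
    problems = []
    missing = REQUIRED_KEYS - doc.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if "components" in doc and not isinstance(doc["components"], list):
        problems.append("'components' is not a list")
    return problems

sample = '{"bomFormat": "CycloneDX", "specVersion": "1.5", "components": []}'
print(validate_aibom(sample))  # []
```

Checks of this shape turn "verify the returned JSON is valid" into something an agent can actually execute and report on.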
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is excessively verbose at ~200+ lines. It explains concepts Claude already knows (what AI models are, what license risk levels mean, what PII is), includes lengthy report templates that could be summarized in a few lines, and provides extensive risk assessment guidance (prompt injection, model extraction, bias) that is general knowledge rather than tool-specific instruction. The compliance report template alone is ~30 lines of boilerplate. | 1 / 3 |
| Actionability | The core action is clear — call `mcp_snyk_snyk_aibom` with a path — and the tool invocation syntax is concrete and executable. However, much of the surrounding content (risk assessment, compliance documentation, report templates) is descriptive guidance rather than executable steps, and the actual tool has only two parameters, making most of the skill's bulk non-actionable filler. | 2 / 3 |
| Workflow Clarity | The 5-phase workflow is clearly sequenced with numbered steps, and there are some validation checkpoints (Step 2.2 validates output before proceeding). However, the validation steps are vague ('verify the returned JSON is valid') without concrete checks, and error recovery is limited to generic advice like 'retry after a few minutes.' The workflow is over-engineered for what is essentially a single tool call followed by analysis. | 2 / 3 |
| Progressive Disclosure | Everything is in a single monolithic file with no references to external documents. The report templates, risk assessment frameworks, compliance checklists, and use cases could all be split into separate reference files. The content reads as a wall of text that forces Claude to process ~200 lines when the core skill is a single tool call. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.