Generate and analyze AI Bill of Materials (AIBOM) for Python projects using AI/ML components. Identifies AI models, datasets, tools, and frameworks for security and compliance tracking.

Use this skill when:
- User asks to scan for AI components
- User wants to know what AI models a project uses
- User mentions "AI BOM", "AI inventory", or "ML security"
- User is working with Python AI/ML projects (PyTorch, TensorFlow, HuggingFace)
- User needs AI component compliance documentation
Generate and analyze AI Bill of Materials (AIBOM) for Python projects to track AI models, datasets, and ML frameworks for security, compliance, and governance.
Core Principle: Know what AI components are in your software.
Note: This is an experimental feature. Currently supports Python projects only.
# Step 1: Generate AIBOM for the project
mcp_snyk_snyk_aibom(path="/absolute/path/to/project")
# Step 2: (Optional) Save AIBOM to file for documentation
mcp_snyk_snyk_aibom(
    path="/absolute/path/to/project",
    json_file_output="/absolute/path/to/output/aibom.json"
)
# Step 3: Verify the returned JSON contains component entries before proceeding
# Step 4: Summarize findings and flag license/risk issues

## Phase 1: Project Validation

Goal: Ensure the project is suitable for AI BOM generation.
Check for Python project indicators: requirements.txt, setup.py, pyproject.toml, Pipfile, or .py files.
Error — Not a Python Project: If no Python indicators are found, stop and report:
- Verify the path contains Python files
- Check for requirements.txt or pyproject.toml
- This feature only supports Python projects
Scan dependency files for known AI/ML packages — common examples include torch, tensorflow, keras, transformers, datasets, scikit-learn, jax, openai, langchain, mlflow, and wandb. This list is illustrative; use judgment for other AI/ML packages encountered.
If no AI components detected:
## AI Inventory Result
**Project**: /path/to/project
**Status**: No AI components detected
This project does not appear to use AI/ML frameworks. AI BOM generation is not applicable.

## Phase 2: AIBOM Generation

Goal: Create a comprehensive AI Bill of Materials.
Invoke the mcp_snyk_snyk_aibom tool with the absolute path to the Python project:
mcp_snyk_snyk_aibom(path="/absolute/path/to/project")

Error — Network Error: If the tool cannot connect, report:
- Check internet connection and firewall (HTTPS must be allowed)
- Retry after a few minutes
Error — Experimental Feature Not Enabled: If access is denied, report:
- Contact Snyk support for experimental access
- Check organization settings and verify CLI version supports AIBOM
Before proceeding, verify the returned JSON is valid and contains at least one component entry. If the response is empty or malformed, report the error and do not continue to Phase 3.
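A minimal sketch of that validation step, assuming the tool returns CycloneDX JSON with a top-level `components` array (field names follow the CycloneDX JSON schema; the exact response shape from the tool may differ):

```python
import json

def validate_aibom(raw):
    """Sanity-check an AIBOM response before Phase 3.

    Parses the JSON and requires at least one component entry;
    raises ValueError on empty or malformed input.
    """
    try:
        bom = json.loads(raw) if isinstance(raw, str) else raw
    except json.JSONDecodeError as exc:
        raise ValueError(f"AIBOM response is not valid JSON: {exc}")
    if bom.get("bomFormat") != "CycloneDX":
        raise ValueError("Unexpected BOM format; expected CycloneDX")
    components = bom.get("components") or []
    if not components:
        raise ValueError("AIBOM contains no components; do not continue to Phase 3")
    return components
```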
To persist the AIBOM as a file for documentation or downstream tooling:
mcp_snyk_snyk_aibom(
    path="/absolute/path/to/project",
    json_file_output="/absolute/path/to/output/aibom.json"
)

## Phase 3: Component Analysis

Goal: Understand and categorize AI components from the validated AIBOM output.
AIBOM identifies five component types: Models, Datasets, Frameworks, Tools, and Services.
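The tally for the Component Summary table can be sketched as below. The mapping from CycloneDX component `type` values to the five categories is an assumption based on the CycloneDX 1.6 component taxonomy; actual AIBOM output may use different values, and services may arrive in a separate `services` array rather than in `components`.

```python
from collections import Counter

# Assumed mapping from CycloneDX component types to the skill's categories
TYPE_TO_CATEGORY = {
    "machine-learning-model": "AI Models",
    "data": "Datasets",
    "framework": "Frameworks",
    "library": "Frameworks",
    "application": "Tools",
}

def summarize_components(components):
    """Tally AIBOM components by category for the summary table."""
    counts = Counter()
    for comp in components:
        counts[TYPE_TO_CATEGORY.get(comp.get("type"), "Other")] += 1
    counts["Total"] = len(components)
    return dict(counts)
```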
Present findings using the structure below, populated with actual scan results:
## AI Component Inventory
**Project**: <project name>
**Scan Date**: <date>
**Format**: CycloneDX v1.6
### Component Summary
| Category | Count |
|-----------|-------|
| AI Models | N |
| Datasets | N |
| Frameworks| N |
| Tools | N |
| **Total** | N |
### AI Models Detected
| Model | Source | License | Risk |
|-------|--------|---------|------|
| <from scan results> | ... | ... | ... |
### Datasets Referenced
| Dataset | Source | License | PII Risk |
|---------|--------|---------|----------|
| <from scan results> | ... | ... | ... |
### Frameworks & Tools
| Component | Version | License |
|-----------|---------|---------|
| <from scan results> | ... | ... |

## Phase 4: Risk Assessment

Goal: Identify potential risks in AI components.
Flag components by risk level: Low (MIT, Apache), Medium (proprietary APIs — review terms of service), High (unknown/unclear licenses or research-only terms that may prohibit commercial use).
Flag datasets or models where data provenance or PII handling is unclear. Recommend: documenting data sources, reviewing PII handling procedures, and verifying data retention policies.
Assess model-specific risks:
- Prompt injection (LLM-based models) — mitigate with input validation
- Model extraction (custom/fine-tuned models) — apply access controls
- Adversarial inputs (vision models) — validate inputs
- Bias/fairness — consider bias testing
## Phase 5: Compliance Documentation

Goal: Create compliance-ready documentation.
## AI Compliance Report
**Project**: <project name>
**Generated**: <date>
**Standard**: EU AI Act / Internal Governance
### AI System Classification
- **Risk Level**: [High/Limited/Minimal]
- **Category**: [Classification based on use case]
### Component Inventory
[Summary from Phase 3]
### License Compliance
- All components licensed: Yes/No
- Commercial use permitted: Yes/No
- Attribution required: [list components]
### Data Governance
- Data sources documented: Yes/No
- PII handling reviewed: Yes/No
- Consent verified: Yes/No
### Model Governance
- Model cards available: Yes/No
- Bias testing completed: Yes/No
- Performance benchmarks: Yes/No
### Approval Status
- [ ] Technical review
- [ ] Legal review
- [ ] Ethics review
- [ ] Deployment approved