Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill azure-ai-ml-py79
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 100%

↑ 1.06x agent success when using this skill
Discovery
57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description establishes a clear domain (Azure ML SDK v2 for Python) and lists relevant capability areas, making it distinctive. However, it lacks concrete action verbs and relies on category nouns rather than specific operations. The trigger guidance exists but could be strengthened with more natural user language variations.
Suggestions
Replace category nouns with concrete actions: 'Create and manage ML workspaces, submit training jobs, register and deploy models, manage datasets and compute resources, build ML pipelines'
Expand trigger terms to include common variations: 'Azure ML', 'AzureML', 'machine learning on Azure', 'model training', 'model deployment', 'MLOps'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure ML SDK v2) and lists several areas (workspaces, jobs, models, datasets, compute, pipelines), but these are categories rather than concrete actions like 'create', 'deploy', or 'train'. | 2 / 3 |
| Completeness | Has a 'Use for...' clause that partially addresses when to use it, but it lists categories rather than explicit trigger scenarios. The 'when' guidance is present but weak—it doesn't specify user-facing triggers like 'when user asks about training models on Azure'. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'ML', 'workspaces', 'jobs', 'models', 'datasets', 'compute', 'pipelines', but misses common variations users might say like 'Azure ML', 'AzureML', 'machine learning', 'training', 'deployment', or 'MLOps'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to Azure Machine Learning SDK v2 specifically, which is a distinct niche. The combination of 'Azure', 'ML SDK v2', and 'Python' makes it unlikely to conflict with generic ML skills or other cloud provider skills. | 3 / 3 |
| Total | | 9 / 12 Passed |
Implementation
79%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong reference skill with excellent actionability and conciseness: every code example is executable and the content respects Claude's intelligence. The main weaknesses are the lack of validation checkpoints for multi-step operations (such as job submission → monitoring → model registration) and the monolithic structure, which could benefit from splitting advanced content into separate files.
Suggestions
Add explicit validation/verification steps for multi-step workflows, especially around job submission (e.g., check job status before streaming, verify job succeeded before registering model)
Consider splitting the operations reference table and best practices into a separate REFERENCE.md file to improve progressive disclosure
Add a feedback loop example for job failures (e.g., how to check job.status, handle failures, and retry)
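The validation-checkpoint and feedback-loop suggestions above could be sketched as follows. The helper names (`submit_and_register`, `should_register`) are hypothetical, but the `MLClient` operations used (`jobs.create_or_update`, `jobs.stream`, `jobs.get`, `models.create_or_update`) are standard azure-ai-ml SDK v2 calls:

```python
def should_register(status: str) -> bool:
    # Only a job that actually reached "Completed" should feed model registration.
    return status == "Completed"

def submit_and_register(ml_client, job, model_name: str):
    # Deferred import keeps the status-check helper dependency-free.
    from azure.ai.ml.entities import Model

    submitted = ml_client.jobs.create_or_update(job)   # submit the job
    ml_client.jobs.stream(submitted.name)              # block until a terminal state
    final = ml_client.jobs.get(submitted.name)         # re-fetch to read the final status

    if not should_register(final.status):
        # Feedback loop: surface the failure instead of silently registering a bad model.
        raise RuntimeError(f"Job {final.name} ended as {final.status}; skipping registration")

    model = Model(
        path=f"azureml://jobs/{final.name}/outputs/artifacts/paths/model/",
        name=model_name,
        type="custom_model",
    )
    return ml_client.models.create_or_update(model)
```

A retry wrapper around `submit_and_register` would complete the feedback loop the review asks for; the key point is re-fetching the job after streaming, since `stream` ends on any terminal state, not only success.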
| Dimension | Reasoning | Score |
|---|---|---|
Conciseness | The content is lean and efficient, providing executable code examples without unnecessary explanations. It assumes Claude knows what Azure ML is and jumps straight to actionable patterns. | 3 / 3 |
Actionability | Every section provides complete, copy-paste ready Python code with proper imports. Examples cover the full range of common operations with concrete parameter values. | 3 / 3 |
Workflow Clarity | Individual operations are clear, but multi-step workflows lack explicit validation checkpoints. The pipeline section shows steps but doesn't include error handling or verification between job submission and completion. | 2 / 3 |
Progressive Disclosure | Content is well-organized with clear sections, but the file is quite long (~200 lines) with no references to external files for advanced topics. The operations table and best practices could be separate reference files. | 2 / 3 |
Total | 10 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
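One way to clear the frontmatter_unknown_keys warning, assuming the skill spec accepts a `metadata` mapping (the warning text suggests it does). The key names below are illustrative, not taken from the actual skill:

```yaml
---
name: azure-ai-ml-py
description: Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.
metadata:
  maintainer: sickn33   # hypothetical example of a formerly top-level key the validator didn't recognize
---
```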
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.