
azure-ai-ml-py

Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.

68

Quality: 61%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run.

Security (by Snyk): Passed

No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/azure-ai-ml-py/SKILL.md

Quality

Discovery: 57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (Azure ML SDK v2) and lists relevant resource categories, giving it good distinctiveness. However, it lacks concrete action verbs (e.g., 'create', 'submit', 'deploy') and could improve trigger term coverage with common abbreviations and user-facing scenarios. The 'Use for...' clause is present but functions more as a topic list than explicit trigger guidance.

Suggestions

Replace noun-only list with concrete actions: e.g., 'Create and manage ML workspaces, submit training jobs, register and deploy models, configure compute targets, and build ML pipelines.'

Add common trigger term variations such as 'AzureML', 'AML', 'training', 'deployment', 'endpoints', 'experiments', and 'azure.ai.ml'.

Expand the 'Use for' clause into a proper 'Use when' clause with user-facing scenarios: e.g., 'Use when the user asks about Azure ML SDK v2, submitting training jobs, deploying models to Azure, or building ML pipelines in Python.'
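Putting these suggestions together, a revised frontmatter description might read as follows; the exact wording is illustrative, not prescriptive:

```markdown
---
name: azure-ai-ml-py
description: >
  Azure Machine Learning SDK v2 (azure.ai.ml, AzureML/AML) for Python. Create
  and manage ML workspaces, submit training jobs, register and deploy models,
  configure compute targets, and build ML pipelines. Use when the user asks
  about Azure ML SDK v2, submitting training jobs, deploying models or
  endpoints on Azure, or building ML pipelines in Python.
---
```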

Dimension scores

Specificity: 2 / 3

Names the domain (Azure Machine Learning SDK v2) and lists several resource types (workspaces, jobs, models, datasets, compute, pipelines), but these are more like nouns/categories than concrete actions (e.g., 'create workspace', 'submit training job', 'register model').

Completeness: 2 / 3

Has a 'Use for...' clause that partially addresses 'when', but it reads more like a list of what it covers than explicit trigger guidance (e.g., 'Use when the user asks about Azure ML pipelines or submitting training jobs'). The 'when' is implied rather than clearly articulated with user-facing scenarios.

Trigger Term Quality: 2 / 3

Includes relevant keywords like 'Azure Machine Learning', 'SDK v2', 'ML', 'workspaces', 'jobs', 'models', 'datasets', 'compute', 'pipelines', but misses common user variations like 'AzureML', 'AML', 'training', 'deployment', 'endpoint', 'experiment', or 'ml.azure.com'.

Distinctiveness / Conflict Risk: 3 / 3

The description is clearly scoped to Azure Machine Learning SDK v2 for Python, which is a distinct niche. Terms like 'ML workspaces', 'pipelines', and 'compute' in the Azure ML context are unlikely to conflict with other skills.

Total: 9 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent actionability—nearly every section has executable code. However, it reads more like a cookbook/cheat sheet than a workflow guide, lacking sequencing between operations and validation steps for multi-step processes. The boilerplate sections at the end and some generic best practices dilute an otherwise efficient document.

Suggestions

Add an end-to-end workflow section showing the typical sequence (authenticate → create compute → submit job → stream logs → register model) with explicit validation checkpoints between steps.

Remove the generic 'When to Use' and 'Limitations' boilerplate sections, and trim 'Best Practices' to only non-obvious, SDK-specific guidance.

Define or explain the undefined pipeline components (prep_component, train_component) in the pipeline example, or add a note about how to create components.

Consider splitting detailed sections (pipelines, environments, datastores) into separate referenced files to improve progressive disclosure.

Dimension scores

Conciseness: 2 / 3

The content is mostly efficient with executable code examples, but includes some unnecessary sections like 'Best Practices' with generic advice Claude already knows (e.g., 'use versioning', 'tag resources'), and the boilerplate 'When to Use' and 'Limitations' sections add no value. The operations table is useful but the overall document could be tightened.

Actionability: 3 / 3

Nearly every section provides fully executable, copy-paste ready Python code with correct imports, concrete parameter values, and realistic examples. The operations summary table is a practical quick-reference. Code covers authentication, CRUD operations, jobs, pipelines, and environments comprehensively.

Workflow Clarity: 2 / 3

Individual operations are clear, but multi-step workflows (e.g., create compute → submit job → monitor → register model) lack explicit sequencing and validation checkpoints. The pipeline example references undefined components (prep_component, train_component) without explanation. There are no error handling or feedback loops for operations like job submission or compute creation that can fail.

Progressive Disclosure: 2 / 3

The content is well-organized with clear section headers, but it's a monolithic document (~200 lines of code examples) with no references to external files for detailed topics like pipelines, environments, or advanced configurations. Given the breadth of the SDK, topics like pipeline component definitions or environment YAML specs could be split out.

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys: Warning

Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: sickn33/antigravity-awesome-skills (Reviewed)
