Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video.
Install with the Tessl CLI:

`npx tessl i github:boisenoise/skills-collections --skill azure-ai-contentunderstanding-py71`
Does it follow best practices?
If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

`npx tessl skill review --optimize ./path/to/skill`
Discovery (50%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the SDK and general capability but lacks the specificity and explicit trigger guidance needed for reliable skill selection. It covers multiple modalities which is good for scope, but the generic phrasing could cause conflicts with more specialized skills. Adding concrete actions and natural user trigger terms would significantly improve selection accuracy.
Suggestions
- Add specific concrete actions for each modality (e.g. 'extract text from scanned documents, transcribe audio files, analyze image content, extract frames from video').
- Include natural trigger terms users would say: 'OCR', 'transcription', 'image recognition', and file extensions such as '.pdf', '.mp4', '.wav', '.png'.
- Expand the 'Use for...' clause into 'Use when...' with explicit scenarios, such as 'when the user needs to extract content from media files using Azure AI services'.
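Taken together, these suggestions might yield frontmatter along the following lines (a sketch only; the exact field names and wording are assumptions, not the skill's actual metadata):

```yaml
---
name: azure-ai-contentunderstanding-py71
description: >
  Azure AI Content Understanding SDK for Python. Use when the user needs to
  extract text from scanned documents or PDFs (OCR), transcribe audio
  (.wav, .mp3), analyze image content (.png, .jpg), or extract fields and
  frames from video (.mp4) using Azure AI services.
---
```

A description like this keeps the SDK name for distinctiveness while adding the concrete actions, trigger terms, and file extensions the review asks for.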
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Azure AI Content Understanding SDK) and a general action (multimodal content extraction), but doesn't list specific concrete actions such as 'extract text', 'transcribe audio', 'analyze images', or 'process video frames'. | 2 / 3 |
| Completeness | Has a 'Use for...' clause that addresses when to use it, but the guidance is fairly generic. The 'what' is weak (just 'multimodal content extraction') and the 'when' doesn't provide explicit trigger scenarios or user phrases. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords (documents, images, audio, video, content extraction) but misses common variations users might say, such as 'OCR', 'transcription', 'image analysis', 'video processing', or file extensions like '.mp4', '.wav', '.png'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The Azure AI SDK reference provides some distinctiveness, but 'content extraction from documents, images, audio, and video' is broad enough to potentially conflict with other document processing, image analysis, or media handling skills. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation (79%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted SDK reference skill with excellent actionability and conciseness. The code examples are complete and immediately usable. However, it lacks error handling guidance and validation checkpoints for long-running operations, and the content could benefit from progressive disclosure to separate quick-start from advanced usage.
Suggestions
- Add error handling examples for polling failures and common API errors (e.g. invalid URLs, unsupported formats).
- Include a validation checkpoint pattern: check poller.status() before calling result(), and handle OperationFailed states.
- Consider splitting advanced topics (custom analyzers, async patterns) into separate referenced files to improve progressive disclosure.
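The checkpoint suggested above can be sketched as a small helper around the standard Azure SDK poller interface (azure.core.polling.LROPoller exposes wait(), status(), and result(); how the poller is obtained, e.g. from a begin_analyze call, is an assumption here, not the skill's documented API):

```python
# Sketch of a validation checkpoint for long-running analyze operations.
# Works with any object exposing the LROPoller-style interface.

def get_analysis_result(poller):
    """Block until the operation reaches a terminal state, then validate
    the status before reading the result."""
    poller.wait()                       # wait for a terminal state
    status = poller.status()
    if status.lower() != "succeeded":   # e.g. "Failed" or "Canceled"
        raise RuntimeError(f"Analysis did not succeed: status={status!r}")
    return poller.result()
```

In practice you would also wrap the begin_* call itself in a try/except for azure.core.exceptions.HttpResponseError to surface invalid URLs or unsupported input formats early.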
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, providing direct code examples without explaining basic concepts Claude already knows. Every section serves a clear purpose with minimal redundancy. | 3 / 3 |
| Actionability | All code examples are complete, executable, and copy-paste ready, with proper imports, environment variable handling, and realistic usage patterns. The examples cover the full workflow from authentication to result processing. | 3 / 3 |
| Workflow Clarity | The 'Core Workflow' section outlines the 3-step async pattern clearly, but lacks explicit validation/error handling checkpoints. For long-running operations that can fail, there's no guidance on handling polling failures or validating results. | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and tables, but it's a monolithic file with no references to external documentation for advanced topics like custom analyzer schemas or error handling. The 200+ lines could benefit from splitting detailed examples into separate files. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Validation (90%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 10 / 11 Passed |