Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language processing on text.
Score: 69
Quality: 62% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/azure-ai-textanalytics-py/SKILL.md`

Quality
Discovery: 60%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at listing specific capabilities (sentiment analysis, entity recognition, PII, etc.) and correctly names the Azure AI Text Analytics SDK. However, the 'Use when' clause is too generic ('natural language processing on text'), which weakens both completeness and distinctiveness. Adding more specific trigger scenarios and user-facing language would significantly improve skill selection accuracy.
Suggestions
- Replace the vague 'Use for natural language processing on text' with specific trigger scenarios, e.g., 'Use when the user needs to analyze sentiment, detect PII, extract key phrases, recognize entities, or detect language in text using Azure AI services.'
- Add common user-facing variations and file/context triggers such as 'customer reviews', 'medical text', 'text classification', 'Azure Cognitive Services', or 'Text Analytics API' to improve trigger term coverage.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: sentiment analysis, entity recognition, key phrases, language detection, PII detection, and healthcare NLP. These are clearly defined capabilities. | 3 / 3 |
| Completeness | The 'what' is well covered with specific capabilities, but the 'when' clause ('Use for natural language processing on text') is too vague and generic to serve as an effective trigger. It doesn't specify scenarios like 'when the user asks to analyze sentiment of reviews' or 'when detecting PII in documents'. | 2 / 3 |
| Trigger Term Quality | Includes good technical terms like 'sentiment analysis', 'entity recognition', 'PII', and 'healthcare NLP', but misses common user variations like 'detect sentiment', 'extract entities', 'text mining', 'Azure Cognitive Services', or 'Text Analytics API'. The 'Use for natural language processing on text' trigger is overly broad. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Azure AI Text Analytics SDK' anchors it to a specific platform, which helps distinctiveness. However, the broad 'natural language processing on text' trigger could overlap with other NLP-related skills (e.g., spaCy, Hugging Face, or general text processing skills). | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid API reference skill with excellent actionability — every operation has complete, executable code examples. However, it reads more like comprehensive documentation than a lean skill file; the content could be tightened by moving detailed per-operation examples to a reference file and keeping only quick-start patterns inline. Error handling workflows and validation checkpoints are minimal, which is a gap for batch/long-running operations.
Suggestions
- Split detailed per-operation examples into a separate REFERENCE.md and keep SKILL.md as a concise overview with one quick-start example and links to detailed sections.
- Add an explicit error-handling and validation workflow for batch operations (`begin_analyze_actions`) and long-running operations (healthcare), including polling status checks and partial-failure handling; a sketch of such a workflow follows these suggestions.
- Remove the generic 'When to Use' and 'Limitations' boilerplate sections that don't add SDK-specific value.
- Remove or consolidate the Client Types table, since it only lists one client in two variants; this information is already conveyed in the Async Client section.
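
To make the second suggestion concrete, a batch-workflow section in the skill could look roughly like the sketch below. It assumes `azure-ai-textanalytics` 5.3+, API-key authentication, and hypothetical environment variable names (`AZURE_LANGUAGE_ENDPOINT`, `AZURE_LANGUAGE_KEY`); the actions and documents are illustrative, not taken from the reviewed skill.

```python
import os

from azure.ai.textanalytics import (
    ExtractKeyPhrasesAction,
    ExtractKeyPhrasesResult,
    RecognizeEntitiesAction,
    RecognizeEntitiesResult,
    TextAnalyticsClient,
)
from azure.core.credentials import AzureKeyCredential

# Hypothetical environment variable names; substitute your own configuration.
client = TextAnalyticsClient(
    endpoint=os.environ["AZURE_LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_LANGUAGE_KEY"]),
)

documents = [
    "The food was great but the service was slow.",
    "Contoso's new headquarters opened in Redmond last week.",
]

# begin_analyze_actions is a long-running operation: check the poller status
# before blocking on result() so that failures surface early.
poller = client.begin_analyze_actions(
    documents,
    actions=[RecognizeEntitiesAction(), ExtractKeyPhrasesAction()],
)
print("operation status:", poller.status())

# result() yields one list of action results per input document. Individual
# documents can fail even when the overall operation succeeds, so check
# is_error on every result (partial-failure handling).
for doc, action_results in zip(documents, poller.result()):
    for result in action_results:
        if result.is_error:
            print(f"document failed: {result.error.code} - {result.error.message}")
        elif isinstance(result, RecognizeEntitiesResult):
            print(doc, "->", [entity.text for entity in result.entities])
        elif isinstance(result, ExtractKeyPhrasesResult):
            print(doc, "->", result.key_phrases)
```

The same status-then-result pattern would carry over to the healthcare long-running operation (`begin_analyze_healthcare_entities`).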
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable examples, but includes some unnecessary elements, such as the boilerplate 'When to Use' and 'Limitations' sections that add no value, and the Client Types table is redundant (only one client). Some examples could be more compact. | 2 / 3 |
| Actionability | Every section provides fully executable, copy-paste-ready Python code with concrete examples. Authentication, each API operation, batch processing, and async usage all have complete, runnable code snippets with realistic sample data. | 3 / 3 |
| Workflow Clarity | Individual operations are clear, but there is no validation or error-handling workflow beyond checking `doc.is_error`. The batch operations section lacks guidance on handling partial failures or polling status, and there is no explicit workflow for setting up credentials or verifying connectivity (a setup sketch follows this table). | 2 / 3 |
| Progressive Disclosure | The content is well structured with clear section headers and a useful reference table, but it is a long monolithic file (~200 lines of examples) that could benefit from splitting detailed examples into a separate reference file while keeping SKILL.md as a concise overview with links. | 2 / 3 |
| Total | | 9 / 12 Passed |
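
As one way to close the credential-setup and connectivity gap noted under Workflow Clarity, the skill could open with a small verify-then-validate pattern along these lines. This is a sketch under assumptions: the environment variable names are invented, and the "connectivity check" is simply one cheap `detect_language` call rather than an official health probe.

```python
import os

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError

# Hypothetical environment variable names; substitute your own configuration.
client = TextAnalyticsClient(
    endpoint=os.environ["AZURE_LANGUAGE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_LANGUAGE_KEY"]),
)

# Fail fast if the key or endpoint is wrong: one tiny request up front.
try:
    client.detect_language(["connectivity check"])
except ClientAuthenticationError:
    raise SystemExit("Authentication failed: check the key and endpoint.")
except HttpResponseError as exc:
    raise SystemExit(f"Service call failed: {exc.message}")

# Per-document validation: results come back in input order, and a bad
# document does not raise, so every result must be checked with is_error.
docs = ["I love this product!", ""]  # the empty string comes back as a DocumentError
for doc, result in zip(docs, client.analyze_sentiment(docs)):
    if result.is_error:
        print(f"skipped ({result.error.code}): {result.error.message}")
    else:
        print(doc, "->", result.sentiment)
```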
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 10 / 11 checks passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |