Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language processing on text.
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill azure-ai-textanalytics-py80
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 100%

↑ 1.56x agent success when using this skill
Discovery — 60%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at listing specific capabilities but falls short on trigger guidance. The 'Use when' clause is too generic ('natural language processing on text') and doesn't help an agent distinguish when to use Azure's SDK over other NLP tools. Adding Azure-specific triggers and more natural user language would significantly improve skill-selection accuracy.
Suggestions
Strengthen the 'Use when' clause with specific triggers: 'Use when the user mentions Azure, Text Analytics, cognitive services, or needs cloud-based NLP capabilities'
Add natural language variations users might say: 'analyze sentiment', 'detect language', 'find personal information', 'extract medical terms'
Clarify the Azure-specific context in triggers to distinguish from generic NLP skills: 'Use when Azure SDK or Microsoft cognitive services are preferred or required'
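Taken together, the suggestions above might produce a frontmatter description along these lines (a sketch; the exact wording and key names are illustrative, not the skill's actual metadata):

```yaml
---
name: azure-ai-textanalytics-py80
description: >
  Azure AI Text Analytics SDK for sentiment analysis, entity recognition,
  key phrases, language detection, PII detection, and healthcare NLP.
  Use when the user mentions Azure, Text Analytics, or Microsoft cognitive
  services, or asks to analyze sentiment, detect language, find personal
  information, or extract medical terms using a cloud-based NLP service.
---
```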
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP' — these are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Has a clear 'what' (Azure AI Text Analytics capabilities) but the 'when' clause is weak — 'Use for natural language processing on text' is too generic and doesn't provide explicit trigger guidance for when to select this over other NLP tools. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'sentiment analysis', 'entity recognition', 'PII', but misses common user variations like 'detect emotions', 'find names/places', 'extract entities', 'analyze text', or 'Azure cognitive services'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Specifies 'Azure AI Text Analytics SDK', which helps distinguish it, but 'natural language processing on text' is generic enough to conflict with other NLP skills. The Azure-specific framing helps but isn't reinforced in the trigger clause. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Implementation — 79%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted SDK reference skill with excellent actionability and conciseness. All code examples are executable, and the content respects the agent's intelligence. The main weaknesses are the lack of an explicit error-handling workflow and the monolithic structure, which could benefit from splitting advanced features into separate files.
Suggestions
Add an explicit error handling pattern showing how to check `doc.is_error` and handle failures gracefully in batch operations
Consider splitting healthcare analytics and batch operations into separate reference files (e.g., HEALTHCARE.md, BATCH.md) with links from the main skill
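The first suggestion above could be demonstrated with a pattern like the following. This is a minimal sketch assuming the `azure-ai-textanalytics` 5.x client API; the helper names and the placeholder endpoint/key are illustrative, not part of the skill:

```python
from typing import Iterable, List, Tuple

def split_results(results: Iterable) -> Tuple[List, List]:
    """Partition a batch result list into successes and per-document errors.

    Azure batch operations return results in input order; failed documents
    come back as error items with `is_error` set to True.
    """
    ok = [doc for doc in results if not doc.is_error]
    failed = [doc for doc in results if doc.is_error]
    for doc in failed:
        # Each failed item carries the original document id plus an error payload.
        print(f"Document {doc.id} failed: {doc.error.code} - {doc.error.message}")
    return ok, failed

def analyze_with_error_handling(endpoint: str, key: str, documents: List[str]):
    """Run batch sentiment analysis, surfacing per-document failures."""
    # SDK imports are kept local so split_results stays usable on its own.
    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential

    client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    return split_results(client.analyze_sentiment(documents))
```

Keeping the partitioning step separate means the successful results can be processed normally while failures (e.g. empty or oversized documents) are logged rather than aborting the whole batch.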
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, providing executable code examples without unnecessary explanations of what Azure AI or NLP concepts are. Every section delivers actionable information without padding. | 3 / 3 |
| Actionability | All code examples are complete, executable, and copy-paste ready. Each operation (sentiment, entities, PII, etc.) has a concrete working example with realistic sample data and output handling. | 3 / 3 |
| Workflow Clarity | While individual operations are clear, there's no explicit validation or error-recovery workflow. The 'Handle document errors' best practice is mentioned but not demonstrated with a concrete pattern for checking and handling errors in results. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and tables, but it's a long monolithic file (~200 lines). Advanced topics like healthcare analytics and batch operations could be split into separate reference files for better navigation. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
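The `frontmatter_unknown_keys` warning is typically resolved by nesting non-standard keys under `metadata`. A sketch (the `license` key here is hypothetical, chosen only to illustrate the move):

```yaml
# Before: an unrecognized top-level key triggers the warning
---
name: azure-ai-textanalytics-py80
license: MIT
---

# After: the same key nested under metadata
---
name: azure-ai-textanalytics-py80
metadata:
  license: MIT
---
```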
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.