
azure-ai-document-intelligence-ts

Extract text, tables, and structured data from documents using prebuilt and custom models.


Quality

62%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/antigravity-azure-ai-document-intelligence-ts/SKILL.md

Quality

Discovery

60%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description does a good job listing specific extraction capabilities (text, tables, structured data) and hints at the method (prebuilt and custom models). However, it lacks an explicit 'Use when...' clause, misses natural trigger terms users would say (like specific file types or use cases such as 'invoice', 'OCR', 'PDF'), and the generic term 'documents' reduces its distinctiveness from other document-processing skills.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to extract information from documents, invoices, receipts, or forms using AI models.'

Include natural trigger terms and file type keywords users would say, such as 'PDF', 'invoice', 'receipt', 'OCR', 'form recognition', '.pdf', '.png', '.jpg'.

Specify the technology or platform (e.g., Azure Document Intelligence, AWS Textract) to improve distinctiveness and reduce conflict with generic document extraction skills.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'Extract text, tables, and structured data' and mentions both 'prebuilt and custom models' as methods. This provides clear, actionable capabilities.

3 / 3

Completeness

Clearly answers 'what does this do' (extract text, tables, structured data from documents using models), but lacks an explicit 'Use when...' clause or trigger guidance for when Claude should select this skill.

2 / 3

Trigger Term Quality

Includes some relevant terms like 'extract text', 'tables', 'structured data', and 'documents', but lacks specific file type keywords users would naturally say (e.g., PDF, invoice, receipt, OCR, .pdf, .docx) and misses common variations.

2 / 3

Distinctiveness Conflict Risk

'Documents' is very broad and could overlap with many document-related skills. The mention of 'prebuilt and custom models' adds some distinctiveness but doesn't clearly carve out a niche—it could conflict with OCR, PDF extraction, or general document processing skills.

2 / 3

Total: 9 / 12 (Passed)

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid API reference skill with excellent actionability — every section provides executable, properly-typed TypeScript code. However, it suffers from repetitive patterns (the polling + error check boilerplate appears in nearly every section), a monolithic structure that could benefit from splitting advanced topics into separate files, and lacks validation/verification steps for operations like custom model building. The 'When to Use' section adds no value.

Suggestions

Consolidate the repeated polling + isUnexpected pattern into one canonical example, then reference it from other sections instead of repeating the full boilerplate each time.
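One way to consolidate that boilerplate is a small helper that captures the "start → check error → poll → return result" sequence once. The sketch below uses simplified stand-in types and injects the SDK calls as parameters; in the real skill, `isUnexpected` and `getLongRunningPoller` would come from `@azure-rest/ai-document-intelligence`, and exact signatures vary by SDK version:

```typescript
// Minimal shapes for the parts of the SDK surface this helper touches.
// These are simplified stand-ins, not the SDK's real types.
interface InitialResponse {
  status: string;
  body: { error?: unknown; [key: string]: unknown };
}
interface Poller<T> {
  pollUntilDone(): Promise<{ body: T }>;
}

// Canonical polling pattern, written once so every analyze/build/classify
// section can reference it instead of repeating the full boilerplate.
export async function pollToCompletion<T>(
  initialResponse: InitialResponse,
  isUnexpected: (r: InitialResponse) => boolean,
  makePoller: (r: InitialResponse) => Poller<T>,
): Promise<T> {
  // 1. Reject the initial response if the service returned an error.
  if (isUnexpected(initialResponse)) {
    throw initialResponse.body.error;
  }
  // 2. Create the long-running-operation poller and wait for the result.
  const poller = makePoller(initialResponse);
  return (await poller.pollUntilDone()).body;
}
```

Each prebuilt-model section could then show only the `.path(...).post(...)` call and hand the response to this helper.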

Add validation/verification steps for custom model building (e.g., checking model status, evaluating accuracy metrics, testing with sample documents before production use).
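Such a verification step could be a simple gate run against the get-model response after a build completes. The sketch below uses a simplified stand-in for the model details shape (the service's real response carries more fields), and the 0.8 confidence threshold is an arbitrary example value:

```typescript
// Simplified stand-in for the details returned for a built custom model;
// the real response includes more fields than shown here.
interface BuiltModel {
  modelId: string;
  docTypes?: Record<string, { fieldConfidence?: Record<string, number> }>;
}

// Gate a freshly built custom model before production use: it must define
// at least one document type, and every reported per-field confidence must
// clear the minimum threshold.
export function modelPassesValidation(
  model: BuiltModel,
  minConfidence = 0.8,
): boolean {
  const docTypes = model.docTypes ?? {};
  const names = Object.keys(docTypes);
  if (names.length === 0) return false;
  return names.every((name) => {
    const confidences = docTypes[name].fieldConfidence ?? {};
    return Object.values(confidences).every((c) => c >= minConfidence);
  });
}
```

A skill section could pair this with a reminder to test the model against a few held-out sample documents before switching production traffic to it.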

Remove the empty 'When to Use' section or replace it with genuinely useful guidance about when to choose this SDK vs alternatives.

Consider splitting prebuilt model field extraction examples and custom model/classifier content into separate referenced files to improve progressive disclosure.

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient with executable code examples, but there's significant repetition in the polling pattern (shown in nearly every example, then again as a dedicated section). The invoice and receipt extraction sections are very similar and could be consolidated. The 'When to Use' section at the end is vacuous.

2 / 3

Actionability

All code examples are fully executable TypeScript with correct imports, proper type annotations, and real API patterns. Examples cover URL and local file analysis, prebuilt models, custom models, classifiers, and pagination — all copy-paste ready.

3 / 3

Workflow Clarity

The polling pattern section clearly sequences the async workflow (start → check errors → create poller → monitor → wait), but there are no validation checkpoints for destructive/batch operations like building custom models. No guidance on verifying model build quality or handling partial failures.

2 / 3

Progressive Disclosure

The content is well-structured with clear section headers and a useful prebuilt models reference table, but it's a long monolithic file (~200+ lines of code examples) with no references to external files for advanced topics like custom model training details or field schema references.

2 / 3

Total: 9 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
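The warning above does not name the offending key, so the fragment below uses a hypothetical `version` key purely to illustrate the shape of the fix the validator suggests: move unrecognized top-level frontmatter keys under `metadata`.

```yaml
---
name: azure-ai-document-intelligence-ts
description: Extract text, tables, and structured data from documents...
# Before: a top-level key the spec does not recognize triggers the warning.
# version: 1.2.0          # hypothetical unknown key
# After: nest it under metadata instead.
metadata:
  version: 1.2.0
---
```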

Repository: boisenoise/skills-collections (Reviewed)

