Implement web page content extraction capabilities using the z-ai-web-dev-sdk. Use this skill when the user needs to scrape web pages, extract article content, retrieve page metadata, or build applications that process web content. Supports automatic content extraction with title, HTML, and publication time retrieval.
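For context, a minimal sketch of the shape of result the description advertises (title, HTML, publication time). The `extractContent` helper, the `normalizeExtraction` field names, and the response shape are assumptions for illustration, not the confirmed z-ai-web-dev-sdk API; a stub stands in for the real SDK call.

```javascript
// Normalize a raw extraction result into the fields the skill's description
// advertises: title, HTML, and publication time. Field names are assumptions.
function normalizeExtraction(raw) {
  return {
    title: raw.title ?? '(untitled)',
    html: raw.html ?? '',
    publishTime: raw.publishTime ?? null,
  };
}

// Stub standing in for the real SDK call; the actual z-ai-web-dev-sdk entry
// point and response shape may differ.
async function extractContent(url) {
  return { title: 'Example Article', html: '<p>Hello</p>' };
}

extractContent('https://example.com')
  .then((raw) => console.log(normalizeExtraction(raw).title)); // prints "Example Article"
```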
Overall score: 67

Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
`npx tessl skill review --optimize ./path/to/skill`
Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description that clearly explains both capabilities and usage triggers. The explicit 'Use this skill when...' clause with multiple trigger scenarios is a strength. However, it could benefit from additional natural trigger terms users might say and slightly more distinctive language to avoid overlap with general web development skills.
Suggestions
- Add more natural trigger-term variations, such as 'web scraping', 'parse HTML', 'crawl website', 'get content from URL', or 'fetch webpage'
- Clarify what distinguishes this from general HTTP/web development skills: for example, emphasize automatic content extraction over raw HTML fetching
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions ('scrape web pages', 'extract article content', 'retrieve page metadata', 'build applications that process web content') and specifies the data types retrieved: 'title, HTML, and publication time'. | 3 / 3 |
| Completeness | Clearly answers both what ('Implement web page content extraction capabilities') and when ('Use this skill when the user needs to scrape web pages, extract article content, retrieve page metadata, or build applications that process web content'). | 3 / 3 |
| Trigger Term Quality | Includes some natural keywords like 'scrape web pages', 'extract article content', and 'page metadata', but misses common variations users might say, such as 'web scraping', 'parse HTML', 'crawl website', or 'get content from URL'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Reasonably specific to web content extraction, but could overlap with general web development or HTTP request skills. The SDK name 'z-ai-web-dev-sdk' helps distinguish it, but 'web content' is somewhat broad. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides excellent actionable code examples that are immediately executable, but suffers from severe verbosity and poor organization. The content repeats similar patterns (caching, rate limiting, error handling) multiple times across different examples instead of extracting them into reusable references. A 100-line skill with links to advanced examples would be far more effective.
Suggestions
- Reduce to a concise quick start (CLI plus basic SDK usage in ~50 lines) and move advanced patterns (WebContentAnalyzer, FeedReader, ScrapingPipeline, etc.) to separate reference files such as ADVANCED_PATTERNS.md
- Remove redundant implementations: show caching, rate limiting, and error handling once in a 'Best Practices' reference file rather than repeating them in every example
- Add explicit validation checkpoints to workflows, e.g. 'Verify response.code === 200 before processing', as numbered steps rather than buried in code
- Cut explanatory text that Claude already knows (what caching is, why rate limiting matters, basic URL validation concepts)
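The validation-checkpoint suggestion above can be sketched as an explicit, numbered step rather than a check buried inside a larger example. Here `fetchPage` is a hypothetical stand-in for the SDK call, and the `response.code` field mirrors the check quoted in the suggestion:

```javascript
// Explicit checkpoint: verify response.code === 200 before touching
// response.data.
function assertOk(response, url) {
  if (response.code !== 200) {
    throw new Error(`Request to ${url} returned code ${response.code}`);
  }
  return response.data;
}

// Step 1: fetch (fetchPage is a hypothetical stand-in for the SDK call).
// Step 2: validate. Step 3: only then process.
function processPage(fetchPage, url) {
  const response = fetchPage(url);
  const data = assertOk(response, url); // validation checkpoint
  return data.trim();
}
```

Keeping the checkpoint in its own named function makes the validate step visible in the workflow instead of implicit in an `if` inside a longer example.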
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~800+ lines, with massive redundancy: multiple implementations of the same caching, rate-limiting, and error-handling patterns repeated throughout. The CLI section alone could be reduced by 70%. Explains basic concepts Claude already knows (what caching is, how to validate URLs, basic JavaScript patterns). | 1 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples throughout. All code is complete JavaScript/TypeScript with proper imports, error handling, and usage examples. CLI commands are specific and immediately usable. | 3 / 3 |
| Workflow Clarity | The 'How It Works' section provides a basic sequence, but there are no explicit validation checkpoints for the multi-step scraping pipelines. Error handling is shown but not integrated into clear validate-fix-retry workflows. The troubleshooting section is helpful but disconnected from the main workflows. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of text with everything inline. References a scripts directory but doesn't link to separate files for advanced patterns. The 'Advanced Use Cases' section contains 200+ lines that should live in separate reference files. No clear hierarchy between quick-start and deep-dive content. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
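The conciseness and progressive-disclosure findings both point at the same fix: define shared helpers once and have every example reference them. A minimal sketch of what a single shared cache plus rate limiter might look like; `fetchSync` and the synchronous, injectable-clock design are simplifications for illustration, since the real SDK calls are async:

```javascript
// Cache: store extraction results keyed by URL so repeat requests are free.
const cache = new Map();

// Rate limiter: track the last request time; the clock is injectable so the
// behavior can be tested without real delays.
function createRateLimiter(minIntervalMs, now = Date.now) {
  let last = -Infinity;
  return () => {
    const t = now();
    if (t - last < minIntervalMs) return false; // caller should wait
    last = t;
    return true;
  };
}

// Combined: return a cached result, or fetch only if the limiter allows.
function getOrFetch(url, fetchSync, allow) {
  if (cache.has(url)) return cache.get(url);
  if (!allow()) throw new Error('Rate limited: retry later');
  const result = fetchSync(url);
  cache.set(url, result);
  return result;
}
```

Defined once in a 'Best Practices' reference, a helper like this lets every other example stay focused on the extraction call itself.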
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1141 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 (Passed) |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.