A ready-to-use web spider with proxy support, asynchronous operation, rate limiting, configurable request pools, server-side jQuery, and HTTP/2 support.
Build a resilient web scraper that can handle unreliable endpoints with automatic retry capabilities.
You are building a monitoring system that periodically checks the status of multiple web services. These services can be unreliable and may fail intermittently due to network issues, server overload, or temporary outages. Your scraper needs to automatically retry failed requests with appropriate delays to maximize successful data collection.
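One common way to add "appropriate delays" between attempts is exponential backoff. Below is a minimal sketch of such a retry helper; the function name `requestWithRetry` and its options are illustrative, not part of any required API.

```javascript
// Retry an async request function, doubling the delay after each failure.
async function requestWithRetry(makeRequest, { retries = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await makeRequest(attempt);
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  // All attempts exhausted: surface the last failure to the caller.
  throw lastError;
}

// Demo: a flaky endpoint stub that fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("temporary outage");
  return "ok";
};

requestWithRetry(flaky, { retries: 3, baseDelayMs: 10 }).then((result) => {
  console.log(result, "after", calls, "attempts"); // → ok after 3 attempts
});
```

A real scraper would wrap its HTTP call (e.g. `fetch`) in `makeRequest` and may also cap the maximum delay or add jitter to avoid thundering-herd retries.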
Implement a web scraper that satisfies the scenarios below.
Create a test file that demonstrates the retry functionality:
// Given a reliable endpoint that responds successfully
// When the scraper makes a request
// Then the request succeeds on the first attempt with no retries

// Given an endpoint that fails initially but succeeds on retry
// When the scraper makes a request with retry enabled
// Then the request eventually succeeds after one or more retries

// Given an endpoint that consistently fails
// When the scraper makes a request with limited retries
// Then all retry attempts are exhausted and the request is marked as failed

When running the scraper:
scraper.js or scraper.ts
scraper.test.js or scraper.test.ts

A ready-to-use web spider with automatic retry mechanisms and failure handling capabilities.
Install with Tessl CLI
npx tessl i tessl/npm-crawler

evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10