A ready-to-use web spider with support for proxies, asynchrony, rate limiting, configurable request pools, jQuery, and HTTP/2.
Build a web scraper that collects data from a secure API endpoint with strict TLS requirements and custom redirect handling.
You need to scrape data from an HTTPS API that has the following characteristics:
- Custom TLS configuration: the API uses a self-signed certificate, so the scraper must be configured to accept it while still validating the connection properly.
- Redirect handling: the API returns a 302 redirect to the actual data endpoint. The scraper should follow redirects automatically, but limit the maximum number of redirects to 2.
- Response compression: the API returns compressed responses. The scraper should automatically decompress the response data.
- Custom response format: the API returns JSON, which should be parsed automatically without manually calling JSON.parse().
Create a module that exports a function scrapeSecureAPI(url, callback) that calls the callback with (error, data), where data is the parsed response body.
@generates
/**
 * Scrapes data from a secure API endpoint with custom TLS and redirect handling.
 *
 * @param {string} url - The URL to scrape
 * @param {Function} callback - Callback function called with (error, data)
 */
function scrapeSecureAPI(url, callback) {
  // Implementation here
}
module.exports = { scrapeSecureAPI };

Provides web crawling and scraping functionality with advanced HTTP control.
@satisfied-by
Install with Tessl CLI
npx tessl i tessl/npm-crawler

evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10