A ready-to-use web spider with support for proxies, asynchrony, rate limiting, configurable request pools, jQuery-style parsing, and HTTP/2.
A web resource aggregation tool that fetches content from multiple URLs and intelligently processes the responses based on their Content-Type.
@generates
/**
 * Fetches and processes web resources from the given URLs.
 * Automatically handles different content types based on Content-Type headers.
 *
 * @param {string[]} urls - Array of URLs to fetch
 * @param {function} callback - Called when all resources are fetched.
 *   Receives (error, results) where results is an array of objects:
 *   { url: string, contentType: string, data: any }
 *   - For HTML: data contains { title: string, body: string }
 *   - For JSON: data contains the parsed JavaScript object/array
 *   - For binary: data contains { buffer: Buffer, size: number }
 */
function fetchResources(urls, callback) {
  // IMPLEMENTATION HERE
}

module.exports = {
  fetchResources
};

Provides web scraping and HTTP request capabilities with automatic content-type based response processing.
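One way the Content-Type dispatch described above could work is sketched below. `processBody` is a hypothetical helper, not part of the spec, and the regex-based HTML extraction is a deliberate simplification (a real implementation would likely use a jQuery-style parser); only the result shapes match the documented contract.

```javascript
// Sketch: map a raw response body to the data shape the spec documents,
// based on the response's Content-Type header.
function processBody(contentType, buffer) {
  if (contentType.includes('text/html')) {
    // HTML: extract title and body text (simplified regex parsing).
    const html = buffer.toString('utf8');
    const titleMatch = html.match(/<title[^>]*>([^<]*)<\/title>/i);
    const bodyMatch = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
    return {
      title: titleMatch ? titleMatch[1].trim() : '',
      body: bodyMatch ? bodyMatch[1].trim() : ''
    };
  }
  if (contentType.includes('application/json')) {
    // JSON: return the parsed object/array directly.
    return JSON.parse(buffer.toString('utf8'));
  }
  // Anything else is treated as binary.
  return { buffer, size: buffer.length };
}
```

For example, `processBody('application/json', Buffer.from('{"a":1}'))` yields the parsed object `{ a: 1 }`, while an unrecognized type falls through to the binary branch.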
Install with Tessl CLI
npx tessl i tessl/npm-crawler
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10