A ready-to-use web spider with support for proxies, asynchronous requests, rate limiting, configurable request pools, server-side jQuery, and HTTP/2.
Build a web scraper that maintains user session state across multiple page requests by persisting cookies between requests.
You need to create a scraper that can navigate through a multi-page website requiring authentication or session management. The scraper should maintain cookies throughout the crawling session, allowing it to access pages that depend on session state.
Implement a scraper that exposes a createSessionScraper function, which accepts a callback function for processing responses.

@generates
/**
* Creates a session-aware web scraper that maintains cookies across requests.
*
* @param {Object} options - Configuration options
* @param {Function} options.callback - Callback function for processing responses
* @returns {Object} Scraper instance with add() and drain event
*/
function createSessionScraper(options) {
// Implementation
}
module.exports = { createSessionScraper };

Provides web scraping functionality with cookie support.
@satisfied-by
Provides cookie jar functionality for storing and managing cookies.
@satisfied-by
Install with Tessl CLI
npx tessl i tessl/npm-crawler

Evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10