A ready-to-use web spider with support for proxies, asynchronous operation, rate limiting, configurable request pools, server-side jQuery (Cheerio), and HTTP/2.
{
  "context": "This evaluation assesses how well the engineer uses the crawler package's silence mode configuration to suppress logging output in a production web scraping implementation.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "Silence configuration",
      "description": "Sets the 'silence' configuration option to true when creating or configuring the Crawler instance to suppress all warnings and error logs",
      "max_score": 40
    },
    {
      "name": "Crawler instantiation",
      "description": "Properly creates a new Crawler instance using 'new Crawler()' or 'const crawler = new Crawler()' with a configuration object",
      "max_score": 20
    },
    {
      "name": "Queue management",
      "description": "Uses crawler.add() or crawler.queue() to add URLs to the crawling queue",
      "max_score": 15
    },
    {
      "name": "Callback implementation",
      "description": "Implements the callback function with proper signature (error, response, done) and calls done() to release the queue slot",
      "max_score": 15
    },
    {
      "name": "Data extraction",
      "description": "Uses response.$ (Cheerio) to extract page titles from the HTML",
      "max_score": 10
    }
  ]
}

Install with Tessl CLI
npx tessl i tessl/npm-crawlerevals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10