tessl/npm-crawler

A ready-to-use web spider with support for proxies, asynchronous crawling, rate limiting, configurable request pools, jQuery, and HTTP/2.

evals/scenario-8/task.md

Production Web Scraper with Silent Logging

Build a web scraping utility that collects product titles from multiple URLs while operating silently without console output.

Requirements

Create a web scraping utility that:

  1. Accepts a list of URLs to scrape
  2. Extracts the page title from each URL
  3. Operates silently without any console logging or warnings
  4. Returns collected data as an array of objects

The scraper must be configured for production use where silent operation is required.

Implementation

@generates

Implement a function `scrapeProducts(urls)` that:

  • Takes an array of URLs as a parameter
  • Returns a Promise that resolves to an array of scraped data
  • Gives each result `url` and `title` properties
  • Configures the crawler to suppress all log messages

API

```js
/**
 * Scrapes page titles from multiple URLs with silent logging
 *
 * @param {string[]} urls - Array of URLs to scrape
 * @returns {Promise<Array<{url: string, title: string}>>} Array of scraped page data
 */
async function scrapeProducts(urls) {
  // Implementation here
}

module.exports = { scrapeProducts };
```
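One possible shape for the implementation, sketched against the crawler package's classic API (`queue()` to enqueue URLs, a jQuery-enabled `callback`, and the `drain` event). Option names vary between crawler releases; in particular, the `silence` flag for muting log output exists only in recent versions, so verify it against the README of the version you install:

```js
const Crawler = require('crawler');

async function scrapeProducts(urls) {
  const results = [];

  return new Promise((resolve) => {
    const crawler = new Crawler({
      jQuery: true,   // parse each response so res.$ is available in the callback
      silence: true,  // mute crawler log output (recent versions only; check your version's docs)
      callback: (error, res, done) => {
        if (!error) {
          results.push({
            url: res.options.uri,         // the URL this response was fetched from
            title: res.$('title').text(), // text of the page's <title> element
          });
        }
        done();
      },
    });

    // 'drain' fires once the queue is empty, i.e. every URL has been processed
    crawler.on('drain', () => resolve(results));
    crawler.queue(urls);
  });
}

module.exports = { scrapeProducts };
```

Note that results arrive in completion order, not input order, and URLs that error are skipped silently; adjust both behaviors if the grader expects otherwise.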

Test Cases

  • Given an array of URLs, the scraper extracts page titles without generating any console output @test
  • Given multiple URLs, all results are returned with url and title properties @test
  • The scraper completes execution without emitting any log messages to the console @test

Dependencies { .dependencies }

crawler { .dependency }

A ready-to-use web spider with support for proxies, asynchronous crawling, rate limiting, configurable request pools, jQuery, and HTTP/2.

Install with Tessl CLI

npx tessl i tessl/npm-crawler
