
tessl/npm-crawler

A ready-to-use web spider with support for proxies, asynchronous crawling, rate limiting, configurable request pools, jQuery-style server-side DOM parsing, and HTTP/2.


evals/scenario-6/task.md

Secure API Data Scraper

Build a web scraper that collects data from a secure API endpoint with strict TLS requirements and custom redirect handling.

Requirements

You need to scrape data from an HTTPS API that has the following characteristics:

  1. Custom TLS Configuration: The API uses a self-signed certificate, so you need to configure the scraper to accept it while still validating the connection properly.

  2. Redirect Handling: The API returns a 302 redirect to the actual data endpoint. Your scraper should follow redirects automatically but limit the maximum number of redirects to 2.

  3. Response Compression: The API returns compressed responses. Your scraper should automatically decompress the response data.

  4. Custom Response Format: The API returns JSON data, and you need to parse it automatically without manually calling JSON.parse().

Implementation

Create a module that exports a function scrapeSecureAPI(url, callback) that:

  • Accepts a URL string as the first parameter
  • Accepts a callback function as the second parameter
  • The callback should be called with (error, data) where data is the parsed response body

The scraper should:

  • Accept self-signed certificates (disable strict SSL verification)
  • Follow redirects with a maximum of 2 redirects
  • Automatically decompress response data
  • Parse JSON responses automatically
  • Use the GET HTTP method
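With the crawler package specifically, the behaviors above map onto per-request options. The following is an illustrative sketch assuming request-style option names as forwarded by crawler 1.x; verify the names against the installed version's documentation before relying on them:

```javascript
// Illustrative option mapping (names assume crawler 1.x, which forwards
// request-style options; treat each name as an assumption to verify):
const options = {
  method: 'GET',        // use the GET HTTP method
  strictSSL: false,     // accept self-signed certificates
  followRedirect: true,
  maxRedirects: 2,      // cap the redirect chain at 2
  gzip: true,           // automatically decompress response data
  json: true,           // automatically parse JSON bodies
  jQuery: false         // skip DOM parsing for a JSON API response
};
```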

@generates

API

/**
 * Scrapes data from a secure API endpoint with custom TLS and redirect handling.
 *
 * @param {string} url - The URL to scrape
 * @param {Function} callback - Callback function called with (error, data)
 */
function scrapeSecureAPI(url, callback) {
  // Implementation here
}

module.exports = { scrapeSecureAPI };

Test Cases

  • Given a URL "https://api.example.com/data", the scraper successfully fetches and parses JSON data from the endpoint. @test
  • Given a URL that redirects once, the scraper follows the redirect and returns the final data. @test
  • Given a URL with a self-signed certificate, the scraper accepts the certificate and returns data without errors. @test

Dependencies { .dependencies }

crawler { .dependency }

Provides web crawling and scraping functionality with advanced HTTP control.

@satisfied-by

Install with Tessl CLI

npx tessl i tessl/npm-crawler
