A ready-to-use web spider with proxy support, asynchronous crawling, rate limiting, configurable request pools, server-side jQuery, and HTTP/2 support.
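A minimal sketch of how such a spider is typically configured, assuming a node-crawler-style API ('maxConnections', 'rateLimit', 'proxy', and the cheerio-backed 'res.$' are taken from that package's conventions; the URL and selector are hypothetical):

```javascript
// Hypothetical configuration sketch for a node-crawler-style spider.
// Option names follow node-crawler's documented conventions.
const spiderOptions = {
  maxConnections: 5,  // size of the configurable request pool
  rateLimit: 1000,    // minimum milliseconds between requests (rate limiting)
  // proxy: "http://127.0.0.1:8080", // route requests through a proxy
};

// With the crawler package installed, the options would be used roughly as:
//   const Crawler = require("crawler");
//   const c = new Crawler({
//     ...spiderOptions,
//     callback: (error, res, done) => {
//       // res.$ is a server-side jQuery (cheerio) bound to the response body
//       if (!error) console.log(res.$("title").text());
//       done();
//     },
//   });
//   c.queue("https://example.com");
```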
Score: 94% (1.17x average score across 10 eval scenarios). Best-practices and risk-profile evaluations: pending.
{
  "context": "This criterion evaluates how well the engineer uses the crawler package's advanced Got options passthrough to configure TLS settings, redirect handling, response decompression, and response-type parsing for secure API scraping.",
  "type": "weighted_checklist",
  "checklist": [
    {
      "name": "Crawler instantiation",
      "description": "Creates a Crawler instance using 'new Crawler()' or imports and uses the crawler package correctly",
      "max_score": 15
    },
    {
      "name": "SSL verification control",
      "description": "Uses the 'rejectUnauthorized: false' option to accept self-signed certificates (this is a Got option passed through by crawler)",
      "max_score": 25
    },
    {
      "name": "Redirect configuration",
      "description": "Configures redirect handling using Got options such as 'followRedirect' and 'maxRedirects' (or 'redirect.limit') to limit redirects to 2",
      "max_score": 25
    },
    {
      "name": "Decompression handling",
      "description": "Enables automatic response decompression using the 'decompress: true' Got option (the default is true, but setting it explicitly shows awareness)",
      "max_score": 15
    },
    {
      "name": "JSON parsing",
      "description": "Configures automatic JSON parsing using the 'responseType: \"json\"' or 'isJson: true' option to parse JSON responses without manual JSON.parse()",
      "max_score": 20
    }
  ]
}

evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10
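The checklist above maps onto a handful of Got options that crawler forwards to its underlying HTTP client. A minimal sketch of such a configuration, assuming that passthrough behavior (the URL is hypothetical, and 'isJson' is taken from the checklist rather than verified against any particular crawler version):

```javascript
// Got options matching the checklist items above. A sketch, not a
// definitive recipe for any particular crawler release.
const secureApiOptions = {
  rejectUnauthorized: false, // accept self-signed certificates (TLS)
  followRedirect: true,      // allow redirects to be followed...
  maxRedirects: 2,           // ...but limit the chain to 2 hops
  decompress: true,          // auto-decompress responses (Got's default)
  isJson: true,              // parse JSON bodies without manual JSON.parse()
};

// With the crawler package installed, the options would be used roughly as:
//   const Crawler = require("crawler");
//   const c = new Crawler({
//     ...secureApiOptions,
//     callback: (error, res, done) => {
//       if (!error) console.log(res.body); // already-parsed JSON
//       done();
//     },
//   });
//   c.add("https://self-signed.example.test/api/items");
```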