A persistent cache for python requests
Build a multi-threaded web scraper that safely caches HTTP responses to avoid redundant requests when multiple threads access the same URLs.
Your solution must:
@generates
Create a Python module that exports a scrape_urls function.
```python
def scrape_urls(urls: list[str], num_threads: int = 5) -> dict[str, str]:
    """
    Scrape multiple URLs concurrently with safe caching.

    Args:
        urls: List of URLs to scrape
        num_threads: Number of concurrent threads to use

    Returns:
        Dictionary mapping URLs to their response content
    """
    pass
```

Provides HTTP caching with thread-safe operations.
Provides thread support for concurrent execution.
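The stub above leaves the implementation open. As one possible sketch (not the module's actual code), the caching and threading requirements can be met with a lock-protected dict shared across a thread pool. The `fetch` helper here is a hypothetical stand-in for a real HTTP GET (e.g. `requests.get(url).text`), used so the pattern is self-contained:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


def fetch(url: str) -> str:
    # Hypothetical stand-in for a real HTTP request.
    return f"<html>content of {url}</html>"


def scrape_urls(urls: list[str], num_threads: int = 5) -> dict[str, str]:
    cache: dict[str, str] = {}
    cache_lock = threading.Lock()

    def worker(url: str) -> tuple[str, str]:
        # Check the cache under the lock to avoid racing readers/writers.
        with cache_lock:
            if url in cache:
                return url, cache[url]
        body = fetch(url)
        with cache_lock:
            # setdefault keeps the first result if another thread won the race.
            cache.setdefault(url, body)
            return url, cache[url]

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return dict(pool.map(worker, urls))
```

Note that this simple version may fetch the same URL twice if two threads miss the cache simultaneously; a production implementation would typically hold a per-URL lock or future while the request is in flight.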
Install with Tessl CLI
npx tessl i tessl/pypi-requests-cache

Evals
- scenario-1
- scenario-2
- scenario-3
- scenario-4
- scenario-5
- scenario-6
- scenario-7
- scenario-8
- scenario-9
- scenario-10