tessl/pypi-fastcore

Python supercharged for fastai development

tessl install tessl/pypi-fastcore@1.8.0

Workspace: tessl
Visibility: Public
Describes: pypi package pkg:pypi/fastcore@1.8.x
Tile manifest: tile.json

Agent Success: 56% (agent success rate when using this tile)
Improvement: 1.37x (improvement over the baseline success rate)
Baseline: 41% (agent success rate without this tile)

evals/scenario-10/task.md

Smart API Response Cache

Build a caching system for API responses that automatically invalidates cached data. The system should support time-based expiration and custom eviction policies.

Requirements

You need to implement a function that fetches data from an API endpoint with intelligent caching:

  1. Time-based cache expiration: Cached responses should automatically expire after a specified time-to-live (TTL) period (a sketch of one possible cache layout follows this list)
  2. Custom eviction policy: Support a custom eviction policy that can determine when cached items should be removed based on response metadata (e.g., response size, status code)
  3. Cache statistics: Track and return cache hit/miss statistics
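
One way to hold this state is a module-level dictionary keyed by URL, plus hit/miss counters. The following is a minimal sketch only; the entry layout and the _is_expired helper are illustrative, not part of the required API.

import time

_cache = {}                        # url -> {"data": dict, "stored_at": float, "metadata": dict}
_stats = {"hits": 0, "misses": 0}  # running hit/miss counters

def _is_expired(entry: dict, ttl_seconds: float) -> bool:
    """Return True once a cached entry has outlived its TTL."""
    return (time.time() - entry["stored_at"]) > ttl_seconds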

Implementation Details

Create a module that exports the following functions:

  • A function fetch_with_cache(url, ttl_seconds) that fetches data from a URL and caches it with the specified TTL in seconds
  • A function fetch_with_policy(url, policy_func) that fetches data from a URL and uses a custom eviction policy function
  • A function get_cache_stats() that returns a dictionary with hits and misses counts

The custom policy function should receive metadata about the cached item and return True if the item should be kept, False if it should be evicted.
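
For example, a policy that keeps only small, successful responses could look like the sketch below. The metadata keys size and status_code are assumptions; the spec leaves the exact metadata contents open.

def keep_small_successful(metadata: dict) -> bool:
    """Keep entries of at most 1 KB that returned HTTP 200; evict everything else."""
    return metadata.get("size", 0) <= 1024 and metadata.get("status_code") == 200

result = fetch_with_policy("https://api.example.com/items", keep_small_successful)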

For testing purposes, you can simulate API calls using a simple function that returns mock data instead of making actual HTTP requests.
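
A stand-in fetcher along these lines is enough for the tests; the name _mock_fetch and the payload shape are illustrative, not required by the spec.

def _mock_fetch(url: str) -> dict:
    """Simulate an HTTP GET: return deterministic mock data instead of hitting the network."""
    return {"url": url, "value": f"data for {url}"}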

Test Cases

  • Calling fetch_with_cache with the same URL twice within the TTL period returns cached data (second call is a cache hit; an illustrative test for this case follows the list) @test
  • Calling fetch_with_cache with the same URL after the TTL expires fetches fresh data (cache miss after expiry) @test
  • Using a custom policy that rejects responses over 1KB evicts large cached items @test
  • Cache statistics correctly track hits and misses across multiple calls @test
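
As an illustration of the first case, a plain assert-based test might look like this. The module name cache is an assumption; adjust the import to wherever the functions live.

from cache import fetch_with_cache, get_cache_stats

def test_cache_hit_within_ttl():
    first = fetch_with_cache("https://api.example.com/users", ttl_seconds=60)
    second = fetch_with_cache("https://api.example.com/users", ttl_seconds=60)
    assert second == first                 # the cached payload is returned unchanged
    assert get_cache_stats()["hits"] >= 1  # the second call registers as a hit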

Implementation

@generates

API

def fetch_with_cache(url: str, ttl_seconds: float) -> dict:
    """
    Fetch data from URL with time-based caching.

    Args:
        url: The URL to fetch data from
        ttl_seconds: Time-to-live for cached data in seconds

    Returns:
        dict: The fetched or cached data
    """
    pass

def fetch_with_policy(url: str, policy_func) -> dict:
    """
    Fetch data from URL with custom eviction policy.

    Args:
        url: The URL to fetch data from
        policy_func: Function that takes metadata dict and returns bool
                    (True to keep, False to evict)

    Returns:
        dict: The fetched or cached data
    """
    pass

def get_cache_stats() -> dict:
    """
    Get cache hit/miss statistics.

    Returns:
        dict: Dictionary with 'hits' and 'misses' counts
    """
    pass
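
For orientation, one standard-library-only shape the module could take is sketched below. It folds in the _mock_fetch stand-in from the testing note, omits type hints for brevity, and is a rough sketch only; the generated implementation, and any fastcore helpers it relies on, may differ.

import time

_cache = {}                        # url -> {"data": ..., "stored_at": ..., "metadata": ...}
_stats = {"hits": 0, "misses": 0}

def _mock_fetch(url):
    """Simulated fetch, as described in the testing note above."""
    return {"url": url, "value": f"data for {url}"}

def _store(url, data):
    """Cache a fresh response together with the metadata the eviction policy will see."""
    _cache[url] = {"data": data, "stored_at": time.time(),
                   "metadata": {"size": len(str(data)), "status_code": 200}}

def fetch_with_cache(url, ttl_seconds):
    entry = _cache.get(url)
    if entry is not None and (time.time() - entry["stored_at"]) <= ttl_seconds:
        _stats["hits"] += 1
        return entry["data"]
    _stats["misses"] += 1              # not cached yet, or the TTL has expired
    data = _mock_fetch(url)
    _store(url, data)
    return data

def fetch_with_policy(url, policy_func):
    entry = _cache.get(url)
    if entry is not None and policy_func(entry["metadata"]):
        _stats["hits"] += 1
        return entry["data"]
    _stats["misses"] += 1              # missing, or evicted by the policy
    data = _mock_fetch(url)
    _store(url, data)
    return data

def get_cache_stats():
    return dict(_stats)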

Dependencies { .dependencies }

fastcore { .dependency }

Provides advanced caching utilities with custom eviction policies.

@satisfied-by