tessl/pypi-posthog

Integrate PostHog into any Python application.

evals/scenario-6/task.md

AI Chat Assistant with Analytics

Build a simple AI chat application that makes OpenAI API calls and automatically tracks usage analytics including streaming response handling.

Requirements

Create a chat application that:

  1. Makes streaming chat completion requests - Send prompts to OpenAI and receive streaming responses (where tokens arrive incrementally)

  2. Tracks all interactions - Automatically capture analytics events for each chat completion including:

    • Complete prompt and response text
    • Token usage (input tokens, output tokens, cached tokens)
    • Model information and provider details
    • Request latency
  3. Returns usable responses - The streaming response should be properly handled so the complete text can be returned to the user

The analytics tracking should work seamlessly with streaming responses, accumulating chunks and recording metrics without requiring manual instrumentation in application code.
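To make "accumulating chunks" concrete, here is a minimal, self-contained sketch of joining a streamed completion into the complete text. The plain dicts stand in for OpenAI stream chunks, which expose incremental text via `choices[0].delta.content`; in the real integration the analytics wrapper performs this accumulation for you.

```python
def accumulate_stream(chunks):
    """Join the incremental text pieces of a streamed completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.get("content")  # may be None on the final chunk
        if delta:
            parts.append(delta)
    return "".join(parts)


# Plain dicts standing in for stream chunks:
pieces = [{"content": "Hel"}, {"content": "lo!"}, {"content": None}]
print(accumulate_stream(pieces))  # prints "Hello!"
```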

Test Cases

Implement the following test scenarios:

  • When making a streaming chat request with a simple prompt, the complete response is returned and an analytics event is generated @test
  • The analytics event contains the full response text, input token count, output token count, and model name @test
  • Multiple sequential streaming chat requests each generate separate analytics events with correct token counts @test
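The third scenario can be sketched with a stub harness: each sequential request should produce its own analytics event with its own token counts. `record_event` and `fake_chat` are hypothetical stand-ins for illustration, not part of any real library; tokens are approximated as whitespace-separated words.

```python
events = []

def record_event(response_text, input_tokens, output_tokens, model):
    """Record one analytics event per completed request."""
    events.append({
        "response": response_text,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "model": model,
    })

def fake_chat(prompt):
    """Pretend completion: echo the prompt and count word-level tokens."""
    reply = f"echo: {prompt}"
    record_event(reply, len(prompt.split()), len(reply.split()), "fake-model")
    return reply

for p in ["hi there", "second prompt"]:
    fake_chat(p)

assert len(events) == 2              # one event per request
assert events[0]["input_tokens"] == 2
assert events[1]["model"] == "fake-model"
```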

@generates

API

"""
AI Chat Assistant with Analytics
"""

def setup_ai_client(openai_api_key: str, analytics_api_key: str):
    """
    Initialize the AI client with analytics tracking enabled.

    Args:
        openai_api_key: API key for OpenAI
        analytics_api_key: API key for analytics platform

    Returns:
        Configured AI client ready for chat completions
    """
    pass

def send_chat_message(client, user_id: str, message: str, stream: bool = True) -> str:
    """
    Send a chat message and get a response.

    Args:
        client: Configured AI client
        user_id: User identifier for analytics
        message: The chat message to send
        stream: Whether to use streaming responses

    Returns:
        The complete response text from the AI
    """
    pass

Dependencies { .dependencies }

posthog { .dependency }

Provides analytics and event tracking capabilities, including AI observability features for tracking LLM usage.

@satisfied-by

openai { .dependency }

Provides OpenAI API client for chat completions.

@satisfied-by

Install with Tessl CLI

npx tessl i tessl/pypi-posthog
