Integrate PostHog into any Python application.
Build a simple AI chat application that makes OpenAI API calls and automatically tracks usage analytics, including for streaming responses.
Create a chat application that:

- **Makes streaming chat completion requests** - Send prompts to OpenAI and receive streaming responses (where tokens arrive incrementally)
- **Tracks all interactions** - Automatically capture analytics events for each chat completion
- **Returns usable responses** - Handle the streaming response properly so the complete text can be returned to the user

The analytics tracking should work seamlessly with streaming responses, accumulating chunks and recording metrics without requiring manual instrumentation.
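Independent of any particular SDK, the chunk-accumulation requirement above can be sketched as follows. The dict-based chunk shape is a simplification for illustration; OpenAI's real streaming chunks expose the incremental text at `chunk.choices[0].delta.content`:

```python
def accumulate_stream(chunks):
    """Join the incremental text deltas of a streamed response into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.get("delta")  # simplified chunk shape for illustration
        if delta:                   # final chunks may carry no text
            parts.append(delta)
    return "".join(parts)


# Simulated stream: tokens arrive incrementally, the last chunk is empty.
fake_stream = [{"delta": "Hello"}, {"delta": ", "}, {"delta": "world"}, {"delta": None}]
print(accumulate_stream(fake_stream))  # → Hello, world
```

An instrumented client applies the same accumulation internally, recording metrics once the stream is exhausted.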
Implement the following test scenarios:
@generates

```python
"""
AI Chat Assistant with Analytics
"""


def setup_ai_client(openai_api_key: str, analytics_api_key: str):
    """
    Initialize the AI client with analytics tracking enabled.

    Args:
        openai_api_key: API key for OpenAI
        analytics_api_key: API key for the analytics platform

    Returns:
        Configured AI client ready for chat completions
    """
    pass


def send_chat_message(client, user_id: str, message: str, stream: bool = True) -> str:
    """
    Send a chat message and get a response.

    Args:
        client: Configured AI client
        user_id: User identifier for analytics
        message: The chat message to send
        stream: Whether to use streaming responses

    Returns:
        The complete response text from the AI
    """
    pass
```

Provides analytics and event tracking capabilities, including AI observability features for tracking LLM usage.
@satisfied-by
Provides OpenAI API client for chat completions.
@satisfied-by
Install with Tessl CLI

```shell
npx tessl i tessl/pypi-posthogdocs
```
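After installation, a `setup_ai_client` built on PostHog's LLM observability wrapper might look like the sketch below. The import path `posthog.ai.openai`, the `posthog_client` parameter, and the host URL follow PostHog's documented OpenAI integration but should be verified against the installed SDK version:

```python
def setup_ai_client(openai_api_key: str, analytics_api_key: str):
    """Return an OpenAI client whose calls are auto-tracked by PostHog.

    Assumes PostHog's LLM observability wrapper: every chat completion made
    through the returned client (streaming included) is captured as an
    analytics event without extra instrumentation.
    """
    # Imports are local so the sketch can be read without the packages installed.
    from posthog import Posthog
    from posthog.ai.openai import OpenAI  # assumed import path

    posthog_client = Posthog(analytics_api_key, host="https://us.i.posthog.com")
    return OpenAI(api_key=openai_api_key, posthog_client=posthog_client)
```

The returned object is a drop-in replacement for the regular OpenAI client, so `send_chat_message` needs no analytics code of its own.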
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10