Comprehensive connection to the V98Store API, supporting 475+ AI models across multiple providers, with health checks and OpenAI-compatible endpoints.
The V98 Connection Algorithm provides a unified interface to connect to V98Store's API gateway, supporting Claude, GPT, Gemini, GLM, O-series, and Codex models through OpenAI-compatible endpoints.
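Because the gateway is OpenAI-compatible, a request can be assembled from nothing more than the base URL, a bearer header, and a JSON body. The sketch below illustrates that shape; the `build_chat_request` helper and the `/chat/completions` route are illustrative assumptions, not part of the documented interface.

```python
import json
import os

# Minimal sketch of assembling an OpenAI-compatible request for the V98
# gateway. build_chat_request and the /chat/completions route are
# assumptions for illustration, not part of the documented API surface.
BASE_URL = "https://v98store.com/v1"

def build_chat_request(model, messages, api_key=None, **params):
    """Return the URL, headers, and JSON body for a chat completion call."""
    key = api_key or os.environ.get("V98_API_KEY", "")
    headers = {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages, **params}
    return f"{BASE_URL}/chat/completions", headers, json.dumps(body)

url, headers, body = build_chat_request(
    "gpt-4o",
    [{"role": "user", "content": "Hello"}],
    api_key="sk-demo",
    temperature=0.7,
)
```

Any HTTP client can then POST `body` to `url` with `headers`; the algorithm's `chat` action wraps this same request shape.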
Base URL: `https://v98store.com/v1`

API key: environment variable `V98_API_KEY`, or passed as a parameter.

Headers:

```json
{
    "Authorization": "Bearer {api_key}",
    "Content-Type": "application/json"
}
```

Test the connection and discover available models:

```python
result = v98_algo.execute({"action": "connect"})
# Returns: 475 models with categorization
```

List all models with optional filtering:
```python
result = v98_algo.execute({
    "action": "list_models",
    "filter_by": "claude"  # Optional filter
})
# Returns: filtered model list
```

Send OpenAI-compatible chat completion requests:
```python
result = v98_algo.execute({
    "action": "chat",
    "model": "claude-opus-4-6",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your prompt here"}
    ],
    "temperature": 0.7,
    "max_tokens": 2000
})
```

Check API health and response times:
```python
result = v98_algo.execute({"action": "health"})
# Returns: health status and response time
```

Example usage via the AlgorithmManager:

```python
from core.algorithms.algorithm_manager import AlgorithmManager

manager = AlgorithmManager(auto_scan=True)
v98 = manager.get_algorithm("V98Connection")

# Test connection
result = v98.execute({"action": "connect"})
print(f"Total models: {result.data['total_models']}")
print(f"Categories: {result.data['categories']}")
```

```python
# Find all Codex models
result = v98.execute({
    "action": "list_models",
    "filter_by": "codex"
})
print(f"Codex models: {result.data['models']}")
```

```python
# Use GPT-5.1-Codex for code generation
result = v98.execute({
    "action": "chat",
    "model": "gpt-5.1-codex",
    "messages": [
        {"role": "user", "content": "Write a binary search function in Python"}
    ],
    "temperature": 0.3,
    "max_tokens": 1000
})
print(result.data['response'])
```

| Category | Count | Key Models |
|---|---|---|
| Claude | 21 | claude-opus-4-6, claude-sonnet-3-5 |
| GPT | 100 | gpt-4o, gpt-5.1-codex |
| Gemini | 31 | gemini-pro, gemini-1.5-pro |
| GLM | 16 | glm-4.6, glm-4.6v |
| O-series | 23 | o1-preview, o3-mini |
| Codex | 1 | gpt-5.1-codex |
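The category counts above could plausibly be derived by matching model IDs against family prefixes. The following is a hypothetical reconstruction; the `PREFIXES` map and `categorize` helper are illustrations, not part of the API.

```python
# Hypothetical sketch: derive category counts from a raw model list by
# matching each model ID against a known family prefix.
PREFIXES = {
    "claude": "Claude",
    "gpt": "GPT",
    "gemini": "Gemini",
    "glm": "GLM",
    "o1": "O-series",
    "o3": "O-series",
}

def categorize(model_ids):
    counts = {}
    for model_id in model_ids:
        family = next(
            (name for prefix, name in PREFIXES.items()
             if model_id.startswith(prefix)),
            "Other",
        )
        counts[family] = counts.get(family, 0) + 1
    return counts

print(categorize(["claude-opus-4-6", "gpt-5.1-codex", "o3-mini", "glm-4.6v"]))
# → {'Claude': 1, 'GPT': 1, 'O-series': 1, 'GLM': 1}
```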
Best practices:

- Set the `V98_API_KEY` environment variable
- Connect once and cache the results
- Check `result.status` before using data

```python
result = v98.execute({"action": "chat", "model": "gpt-4o", ...})
if result.status == "success":
    print(result.data['response'])
elif result.status == "error":
    print(f"Error: {result.error}")
```

```python
# Used by ThreeAIOrchestrator for:
# - claude-opus-4-6 (Primary Lead)
# - gpt-5.1-codex (Code Reviewer)
# - glm-4.6v (Consultant)
```

```python
# Add to algorithm chain
workflow = [
    "V98Connection",
    "CodeGenerator",
    "TestWriter"
]
```

| Issue | Solution |
|---|---|
| Connection timeout | Check network, verify API key |
| 401 Unauthorized | Validate V98_API_KEY environment variable |
| Model not found | Run list_models to see available options |
| Rate limit hit | Implement exponential backoff |
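For the rate-limit row above, a simple wrapper can implement the suggested exponential backoff. This is a sketch: the result shape follows the examples in this document, but the `rate_limited` status value is an assumption, since only `success` and `error` appear in the documented examples.

```python
import time

# Sketch of exponential backoff around an algorithm call. The
# "rate_limited" status string is an assumption; adapt it to whatever
# status the gateway actually reports for rate limiting.
def execute_with_backoff(algo, params, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call, doubling the wait between attempts."""
    result = None
    for attempt in range(max_retries):
        result = algo.execute(params)
        if result.status != "rate_limited":
            return result
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return result  # still rate-limited after max_retries attempts
```

Adding random jitter to the delay is a common refinement when many clients retry at once.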
Source: `D:\Antigravity\Dive AI\core\algorithms\operational\v98_connection.py`

Version: v2.0 - Enhanced with full model support and categorization

Commit: `20ba150`