
tessl/pypi-fal-client

Python client library for interacting with machine learning models deployed on the fal.ai platform

docs/async-operations.md

Asynchronous Operations

Non-blocking async/await operations for concurrent execution. These functions mirror the synchronous operations, but as coroutines, so applications can run many inference requests and manage them concurrently without blocking the event loop.

Capabilities

Direct Async Inference Execution

Execute ML model inference asynchronously without blocking the event loop. Ideal for applications that need to handle multiple inference requests concurrently.

async def run_async(application: str, arguments: AnyJSON, *, path: str = "", timeout: float | None = None, hint: str | None = None) -> AnyJSON:
    """
    Run an application asynchronously with the given arguments and return the result directly.
    
    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - timeout: Request timeout in seconds (default: client default_timeout)
    - hint: Optional runner hint for routing (default: None)
    
    Returns:
    dict: The inference result directly from the model
    """

Usage example:

import asyncio
import fal_client

async def main():
    response = await fal_client.run_async(
        "fal-ai/fast-sdxl", 
        arguments={"prompt": "a cute cat, realistic, orange"}
    )
    print(response["images"][0]["url"])

asyncio.run(main())

Async Queue-Based Inference

Submit inference requests to a queue asynchronously and get a handle for tracking progress without blocking other operations.

async def submit_async(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, webhook_url: str | None = None, priority: Priority | None = None) -> AsyncRequestHandle:
    """
    Submit an inference request to the queue asynchronously and return a handle for tracking.
    
    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - webhook_url: Optional webhook URL for notifications (default: None)
    - priority: Request priority ("normal" or "low", default: None)
    
    Returns:
    AsyncRequestHandle: Handle for tracking the request asynchronously
    """

Usage example:

import asyncio
import fal_client

async def main():
    handle = await fal_client.submit_async(
        "fal-ai/fast-sdxl",
        arguments={"prompt": "a detailed landscape"}
    )

    # Monitor progress asynchronously
    async for event in handle.iter_events(with_logs=True):
        if isinstance(event, fal_client.Queued):
            print(f"Queued at position: {event.position}")
        elif isinstance(event, fal_client.InProgress):
            print("Processing...")
        elif isinstance(event, fal_client.Completed):
            break

    result = await handle.get()
    print(result["images"][0]["url"])

asyncio.run(main())

Async Streaming Inference

Subscribe to streaming updates asynchronously for real-time results without blocking other async operations.

async def subscribe_async(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, with_logs: bool = False, on_enqueue: Callable[[str], None] | None = None, on_queue_update: Callable[[Status], None] | None = None, priority: Priority | None = None) -> AnyJSON:
    """
    Subscribe to streaming updates for an inference request asynchronously.
    
    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - with_logs: Include logs in status updates (default: False)
    - on_enqueue: Callback function called when request is enqueued (default: None)
    - on_queue_update: Callback function called on status updates (default: None)
    - priority: Request priority ("normal" or "low", default: None)
    
    Returns:
    dict: The final inference result after streaming updates complete
    """

Async Real-time Streaming

Stream inference results in real-time asynchronously for models that support progressive output generation.

async def stream_async(application: str, arguments: AnyJSON, *, path: str = "/stream", timeout: float | None = None) -> AsyncIterator[dict[str, Any]]:
    """
    Stream inference results in real-time asynchronously.
    
    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Stream endpoint path (default: "/stream")
    - timeout: Request timeout in seconds (default: None)
    
    Returns:
    AsyncIterator[dict]: Async iterator of streaming results
    """

Usage example:

import asyncio
import fal_client

async def main():
    async for result in fal_client.stream_async(
        "fal-ai/streaming-model",
        arguments={"prompt": "progressive generation"}
    ):
        print(f"Partial result: {result}")

asyncio.run(main())

Async Request Status Operations

Check status, retrieve results, and cancel requests asynchronously using request IDs.

async def status_async(application: str, request_id: str, *, with_logs: bool = False) -> Status:
    """
    Get the current status of a request asynchronously.
    
    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to check
    - with_logs: Include logs in the status response (default: False)
    
    Returns:
    Status: Current request status (Queued, InProgress, or Completed)
    """

async def result_async(application: str, request_id: str) -> AnyJSON:
    """
    Get the result of a completed request asynchronously.
    
    Parameters:
    - application: The fal.ai application ID  
    - request_id: The request ID to retrieve results for
    
    Returns:
    dict: The inference result
    """

async def cancel_async(application: str, request_id: str) -> None:
    """
    Cancel a pending or in-progress request asynchronously.
    
    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to cancel
    """

Async File Upload Operations

Upload files to the fal.media CDN asynchronously without blocking the event loop.

async def upload_async(data: bytes | str, content_type: str) -> str:
    """
    Upload binary data to fal.media CDN asynchronously.
    
    Parameters:
    - data: The data to upload (bytes or string)
    - content_type: MIME type of the data
    
    Returns:
    str: URL of the uploaded file on fal.media CDN
    """

async def upload_file_async(path: PathLike) -> str:
    """
    Upload a file from the filesystem to fal.media CDN asynchronously.
    
    Parameters:
    - path: Path to the file to upload
    
    Returns:
    str: URL of the uploaded file on fal.media CDN
    """

async def upload_image_async(image: "Image.Image", format: str = "jpeg") -> str:
    """
    Upload a PIL Image object to fal.media CDN asynchronously.
    
    Parameters:
    - image: PIL Image object to upload
    - format: Image format for upload (default: "jpeg")
    
    Returns:
    str: URL of the uploaded image on fal.media CDN
    """

Concurrent Operations Example

import asyncio
import fal_client

async def process_multiple_images():
    """Example of running multiple inference requests concurrently."""
    
    prompts = [
        "a cat in a forest",
        "a dog on a beach", 
        "a bird in the sky"
    ]
    
    # Submit all requests concurrently
    tasks = [
        fal_client.run_async("fal-ai/fast-sdxl", arguments={"prompt": prompt})
        for prompt in prompts
    ]
    
    # Wait for all to complete
    results = await asyncio.gather(*tasks)
    
    for i, result in enumerate(results):
        print(f"Image {i+1}: {result['images'][0]['url']}")

asyncio.run(process_multiple_images())

Install with Tessl CLI

npx tessl i tessl/pypi-fal-client
