
tessl/pypi-webuiapi

Python API client for AUTOMATIC1111/stable-diffusion-webui enabling programmatic Stable Diffusion image generation


ControlNet

ControlNet provides precise control over image generation, with support for depth, canny, pose, and other conditioning methods. It enables fine-grained control over image composition and structure while maintaining creative flexibility.

Capabilities

ControlNet Configuration

Configure ControlNet units for precise image conditioning during generation.

class ControlNetUnit:
    """Configuration for a single ControlNet unit."""
    
    def __init__(
        self,
        image: Optional[Image.Image] = None,
        mask: Optional[Image.Image] = None,
        module: str = "none",
        model: str = "None",
        weight: float = 1.0,
        resize_mode: str = "Resize and Fill",
        low_vram: bool = False,
        processor_res: int = 512,
        threshold_a: float = 64,
        threshold_b: float = 64,
        guidance_start: float = 0.0,
        guidance_end: float = 1.0,
        control_mode: int = 0,
        pixel_perfect: bool = False,
        hr_option: str = "Both",
        enabled: bool = True
    ):
        """
        Initialize ControlNet unit configuration.

        Parameters:
        - image: Input control image (depth map, canny edges, pose, etc.)
        - mask: Optional mask for selective control
        - module: Preprocessor module ("canny", "depth", "openpose", "lineart", etc.)
        - model: ControlNet model name ("control_canny", "control_depth", etc.)
        - weight: Control strength (0.0-2.0, default 1.0)
        - resize_mode: How to handle size differences
          * "Resize and Fill": Resize and pad/crop as needed
          * "Crop and Resize": Crop to fit then resize
          * "Just Resize": Simple resize (may distort)
        - low_vram: Enable low VRAM mode for memory-constrained systems
        - processor_res: Resolution for preprocessing (default 512)
        - threshold_a: First threshold parameter for preprocessor
        - threshold_b: Second threshold parameter for preprocessor
        - guidance_start: When to start applying control (0.0-1.0)
        - guidance_end: When to stop applying control (0.0-1.0)
        - control_mode: Control balance mode
          * 0: "Balanced" - Balance between prompt and control
          * 1: "My prompt is more important" - Favor text prompt
          * 2: "ControlNet is more important" - Favor control input
        - pixel_perfect: Automatically match the preprocessor resolution to the output image size (overrides processor_res)
        - hr_option: High-res behavior ("Both", "Low res only", "High res only")
        - enabled: Whether this unit is active
        """

    def to_dict(self) -> Dict:
        """Convert to dictionary format for API submission."""
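The exact wire format is internal to the library, but a minimal stdlib sketch of the kind of dictionary `to_dict` produces for API submission (this `UnitSketch` class is illustrative only; field names mirror the constructor parameters above and are not a guarantee of the real payload layout):

```python
from dataclasses import dataclass, asdict

# Hypothetical mirror of ControlNetUnit for illustration only; the real
# webuiapi.ControlNetUnit also handles base64-encoding the control image.
@dataclass
class UnitSketch:
    module: str = "none"
    model: str = "None"
    weight: float = 1.0
    guidance_start: float = 0.0
    guidance_end: float = 1.0
    control_mode: int = 0
    pixel_perfect: bool = False
    enabled: bool = True

    def to_dict(self) -> dict:
        # Serialize every field into a plain dict ready for JSON submission.
        return asdict(self)

unit = UnitSketch(module="canny", model="control_canny", weight=0.8)
payload = unit.to_dict()
print(payload["module"], payload["weight"])  # canny 0.8
```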

ControlNet API Integration

Direct ControlNet API methods for preprocessing and model management.

def controlnet_version() -> str:
    """
    Get ControlNet extension version.

    Returns:
    Version string of installed ControlNet extension
    """

def controlnet_model_list() -> List[str]:
    """
    Get list of available ControlNet models.

    Returns:
    List of ControlNet model names available for use
    """

def controlnet_module_list() -> List[str]:
    """
    Get list of available ControlNet preprocessor modules.

    Returns:
    List of preprocessor module names (canny, depth, openpose, etc.)
    """

def controlnet_detect(
    controlnet_module: str,
    controlnet_input_images: List[str],
    controlnet_processor_res: int = 512,
    controlnet_threshold_a: float = 64,
    controlnet_threshold_b: float = 64,
    **kwargs
) -> Dict:
    """
    Run ControlNet preprocessing on images.

    Parameters:
    - controlnet_module: Preprocessor module name
    - controlnet_input_images: List of base64-encoded input images
    - controlnet_processor_res: Processing resolution
    - controlnet_threshold_a: First threshold parameter
    - controlnet_threshold_b: Second threshold parameter

    Returns:
    Dictionary containing processed control images
    """
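Under the hood this call posts a JSON body to the ControlNet extension's HTTP API. A hedged stdlib sketch of assembling such a body (field names follow the stub above; the image bytes are a stand-in, and real clients would send a base64-encoded PNG or JPEG):

```python
import base64
import json

def build_detect_payload(image_bytes: bytes, module: str = "canny",
                         res: int = 512, thr_a: float = 100,
                         thr_b: float = 200) -> str:
    """Build a JSON request body for ControlNet preprocessing.

    Keys mirror the controlnet_detect parameters documented above.
    """
    payload = {
        "controlnet_module": module,
        "controlnet_input_images": [base64.b64encode(image_bytes).decode("ascii")],
        "controlnet_processor_res": res,
        "controlnet_threshold_a": thr_a,
        "controlnet_threshold_b": thr_b,
    }
    return json.dumps(payload)

body = build_detect_payload(b"\x89PNG...stand-in bytes")
print(json.loads(body)["controlnet_module"])  # canny
```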

Usage Examples

from PIL import Image
import webuiapi

api = webuiapi.WebUIApi()

# Check ControlNet availability
print(f"ControlNet version: {api.controlnet_version()}")
print(f"Available models: {api.controlnet_model_list()}")
print(f"Available modules: {api.controlnet_module_list()}")

# Load reference image
reference_image = Image.open("reference_pose.jpg")

# Create ControlNet unit for pose control
pose_unit = webuiapi.ControlNetUnit(
    image=reference_image,
    module="openpose_full",
    model="control_openpose",
    weight=1.0,
    guidance_start=0.0,
    guidance_end=0.8,
    control_mode=0,  # Balanced
    pixel_perfect=True
)

# Generate image with pose control
result = api.txt2img(
    prompt="a warrior in medieval armor, detailed, cinematic lighting",
    negative_prompt="blurry, low quality",
    width=512,
    height=768,
    controlnet_units=[pose_unit]
)

result.image.save("controlled_generation.png")

# Multiple ControlNet units for complex control
depth_image = Image.open("depth_map.png")
canny_image = Image.open("canny_edges.png")

depth_unit = webuiapi.ControlNetUnit(
    image=depth_image,
    module="depth_midas",
    model="control_depth",
    weight=0.8,
    control_mode=2  # ControlNet more important
)

canny_unit = webuiapi.ControlNetUnit(
    image=canny_image,
    module="canny",
    model="control_canny",
    weight=0.6,
    threshold_a=50,
    threshold_b=200
)

# Generate with multiple controls
result = api.txt2img(
    prompt="futuristic cityscape, neon lights, cyberpunk",
    width=768,
    height=512,
    controlnet_units=[depth_unit, canny_unit]
)

# Preprocessing example - extract edges from photo
photo = Image.open("photo.jpg")
photo_b64 = webuiapi.raw_b64_img(photo)

# Detect edges using Canny
canny_result = api.controlnet_detect(
    controlnet_module="canny",
    controlnet_input_images=[photo_b64],
    controlnet_threshold_a=100,
    controlnet_threshold_b=200
)

# The result contains processed control images that can be used
# in subsequent generations

# Frame-by-frame control with a fixed seed for a consistent look
sequence_images = [
    Image.open(f"frame_{i:03d}.jpg") for i in range(10)
]

for i, frame in enumerate(sequence_images):
    control_unit = webuiapi.ControlNetUnit(
        image=frame,
        module="openpose_full",
        model="control_openpose",
        weight=1.2,
        guidance_start=0.1,
        guidance_end=0.9,
        pixel_perfect=True
    )
    
    result = api.txt2img(
        prompt="animated character dancing, consistent style",
        seed=12345,  # Keep seed consistent for style
        controlnet_units=[control_unit]
    )
    
    result.image.save(f"controlled_frame_{i:03d}.png")
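The dictionary returned by `controlnet_detect` carries the processed control images as base64 strings. A minimal stdlib sketch of decoding them back to raw bytes (the `images` key is an assumption about the response shape, not a documented guarantee):

```python
import base64

def decode_control_images(response: dict) -> list:
    """Decode base64-encoded control images from a detect-style response."""
    return [base64.b64decode(s) for s in response.get("images", [])]

# Simulated response with one tiny stand-in payload
fake = {"images": [base64.b64encode(b"edge-map-bytes").decode("ascii")]}
decoded = decode_control_images(fake)
print(decoded[0])  # b'edge-map-bytes'
```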

Common ControlNet Modules and Models

Popular Preprocessor Modules

  • canny: Edge detection using Canny algorithm
  • depth_midas: Depth estimation using MiDaS
  • depth_leres: High-quality depth using LeReS
  • openpose_full: Full body pose detection
  • openpose_hand: Hand pose detection only
  • openpose_face: Face pose detection only
  • lineart: Line art extraction
  • lineart_anime: Anime-style line art
  • seg_ofade20k: Segmentation using ADE20K
  • normal_map: Surface normal estimation
  • mlsd: Line segment detection
  • scribble: Scribble/sketch processing

Corresponding Models

  • control_canny: For canny edge control
  • control_depth: For depth map control
  • control_openpose: For pose control
  • control_lineart: For line art control
  • control_seg: For segmentation control
  • control_normal: For normal map control
  • control_mlsd: For line segment control
  • control_scribble: For scribble control
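Picking the model that matches a preprocessor is a common chore; a small sketch of a lookup table built from the pairs above (names are illustrative shorthand, and real installations often use versioned filenames such as `control_v11p_sd15_canny`, so check `controlnet_model_list()` on your own server):

```python
# Illustrative module-to-model pairing taken from the lists above.
MODULE_TO_MODEL = {
    "canny": "control_canny",
    "depth_midas": "control_depth",
    "depth_leres": "control_depth",
    "openpose_full": "control_openpose",
    "lineart": "control_lineart",
    "lineart_anime": "control_lineart",
    "seg_ofade20k": "control_seg",
    "normal_map": "control_normal",
    "mlsd": "control_mlsd",
    "scribble": "control_scribble",
}

def model_for_module(module: str) -> str:
    """Return the matching ControlNet model for a preprocessor module."""
    try:
        return MODULE_TO_MODEL[module]
    except KeyError:
        raise ValueError(f"No known model for module {module!r}")

print(model_for_module("depth_midas"))  # control_depth
```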

Types

class ControlNetUnit:
    """ControlNet configuration unit."""
    image: Optional[Image.Image]  # Control input image
    mask: Optional[Image.Image]  # Optional mask
    module: str  # Preprocessor module name
    model: str  # ControlNet model name
    weight: float  # Control strength (0.0-2.0)
    resize_mode: str  # Resize handling mode
    low_vram: bool  # Low VRAM mode
    processor_res: int  # Processing resolution
    threshold_a: float  # First threshold
    threshold_b: float  # Second threshold
    guidance_start: float  # Control start timing (0.0-1.0)
    guidance_end: float  # Control end timing (0.0-1.0)
    control_mode: int  # Control balance mode (0-2)
    pixel_perfect: bool  # Pixel-perfect mode
    hr_option: str  # High-res behavior
    enabled: bool  # Unit enabled status

Install with Tessl CLI

npx tessl i tessl/pypi-webuiapi
