
# ControlNet

Precise image generation control using ControlNet with support for depth, canny, pose, and other conditioning methods. ControlNet enables fine-grained control over image composition and structure while maintaining creative flexibility.

## Capabilities

### ControlNet Configuration

Configure ControlNet units for precise image conditioning during generation.

```python { .api }
class ControlNetUnit:
    """Configuration for a single ControlNet unit."""

    def __init__(
        self,
        image: Image.Image = None,
        mask: Image.Image = None,
        module: str = "none",
        model: str = "None",
        weight: float = 1.0,
        resize_mode: str = "Resize and Fill",
        low_vram: bool = False,
        processor_res: int = 512,
        threshold_a: float = 64,
        threshold_b: float = 64,
        guidance_start: float = 0.0,
        guidance_end: float = 1.0,
        control_mode: int = 0,
        pixel_perfect: bool = False,
        hr_option: str = "Both",
        enabled: bool = True
    ):
        """
        Initialize ControlNet unit configuration.

        Parameters:
        - image: Input control image (depth map, canny edges, pose, etc.)
        - mask: Optional mask for selective control
        - module: Preprocessor module ("canny", "depth", "openpose", "lineart", etc.)
        - model: ControlNet model name ("control_canny", "control_depth", etc.)
        - weight: Control strength (0.0-2.0, default 1.0)
        - resize_mode: How to handle size differences
          * "Resize and Fill": Resize and pad/crop as needed
          * "Crop and Resize": Crop to fit, then resize
          * "Just Resize": Simple resize (may distort)
        - low_vram: Enable low VRAM mode for memory-constrained systems
        - processor_res: Resolution for preprocessing (default 512)
        - threshold_a: First threshold parameter for the preprocessor
        - threshold_b: Second threshold parameter for the preprocessor
        - guidance_start: When to start applying control (0.0-1.0)
        - guidance_end: When to stop applying control (0.0-1.0)
        - control_mode: Control balance mode
          * 0: "Balanced" - balance between prompt and control
          * 1: "My prompt is more important" - favor the text prompt
          * 2: "ControlNet is more important" - favor the control input
        - pixel_perfect: Enable pixel-perfect mode for optimal results
        - hr_option: High-res behavior ("Both", "Low res only", "High res only")
        - enabled: Whether this unit is active
        """

    def to_dict(self) -> Dict:
        """Convert to dictionary format for API submission."""
```
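The `guidance_start`/`guidance_end` fractions determine when during sampling the unit applies. As a rough illustration (an assumption about the semantics, not the extension's exact step arithmetic), mapping the fractions onto sampler steps looks like:

```python
def control_step_window(total_steps: int,
                        guidance_start: float,
                        guidance_end: float) -> tuple:
    """Approximate sampler-step window in which a ControlNet unit is
    active, assuming guidance_start/guidance_end are fractions of the
    total step count (illustration only, not the extension's code)."""
    first = round(total_steps * guidance_start)
    last = round(total_steps * guidance_end)
    return first, last

# A unit with guidance_start=0.0, guidance_end=0.8 over 30 steps is
# active for steps 0-24, leaving the final 20% of steps unconstrained
# so the sampler can refine details freely.
```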

### ControlNet API Integration

Direct ControlNet API methods for preprocessing and model management.

```python { .api }
def controlnet_version() -> str:
    """
    Get ControlNet extension version.

    Returns:
        Version string of the installed ControlNet extension
    """

def controlnet_model_list() -> List[str]:
    """
    Get list of available ControlNet models.

    Returns:
        List of ControlNet model names available for use
    """

def controlnet_module_list() -> List[str]:
    """
    Get list of available ControlNet preprocessor modules.

    Returns:
        List of preprocessor module names (canny, depth, openpose, etc.)
    """

def controlnet_detect(
    controlnet_module: str,
    controlnet_input_images: List[str],
    controlnet_processor_res: int = 512,
    controlnet_threshold_a: float = 64,
    controlnet_threshold_b: float = 64,
    **kwargs
) -> Dict:
    """
    Run ControlNet preprocessing on images.

    Parameters:
    - controlnet_module: Preprocessor module name
    - controlnet_input_images: List of base64-encoded input images
    - controlnet_processor_res: Processing resolution
    - controlnet_threshold_a: First threshold parameter
    - controlnet_threshold_b: Second threshold parameter

    Returns:
        Dictionary containing the processed control images
    """
```
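`controlnet_input_images` takes base64-encoded image data; the library ships its own helper for this (`webuiapi.raw_b64_img`). A minimal stdlib sketch of the same encoding, just to show the shape of the data:

```python
import base64

def encode_image_bytes(image_bytes: bytes) -> str:
    """Base64-encode raw image file bytes (e.g. a PNG read from disk)
    into the string form expected by controlnet_input_images."""
    return base64.b64encode(image_bytes).decode("utf-8")

def decode_image_bytes(b64: str) -> bytes:
    """Invert the encoding, e.g. to save a processed control image
    returned by controlnet_detect back to a file."""
    return base64.b64decode(b64)
```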

**Usage Examples:**

```python
from PIL import Image
import webuiapi

api = webuiapi.WebUIApi()

# Check ControlNet availability
print(f"ControlNet version: {api.controlnet_version()}")
print(f"Available models: {api.controlnet_model_list()}")
print(f"Available modules: {api.controlnet_module_list()}")

# Load reference image
reference_image = Image.open("reference_pose.jpg")

# Create a ControlNet unit for pose control
pose_unit = webuiapi.ControlNetUnit(
    image=reference_image,
    module="openpose_full",
    model="control_openpose",
    weight=1.0,
    guidance_start=0.0,
    guidance_end=0.8,
    control_mode=0,  # Balanced
    pixel_perfect=True
)

# Generate image with pose control
result = api.txt2img(
    prompt="a warrior in medieval armor, detailed, cinematic lighting",
    negative_prompt="blurry, low quality",
    width=512,
    height=768,
    controlnet_units=[pose_unit]
)

result.image.save("controlled_generation.png")

# Multiple ControlNet units for complex control
depth_image = Image.open("depth_map.png")
canny_image = Image.open("canny_edges.png")

depth_unit = webuiapi.ControlNetUnit(
    image=depth_image,
    module="depth_midas",
    model="control_depth",
    weight=0.8,
    control_mode=2  # ControlNet is more important
)

canny_unit = webuiapi.ControlNetUnit(
    image=canny_image,
    module="canny",
    model="control_canny",
    weight=0.6,
    threshold_a=50,
    threshold_b=200
)

# Generate with multiple controls
result = api.txt2img(
    prompt="futuristic cityscape, neon lights, cyberpunk",
    width=768,
    height=512,
    controlnet_units=[depth_unit, canny_unit]
)

# Preprocessing example - extract edges from a photo
photo = Image.open("photo.jpg")
photo_b64 = webuiapi.raw_b64_img(photo)

# Detect edges using Canny
canny_result = api.controlnet_detect(
    controlnet_module="canny",
    controlnet_input_images=[photo_b64],
    controlnet_threshold_a=100,
    controlnet_threshold_b=200
)

# The result contains processed control images that can be used
# in subsequent generations

# Advanced control with temporal consistency
sequence_images = [
    Image.open(f"frame_{i:03d}.jpg") for i in range(10)
]

for i, frame in enumerate(sequence_images):
    control_unit = webuiapi.ControlNetUnit(
        image=frame,
        module="openpose_full",
        model="control_openpose",
        weight=1.2,
        guidance_start=0.1,
        guidance_end=0.9,
        pixel_perfect=True
    )

    result = api.txt2img(
        prompt="animated character dancing, consistent style",
        seed=12345,  # Keep the seed consistent for style
        controlnet_units=[control_unit]
    )

    result.image.save(f"controlled_frame_{i:03d}.png")
```

## Common ControlNet Modules and Models

### Popular Preprocessor Modules

- **canny**: Edge detection using the Canny algorithm
- **depth_midas**: Depth estimation using MiDaS
- **depth_leres**: High-quality depth estimation using LeReS
- **openpose_full**: Full-body pose detection
- **openpose_hand**: Hand pose detection only
- **openpose_face**: Face pose detection only
- **lineart**: Line art extraction
- **lineart_anime**: Anime-style line art
- **seg_ofade20k**: Segmentation using ADE20K
- **normal_map**: Surface normal estimation
- **mlsd**: Line segment detection
- **scribble**: Scribble/sketch processing

### Corresponding Models

- **control_canny**: For canny edge control
- **control_depth**: For depth map control
- **control_openpose**: For pose control
- **control_lineart**: For line art control
- **control_seg**: For segmentation control
- **control_normal**: For normal map control
- **control_mlsd**: For line segment control
- **control_scribble**: For scribble control
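The module-to-model pairing above can be captured in a small lookup helper. This is a convenience sketch based only on the names listed in this document; on a real installation, check `controlnet_module_list()` and `controlnet_model_list()` instead, since installed model filenames often differ.

```python
# Assumed pairing taken from the module/model lists above.
MODULE_TO_MODEL = {
    "canny": "control_canny",
    "depth_midas": "control_depth",
    "depth_leres": "control_depth",
    "openpose_full": "control_openpose",
    "openpose_hand": "control_openpose",
    "openpose_face": "control_openpose",
    "lineart": "control_lineart",
    "lineart_anime": "control_lineart",
    "seg_ofade20k": "control_seg",
    "normal_map": "control_normal",
    "mlsd": "control_mlsd",
    "scribble": "control_scribble",
}

def model_for_module(module: str) -> str:
    """Return the ControlNet model conventionally paired with a
    preprocessor module, per the lists above."""
    try:
        return MODULE_TO_MODEL[module]
    except KeyError:
        raise ValueError(f"No known model for module {module!r}")
```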

## Types

```python { .api }
class ControlNetUnit:
    """ControlNet configuration unit."""
    image: Optional[Image.Image]   # Control input image
    mask: Optional[Image.Image]    # Optional mask
    module: str                    # Preprocessor module name
    model: str                     # ControlNet model name
    weight: float                  # Control strength (0.0-2.0)
    resize_mode: str               # Resize handling mode
    low_vram: bool                 # Low VRAM mode
    processor_res: int             # Processing resolution
    threshold_a: float             # First threshold
    threshold_b: float             # Second threshold
    guidance_start: float          # Control start timing (0.0-1.0)
    guidance_end: float            # Control end timing (0.0-1.0)
    control_mode: int              # Control balance mode (0-2)
    pixel_perfect: bool            # Pixel-perfect mode
    hr_option: str                 # High-res behavior
    enabled: bool                  # Unit enabled status
```
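For orientation, here is a sketch of how these attributes might flatten into the dictionary that `to_dict()` submits. The key names are assumptions mirroring the attribute names above, not taken from the library; the extension's actual payload keys may differ.

```python
from typing import Any, Dict

def unit_payload(module: str = "none", model: str = "None",
                 weight: float = 1.0, resize_mode: str = "Resize and Fill",
                 low_vram: bool = False, processor_res: int = 512,
                 threshold_a: float = 64, threshold_b: float = 64,
                 guidance_start: float = 0.0, guidance_end: float = 1.0,
                 control_mode: int = 0, pixel_perfect: bool = False,
                 hr_option: str = "Both", enabled: bool = True) -> Dict[str, Any]:
    """Flatten unit settings into a plain dict for API submission
    (illustrative; key names are assumed, not the library's)."""
    return {
        "module": module, "model": model, "weight": weight,
        "resize_mode": resize_mode, "low_vram": low_vram,
        "processor_res": processor_res,
        "threshold_a": threshold_a, "threshold_b": threshold_b,
        "guidance_start": guidance_start, "guidance_end": guidance_end,
        "control_mode": control_mode, "pixel_perfect": pixel_perfect,
        "hr_option": hr_option, "enabled": enabled,
    }
```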