# Asynchronous Operations

Non-blocking async/await operations for concurrent execution. These functions mirror the synchronous operations but provide async coroutines for high-performance applications requiring concurrent model inference and request management.

## Capabilities

### Direct Async Inference Execution

Execute ML model inference asynchronously without blocking the event loop. Ideal for applications that need to handle multiple inference requests concurrently.

```python { .api }
async def run_async(application: str, arguments: AnyJSON, *, path: str = "", timeout: float | None = None, hint: str | None = None) -> AnyJSON:
    """
    Run an application asynchronously with the given arguments and return the result directly.

    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - timeout: Request timeout in seconds (default: client default_timeout)
    - hint: Optional runner hint for routing (default: None)

    Returns:
    dict: The inference result directly from the model
    """
```

Usage example:

```python
import asyncio
import fal_client

async def main():
    response = await fal_client.run_async(
        "fal-ai/fast-sdxl",
        arguments={"prompt": "a cute cat, realistic, orange"}
    )
    print(response["images"][0]["url"])

asyncio.run(main())
```

### Async Queue-Based Inference

Submit inference requests to a queue asynchronously and get a handle for tracking progress without blocking other operations.

```python { .api }
async def submit_async(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, webhook_url: str | None = None, priority: Priority | None = None) -> AsyncRequestHandle:
    """
    Submit an inference request to the queue asynchronously and return a handle for tracking.

    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - webhook_url: Optional webhook URL for notifications (default: None)
    - priority: Request priority ("normal" or "low", default: None)

    Returns:
    AsyncRequestHandle: Handle for tracking the request asynchronously
    """
```

Usage example:

```python
import asyncio
import fal_client

async def main():
    handle = await fal_client.submit_async(
        "fal-ai/fast-sdxl",
        arguments={"prompt": "a detailed landscape"}
    )

    # Monitor progress asynchronously
    async for event in handle.iter_events(with_logs=True):
        if isinstance(event, fal_client.Queued):
            print(f"Queued at position: {event.position}")
        elif isinstance(event, fal_client.InProgress):
            print("Processing...")
        elif isinstance(event, fal_client.Completed):
            break

    result = await handle.get()
    print(result["images"][0]["url"])

asyncio.run(main())
```

### Async Streaming Inference

Subscribe to streaming updates asynchronously for real-time results without blocking other async operations.

```python { .api }
async def subscribe_async(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, with_logs: bool = False, on_enqueue: Callable[[str], None] | None = None, on_queue_update: Callable[[Status], None] | None = None, priority: Priority | None = None) -> AnyJSON:
    """
    Subscribe to streaming updates for an inference request asynchronously.

    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - with_logs: Include logs in status updates (default: False)
    - on_enqueue: Callback function called when the request is enqueued (default: None)
    - on_queue_update: Callback function called on each status update (default: None)
    - priority: Request priority ("normal" or "low", default: None)

    Returns:
    dict: The final inference result after streaming updates complete
    """
```
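
Usage example (a sketch: the application ID and prompt are placeholders, and it assumes `InProgress` carries the log entries enabled by `with_logs=True` as dicts with a `"message"` field):

```python
import asyncio
import fal_client

def on_queue_update(update):
    # Receives Queued, InProgress, or Completed status objects
    if isinstance(update, fal_client.InProgress) and update.logs:
        for log in update.logs:
            # log entries are assumed to be dicts with a "message" field
            print(log["message"])

async def main():
    result = await fal_client.subscribe_async(
        "fal-ai/fast-sdxl",
        arguments={"prompt": "a quiet harbor at dawn"},
        with_logs=True,
        on_queue_update=on_queue_update,
    )
    print(result["images"][0]["url"])

asyncio.run(main())
```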

### Async Real-time Streaming

Stream inference results in real-time asynchronously for models that support progressive output generation.

```python { .api }
async def stream_async(application: str, arguments: AnyJSON, *, path: str = "/stream", timeout: float | None = None) -> AsyncIterator[dict[str, Any]]:
    """
    Stream inference results in real-time asynchronously.

    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Stream endpoint path (default: "/stream")
    - timeout: Request timeout in seconds (default: None)

    Returns:
    AsyncIterator[dict]: Async iterator of streaming results
    """
```

Usage example:

```python
import asyncio
import fal_client

async def main():
    async for result in fal_client.stream_async(
        "fal-ai/streaming-model",
        arguments={"prompt": "progressive generation"}
    ):
        print(f"Partial result: {result}")

asyncio.run(main())
```

### Async Request Status Operations

Check status, retrieve results, and cancel requests asynchronously using request IDs.

```python { .api }
async def status_async(application: str, request_id: str, *, with_logs: bool = False) -> Status:
    """
    Get the current status of a request asynchronously.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to check
    - with_logs: Include logs in the status response (default: False)

    Returns:
    Status: Current request status (Queued, InProgress, or Completed)
    """

async def result_async(application: str, request_id: str) -> AnyJSON:
    """
    Get the result of a completed request asynchronously.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to retrieve results for

    Returns:
    dict: The inference result
    """

async def cancel_async(application: str, request_id: str) -> None:
    """
    Cancel a pending or in-progress request asynchronously.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to cancel
    """
```
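
Usage example (a sketch combining these with `submit_async`; the application ID and prompt are placeholders, and it assumes the returned `AsyncRequestHandle` exposes the request ID as `handle.request_id`):

```python
import asyncio
import fal_client

async def main():
    handle = await fal_client.submit_async(
        "fal-ai/fast-sdxl",
        arguments={"prompt": "a lighthouse in a storm"}
    )
    request_id = handle.request_id  # assumed attribute on AsyncRequestHandle

    # Poll the queue until the request completes
    while True:
        status = await fal_client.status_async("fal-ai/fast-sdxl", request_id)
        if isinstance(status, fal_client.Completed):
            break
        await asyncio.sleep(1)

    # Retrieve the finished result by request ID
    result = await fal_client.result_async("fal-ai/fast-sdxl", request_id)
    print(result["images"][0]["url"])

asyncio.run(main())
```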

### Async File Upload Operations

Upload files to the fal.media CDN asynchronously without blocking the event loop.

```python { .api }
async def upload_async(data: bytes | str, content_type: str) -> str:
    """
    Upload binary data to fal.media CDN asynchronously.

    Parameters:
    - data: The data to upload (bytes or string)
    - content_type: MIME type of the data

    Returns:
    str: URL of the uploaded file on fal.media CDN
    """

async def upload_file_async(path: PathLike) -> str:
    """
    Upload a file from the filesystem to fal.media CDN asynchronously.

    Parameters:
    - path: Path to the file to upload

    Returns:
    str: URL of the uploaded file on fal.media CDN
    """

async def upload_image_async(image: "Image.Image", format: str = "jpeg") -> str:
    """
    Upload a PIL Image object to fal.media CDN asynchronously.

    Parameters:
    - image: PIL Image object to upload
    - format: Image format for upload (default: "jpeg")

    Returns:
    str: URL of the uploaded image on fal.media CDN
    """
```
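
Usage example (a sketch: the byte payload and file path are placeholders; the returned URLs can then be passed as arguments to models that accept file URLs):

```python
import asyncio
import fal_client

async def main():
    # Upload raw bytes with an explicit MIME type
    text_url = await fal_client.upload_async(b"hello fal", "text/plain")
    print(text_url)

    # Upload a file from disk (placeholder path)
    file_url = await fal_client.upload_file_async("input.png")
    print(file_url)

asyncio.run(main())
```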

### Concurrent Operations Example

```python
import asyncio
import fal_client

async def process_multiple_images():
    """Example of running multiple inference requests concurrently."""
    prompts = [
        "a cat in a forest",
        "a dog on a beach",
        "a bird in the sky"
    ]

    # Submit all requests concurrently
    tasks = [
        fal_client.run_async("fal-ai/fast-sdxl", arguments={"prompt": prompt})
        for prompt in prompts
    ]

    # Wait for all to complete
    results = await asyncio.gather(*tasks)

    for i, result in enumerate(results):
        print(f"Image {i+1}: {result['images'][0]['url']}")

asyncio.run(process_multiple_images())
```