# Synchronous Operations

Direct execution and queue-based operations with blocking I/O. These functions provide immediate access to fal.ai model inference capabilities with traditional synchronous Python patterns.

## Capabilities

### Direct Inference Execution

Execute ML model inference directly and return results immediately. This is the simplest way to run inference when you don't need queue tracking or status monitoring.

```python { .api }
def run(application: str, arguments: AnyJSON, *, path: str = "", timeout: float | None = None, hint: str | None = None) -> AnyJSON:
    """
    Run an application with the given arguments and return the result directly.

    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - timeout: Request timeout in seconds (default: client default_timeout)
    - hint: Optional runner hint for routing (default: None)

    Returns:
        dict: The inference result directly from the model
    """
```

Usage example:

```python
import fal_client

response = fal_client.run(
    "fal-ai/fast-sdxl",
    arguments={"prompt": "a cute cat, realistic, orange"},
)
print(response["images"][0]["url"])
```

### Queue-Based Inference

Submit inference requests to a queue and get a handle for tracking progress. This is ideal for long-running models where you need to monitor status and handle potential queuing delays.

```python { .api }
def submit(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, webhook_url: str | None = None, priority: Priority | None = None) -> SyncRequestHandle:
    """
    Submit an inference request to the queue and return a handle for tracking.

    Parameters:
    - application: The fal.ai application ID (e.g., "fal-ai/fast-sdxl")
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - webhook_url: Optional webhook URL for notifications (default: None)
    - priority: Request priority ("normal" or "low", default: None)

    Returns:
        SyncRequestHandle: Handle for tracking the request
    """
```

Usage example:

```python
import fal_client

handle = fal_client.submit(
    "fal-ai/fast-sdxl",
    arguments={"prompt": "a detailed landscape"},
)

# Monitor progress until the request completes
for event in handle.iter_events(with_logs=True):
    if isinstance(event, fal_client.Queued):
        print(f"Queued at position: {event.position}")
    elif isinstance(event, fal_client.InProgress):
        print("Processing...")
    elif isinstance(event, fal_client.Completed):
        break

result = handle.get()
print(result["images"][0]["url"])
```

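When results should be delivered out-of-band rather than polled, `submit` also accepts the `webhook_url` and `priority` options from the signature above. A minimal sketch (the webhook URL is illustrative, and `handle.request_id` is assumed to expose the queued request's identifier):

```python
import fal_client

# Fire-and-forget submission: fal.ai delivers the result to the webhook,
# so no polling loop is needed in this process.
handle = fal_client.submit(
    "fal-ai/fast-sdxl",
    arguments={"prompt": "a detailed landscape"},
    webhook_url="https://example.com/fal/webhook",  # illustrative endpoint
    priority="low",  # deprioritize relative to "normal" traffic
)
print(f"Submitted request: {handle.request_id}")  # assumed attribute
```
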
### Subscription-Based Inference

Submit an inference request and block until the final result is ready. Optional callbacks receive queue status updates while the model processes the request.

```python { .api }
def subscribe(application: str, arguments: AnyJSON, *, path: str = "", hint: str | None = None, with_logs: bool = False, on_enqueue: Callable[[str], None] | None = None, on_queue_update: Callable[[Status], None] | None = None, priority: Priority | None = None) -> AnyJSON:
    """
    Subscribe to an inference request, receiving status updates until the final result is returned.

    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Optional subpath when applicable (default: "")
    - hint: Optional runner hint for routing (default: None)
    - with_logs: Include logs in status updates (default: False)
    - on_enqueue: Callback invoked when the request is enqueued (default: None)
    - on_queue_update: Callback invoked on each status update (default: None)
    - priority: Request priority ("normal" or "low", default: None)

    Returns:
        dict: The final inference result once processing completes
    """
```

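Usage example (a sketch following the signature above; the log-record shape, a dict with a `message` key, is an assumption):

```python
import fal_client

def on_queue_update(update):
    # Invoked on each status change; logs are populated when with_logs=True
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(log["message"])  # assumes log records carry a "message" key

result = fal_client.subscribe(
    "fal-ai/fast-sdxl",
    arguments={"prompt": "a cute cat, realistic, orange"},
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result["images"][0]["url"])
```
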
### Real-time Streaming

Stream inference results in real time for models that support progressive output generation.

```python { .api }
def stream(application: str, arguments: AnyJSON, *, path: str = "/stream", timeout: float | None = None) -> Iterator[dict[str, Any]]:
    """
    Stream inference results in real time.

    Parameters:
    - application: The fal.ai application ID
    - arguments: Dictionary of arguments to pass to the model
    - path: Stream endpoint path (default: "/stream")
    - timeout: Request timeout in seconds (default: None)

    Returns:
        Iterator[dict]: Iterator of streaming results
    """
```

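Usage example (a sketch; the application ID is illustrative and must point at a model that exposes a streaming endpoint):

```python
import fal_client

# Each yielded event is a partial or progressive result;
# the iterator ends once the model finishes generating.
for event in fal_client.stream(
    "fal-ai/any-llm",  # illustrative streaming-capable application
    arguments={"prompt": "Explain diffusion models in two sentences."},
):
    print(event)
```
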
### Request Status Operations

Check status, retrieve results, and cancel requests using request IDs.

```python { .api }
def status(application: str, request_id: str, *, with_logs: bool = False) -> Status:
    """
    Get the current status of a request.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to check
    - with_logs: Include logs in the status response (default: False)

    Returns:
        Status: Current request status (Queued, InProgress, or Completed)
    """

def result(application: str, request_id: str) -> AnyJSON:
    """
    Get the result of a completed request.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to retrieve results for

    Returns:
        dict: The inference result
    """

def cancel(application: str, request_id: str) -> None:
    """
    Cancel a pending or in-progress request.

    Parameters:
    - application: The fal.ai application ID
    - request_id: The request ID to cancel
    """
```

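Usage example (a sketch combining `submit` with the status operations above; `handle.request_id` is assumed to expose the queued request's ID):

```python
import fal_client

handle = fal_client.submit(
    "fal-ai/fast-sdxl",
    arguments={"prompt": "a red panda, watercolor"},
)
request_id = handle.request_id  # assumed attribute carrying the request ID

# The status can be checked later, even from a different process
current = fal_client.status("fal-ai/fast-sdxl", request_id, with_logs=True)

if isinstance(current, fal_client.Completed):
    result = fal_client.result("fal-ai/fast-sdxl", request_id)
    print(result["images"][0]["url"])
else:
    # Give up on requests that are no longer needed
    fal_client.cancel("fal-ai/fast-sdxl", request_id)
```
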
### File Upload Operations

Upload files to the fal.media CDN for use in model inference.

```python { .api }
def upload(data: bytes | str, content_type: str) -> str:
    """
    Upload binary data to fal.media CDN.

    Parameters:
    - data: The data to upload (bytes or string)
    - content_type: MIME type of the data

    Returns:
        str: URL of the uploaded file on fal.media CDN
    """

def upload_file(path: PathLike) -> str:
    """
    Upload a file from the filesystem to fal.media CDN.

    Parameters:
    - path: Path to the file to upload

    Returns:
        str: URL of the uploaded file on fal.media CDN
    """

def upload_image(image: "Image.Image", format: str = "jpeg") -> str:
    """
    Upload a PIL Image object to fal.media CDN.

    Parameters:
    - image: PIL Image object to upload
    - format: Image format for upload (default: "jpeg")

    Returns:
        str: URL of the uploaded image on fal.media CDN
    """
```

Usage example:

```python
import fal_client

# Upload a file and use it in inference
audio_url = fal_client.upload_file("path/to/audio.wav")
response = fal_client.run(
    "fal-ai/whisper",
    arguments={"audio_url": audio_url},
)
print(response["text"])
```

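The same pattern works for in-memory images via `upload_image`; a minimal sketch (requires Pillow, and the generated image is purely illustrative):

```python
import fal_client
from PIL import Image

# Build an image in memory and upload it without writing to disk
image = Image.new("RGB", (512, 512), color="orange")
image_url = fal_client.upload_image(image, format="png")
print(image_url)
```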