# Monitoring & Integration

Prometheus metrics collection, Sentry integration, and Django-specific monitoring features for comprehensive observability and error tracking.

## Capabilities

### Prometheus Integration

Collect and expose RQ metrics for Prometheus monitoring.

```python { .api }
from django_rq.contrib.prometheus import RQCollector

class RQCollector:
    """
    Prometheus metrics collector for RQ statistics.

    Metrics exposed:
    - rq_workers: Number of workers by queue and state
    - rq_jobs: Job counts by queue and status
    - rq_job_successful_total: Total successful jobs by worker
    - rq_job_failed_total: Total failed jobs by worker
    - rq_working_seconds_total: Total working time by worker
    """

    def collect(self):
        """
        Collect current RQ metrics.

        Returns:
            Generator: Prometheus metric families
        """
```
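
If you need these metrics outside the built-in endpoint, the collector can be registered with a `prometheus_client` registry. A minimal sketch, assuming `RQCollector` can be instantiated without arguments (verify against your django-rq version); the view name is illustrative:

```python
from django.http import HttpResponse
from django_rq.contrib.prometheus import RQCollector
from prometheus_client import CONTENT_TYPE_LATEST, CollectorRegistry, generate_latest

# Dedicated registry so RQ metrics are not mixed with other collectors.
registry = CollectorRegistry()
registry.register(RQCollector())

def rq_metrics_view(request):  # illustrative name, not part of django-rq
    # Render all collected metric families in the Prometheus text format.
    return HttpResponse(generate_latest(registry), content_type=CONTENT_TYPE_LATEST)
```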

#### Metrics Endpoint

Access Prometheus metrics via an HTTP endpoint.

```python { .api }
def prometheus_metrics(request):
    """
    Prometheus metrics endpoint.

    Authentication:
    - Django staff user session
    - Bearer token authentication

    Returns:
        HttpResponse: Prometheus-format metrics

    URL: /django-rq/metrics/
    """
```

Usage example:

```python
# Install prometheus support
# pip install django-rq[prometheus]

# settings.py - Enable metrics collection
INSTALLED_APPS = [
    'django_rq',
    # ... other apps
]

# Access metrics
# GET /django-rq/metrics/
# Authorization: Bearer your-api-token
```
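
The endpoint can be exercised from a Django shell or a test. A small sketch, assuming the bearer token shown above is the one your project accepts:

```python
from django.test import Client

client = Client()
# The Authorization header carries the bearer token from the usage example above.
response = client.get('/django-rq/metrics/', HTTP_AUTHORIZATION='Bearer your-api-token')
print(response.status_code, response['Content-Type'])
```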

#### Prometheus Configuration

Configure Prometheus to scrape Django-RQ metrics:

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'django-rq'
    static_configs:
      - targets: ['localhost:8000']
    metrics_path: '/django-rq/metrics/'
    bearer_token: 'your-api-token'
    scrape_interval: 30s
```

### Sentry Integration

Configure Sentry for error tracking and monitoring.

```python { .api }
def configure_sentry(sentry_dsn, **options):
    """
    Configure the Sentry client for RQ workers.

    Args:
        sentry_dsn: Sentry DSN URL
        **options: Additional Sentry configuration options

    Options:
        sentry_debug: Enable debug mode
        sentry_ca_certs: Path to CA certificates

    Integrations:
    - RedisIntegration
    - RqIntegration
    - DjangoIntegration
    """
```

#### Worker Sentry Configuration

Configure Sentry for RQ workers via the command line:

```bash
# Override Django Sentry configuration for workers
python manage.py rqworker --sentry-dsn=https://key@sentry.io/project

# With additional options
python manage.py rqworker \
    --sentry-dsn=https://key@sentry.io/project \
    --sentry-debug \
    --sentry-ca-certs=/path/to/certs
```

#### Django Sentry Configuration

Configure Sentry in Django settings for automatic integration:

```python
# settings.py
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.rq import RqIntegration
from sentry_sdk.integrations.redis import RedisIntegration

sentry_sdk.init(
    dsn="https://key@sentry.io/project",
    integrations=[
        DjangoIntegration(),
        RqIntegration(),
        RedisIntegration(),
    ],
    traces_sample_rate=1.0,
    send_default_pii=True
)
```
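
To confirm the integration is wired up, enqueue a job that raises and check that an event arrives in Sentry. A throwaway sketch; the module and function names are illustrative:

```python
# myapp/tasks.py (illustrative smoke test)
import django_rq

def failing_task():
    raise RuntimeError("Sentry integration smoke test")

# Enqueue on the default queue; the worker-side RqIntegration should report the error.
django_rq.enqueue(failing_task)
```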

### Statistics Collection

Comprehensive statistics collection for monitoring and analysis.

```python { .api }
from django_rq.utils import get_statistics, get_scheduler_statistics

def get_statistics(run_maintenance_tasks=False):
    """
    Get comprehensive queue and worker statistics.

    Note: This function is in the utils module and must be imported directly.

    Args:
        run_maintenance_tasks: Whether to run cleanup tasks

    Returns:
        dict: Statistics including:
        - queues: List of queue statistics
        - workers: Worker counts and details
        - jobs: Job counts by status
        - connections: Redis connection info
    """

def get_scheduler_statistics():
    """
    Get scheduler statistics across all Redis connections.

    Note: This function is in the utils module and must be imported directly.

    Returns:
        dict: Scheduler statistics including:
        - schedulers: Scheduler status by connection
        - scheduled_jobs: Count of scheduled jobs
    """
```

#### Statistics Format

Statistics are returned in a structured format:

```python
{
    "queues": [
        {
            "name": "default",
            "jobs": 10,
            "workers": 2,
            "finished_jobs": 100,
            "failed_jobs": 5,
            "started_jobs": 1,
            "deferred_jobs": 0,
            "scheduled_jobs": 3,
            "oldest_job_timestamp": "2024-01-01 12:00:00",
            "connection_kwargs": {...},
            "scheduler_pid": 1234
        }
    ],
    "schedulers": {
        "localhost:6379/0": {
            "count": 5,
            "index": 0
        }
    }
}
```
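
The same data can be consumed programmatically. A small sketch (the `queue_summary` helper is illustrative, not part of django-rq) that reports the per-queue fields shown above:

```python
from django_rq.utils import get_statistics

def queue_summary():
    # Print queue depth, failure count, and worker count for every configured queue.
    stats = get_statistics()
    for queue in stats['queues']:
        print(f"{queue['name']}: {queue['jobs']} queued, "
              f"{queue['failed_jobs']} failed, {queue['workers']} worker(s)")
```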

### Logging Integration

Configure RQ logging to integrate with Django's logging system.

```python { .api }
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'rq_console': {
            'format': '%(asctime)s %(message)s',
            'datefmt': '%H:%M:%S',
        },
    },
    'handlers': {
        'rq_console': {
            'level': 'DEBUG',
            'class': 'rq.logutils.ColorizingStreamHandler',
            'formatter': 'rq_console',
            'exclude': ['%(asctime)s'],
        },
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django-rq.log',
        },
    },
    'loggers': {
        'rq.worker': {
            'handlers': ['rq_console', 'file'],
            'level': 'DEBUG'
        },
    }
}
```

### Custom Exception Handlers

Configure custom exception handlers for specialized error handling.

```python { .api }
def get_exception_handlers():
    """
    Get custom exception handlers from settings.

    Returns:
        list: Exception handler functions
    """

# settings.py
RQ_EXCEPTION_HANDLERS = [
    'myapp.handlers.custom_exception_handler',
    'myapp.handlers.notification_handler',
]

# Custom handler example
import logging

logger = logging.getLogger(__name__)

def custom_exception_handler(job, exc_type, exc_value, traceback):
    """
    Custom exception handler for RQ jobs.

    Args:
        job: Failed job instance
        exc_type: Exception type
        exc_value: Exception instance
        traceback: Traceback object
    """
    # Custom error processing
    logger.error(f"Job {job.id} failed: {exc_value}")
    # Send notifications, update databases, etc.
```
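
RQ invokes the configured handlers one after another, and a handler can stop the chain by returning a falsy value. A sketch of the second handler referenced above; the `send_alert` helper is hypothetical:

```python
# myapp/handlers.py (sketch)
def notification_handler(job, exc_type, exc_value, traceback):
    """Notify on failures but let the remaining handlers run."""
    send_alert(f"RQ job {job.id} ({job.func_name}) failed: {exc_value}")  # hypothetical helper
    return True  # returning a falsy value would stop the handler chain
```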

### Health Checks

Implement health checks for monitoring system status.

```python
# Health check utilities
from django_rq.utils import get_statistics
from django_rq import get_connection

def rq_health_check():
    """
    Check RQ system health.

    Returns:
        dict: Health status information
    """
    try:
        # Check Redis connectivity
        conn = get_connection('default')
        conn.ping()

        # Get queue statistics
        stats = get_statistics()

        # Check for stuck jobs
        stuck_jobs = check_stuck_jobs()

        return {
            'status': 'healthy',
            'redis_connected': True,
            'active_workers': sum(q['workers'] for q in stats['queues']),
            'total_jobs': sum(q['jobs'] for q in stats['queues']),
            'stuck_jobs': stuck_jobs
        }
    except Exception as e:
        return {
            'status': 'unhealthy',
            'error': str(e)
        }

def check_stuck_jobs():
    """Check for jobs that may be stuck."""
    # Implementation to detect stuck jobs
    pass
```
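
One way to fill in the `check_stuck_jobs` stub is to scan a queue's `StartedJobRegistry` for jobs that have been running longer than a threshold. A sketch under that assumption (the one-hour cutoff is arbitrary, and this is not a django-rq API):

```python
from datetime import datetime, timezone

from django_rq import get_queue
from rq.job import Job
from rq.registry import StartedJobRegistry

STUCK_AFTER_SECONDS = 3600  # assumption: anything running longer than an hour counts as stuck

def check_stuck_jobs(queue_name='default'):
    """Return IDs of started jobs that have exceeded the runtime threshold."""
    queue = get_queue(queue_name)
    registry = StartedJobRegistry(queue=queue)
    now = datetime.now(timezone.utc)
    stuck = []
    for job in Job.fetch_many(registry.get_job_ids(), connection=queue.connection):
        if job is None or job.started_at is None:
            continue  # job finished in the meantime or has no recorded start time
        started = job.started_at
        if started.tzinfo is None:  # older RQ releases store naive UTC timestamps
            started = started.replace(tzinfo=timezone.utc)
        if (now - started).total_seconds() > STUCK_AFTER_SECONDS:
            stuck.append(job.id)
    return stuck
```

Exposing `rq_health_check()` through a small `JsonResponse` view then gives load balancers and uptime monitors an HTTP endpoint to poll.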

### Performance Monitoring

Monitor RQ performance metrics and system resources.

```python
# Performance monitoring utilities
def monitor_queue_performance():
    """
    Monitor queue processing performance.

    Metrics:
    - Job processing rate
    - Average job duration
    - Queue depth trends
    - Worker utilization
    """
    pass

def monitor_redis_performance():
    """
    Monitor Redis performance for RQ.

    Metrics:
    - Memory usage
    - Connection count
    - Command latency
    - Key expiration rates
    """
    pass
```
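
The Redis-side numbers can be sampled through the queue connection's `INFO` command. A minimal sketch of `monitor_redis_performance`, using fields Redis reports by default:

```python
from django_rq import get_connection

def monitor_redis_performance(connection_name='default'):
    """Sample a few Redis server statistics relevant to RQ (sketch)."""
    info = get_connection(connection_name).info()
    return {
        'used_memory_bytes': info.get('used_memory'),
        'connected_clients': info.get('connected_clients'),
        'total_commands_processed': info.get('total_commands_processed'),
        'expired_keys': info.get('expired_keys'),
    }
```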

### Integration Patterns

Common integration patterns for monitoring and observability.

#### Grafana Dashboard

Create Grafana dashboards using Prometheus metrics:

```json
{
  "dashboard": {
    "title": "Django-RQ Monitoring",
    "panels": [
      {
        "title": "Queue Length",
        "type": "graph",
        "targets": [
          {
            "expr": "rq_jobs{status=\"queued\"}"
          }
        ]
      },
      {
        "title": "Worker Count",
        "type": "stat",
        "targets": [
          {
            "expr": "sum(rq_workers)"
          }
        ]
      }
    ]
  }
}
```

#### Alerting Rules

Configure alerting based on RQ metrics:

```yaml
# alerting.yml
groups:
  - name: django_rq
    rules:
      - alert: RQHighQueueDepth
        expr: rq_jobs{status="queued"} > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "RQ queue depth is high"

      - alert: RQNoWorkers
        expr: sum(rq_workers) == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "No RQ workers are running"
```

#### Custom Metrics

Export custom metrics for specific monitoring needs:

```python
from prometheus_client import Counter, Histogram, Gauge

# Custom metrics
job_duration = Histogram('rq_job_duration_seconds', 'Job execution time')
custom_jobs = Counter('rq_custom_jobs_total', 'Custom job counter')
queue_age = Gauge('rq_queue_age_seconds', 'Age of oldest job in queue')

# Use in job functions
@job_duration.time()
def monitored_job():
    custom_jobs.inc()
    # Job implementation
```
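
Counters incremented inside job functions are updated in the worker process, so a metrics endpoint served from a separate web process will not reflect them. One way to expose them, assuming your deployment lets each worker open a port, is a small exporter started from worker startup code:

```python
from prometheus_client import start_http_server

# Serve prometheus_client's default registry (where the metrics above were
# registered) on a worker-local port; the port number is an assumption.
start_http_server(9100)
```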

## Configuration

Configure monitoring and integration features:

```python
# settings.py
# API token for metrics access
RQ_API_TOKEN = 'your-secure-token'

# Exception handlers
RQ_EXCEPTION_HANDLERS = [
    'myapp.handlers.sentry_handler',
    'myapp.handlers.custom_handler',
]

# Enable admin link
RQ_SHOW_ADMIN_LINK = True

# Prometheus collector (automatic if prometheus_client is installed)
# pip install django-rq[prometheus]
```

### Template Tags

Django template tags for displaying job information in templates.

```python { .api }
from django import template

register = template.Library()

@register.filter
def to_localtime(time):
    """
    Convert UTC datetime to local timezone.

    Args:
        time: UTC datetime object

    Returns:
        datetime: Localized datetime
    """

@register.filter
def show_func_name(job):
    """
    Safely display job function name.

    Args:
        job: RQ Job instance

    Returns:
        str: Function name or error representation
    """

@register.filter
def force_escape(text):
    """
    HTML escape text content.

    Args:
        text: Text to escape

    Returns:
        str: HTML-escaped text
    """

@register.filter
def items(dictionary):
    """
    Access dictionary items in templates.

    Args:
        dictionary: Dictionary object

    Returns:
        dict_items: Dictionary items
    """
```

Usage in templates:

```html
{% load django_rq %}

<!-- Display job function name safely -->
{{ job|show_func_name }}

<!-- Convert UTC time to local -->
{{ job.created_at|to_localtime }}

<!-- Escape user content -->
{{ user_input|force_escape }}

<!-- Iterate dictionary items -->
{% for key, value in stats|items %}
    <p>{{ key }}: {{ value }}</p>
{% endfor %}
```