
# DogStatsD Client

High-performance StatsD client for submitting metrics, events, and service checks to DogStatsD. Supports multiple transport protocols (UDP, Unix Domain Sockets), buffering, aggregation, and automatic telemetry injection for efficient real-time monitoring.

## Capabilities

### Metrics Submission

Submit various metric types with tags, sampling, and timing information for comprehensive application monitoring.

```python { .api }
class DogStatsd:
    def __init__(
        self,
        host="localhost",
        port=8125,
        max_buffer_size=None,
        namespace=None,
        constant_tags=None,
        use_ms=False,
        use_default_route=False,
        socket_path=None,
        default_sample_rate=1,
        disable_telemetry=False,
        telemetry_min_flush_interval=10,
        max_buffer_len=0,
        container_id=None,
        origin_detection_enabled=True,
        cardinality=None
    ):
        """
        Initialize DogStatsD client.

        Parameters:
        - host (str): StatsD server hostname
        - port (int): StatsD server port
        - max_buffer_size (int): Maximum UDP packet size in bytes
        - namespace (str): Prefix for all metric names
        - constant_tags (list): Tags applied to all metrics
        - use_ms (bool): Use milliseconds for timing metrics
        - use_default_route (bool): Dynamically set host to default route
        - socket_path (str): Unix domain socket path (overrides host/port)
        - default_sample_rate (float): Default sampling rate (0.0-1.0)
        - disable_telemetry (bool): Disable client telemetry
        - telemetry_min_flush_interval (int): Minimum telemetry flush interval
        - max_buffer_len (int): Maximum buffer length before flush
        - container_id (str): Container ID for origin detection
        - origin_detection_enabled (bool): Enable origin detection
        - cardinality (str): Cardinality level for metrics
        """

    def gauge(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit gauge metric (current value).

        Parameters:
        - metric (str): Metric name
        - value (float): Current value
        - tags (list): List of tags in "key:value" format
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def gauge_with_timestamp(self, metric, value, timestamp, tags=None, sample_rate=None, cardinality=None):
        """
        Submit gauge metric with explicit timestamp.

        Parameters:
        - metric (str): Metric name
        - value (float): Current value
        - timestamp (int): Unix timestamp in seconds
        - tags (list): List of tags in "key:value" format
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def increment(self, metric, value=1, tags=None, sample_rate=None, cardinality=None):
        """
        Increment counter metric.

        Parameters:
        - metric (str): Metric name
        - value (int): Increment amount (default: 1)
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def decrement(self, metric, value=1, tags=None, sample_rate=None, cardinality=None):
        """
        Decrement counter metric.

        Parameters:
        - metric (str): Metric name
        - value (int): Decrement amount (default: 1)
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def count(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit count metric (aggregated over flush interval).

        Parameters:
        - metric (str): Metric name
        - value (int): Count value
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def histogram(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit histogram metric for statistical analysis.

        Parameters:
        - metric (str): Metric name
        - value (float): Value to add to histogram
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def distribution(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit distribution metric for global statistical analysis.

        Parameters:
        - metric (str): Metric name
        - value (float): Value to add to distribution
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def timing(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit timing metric in milliseconds.

        Parameters:
        - metric (str): Metric name
        - value (float): Time duration in milliseconds
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """

    def set(self, metric, value, tags=None, sample_rate=None, cardinality=None):
        """
        Submit set metric (counts unique values).

        Parameters:
        - metric (str): Metric name
        - value (str): Unique value to count
        - tags (list): List of tags
        - sample_rate (float): Sampling rate (0.0-1.0, default: None)
        - cardinality (str): Cardinality level override
        """
```
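Under the hood, each of these calls serializes to a plain-text StatsD datagram. The helper below is a hypothetical sketch of that format, shown only for intuition (`format_metric` is not part of the library; see Datadog's datagram reference for the authoritative shape):

```python
# Hypothetical helper illustrating the approximate DogStatsD datagram
# a metric call produces: "name:value|type|@sample_rate|#tag1,tag2".
def format_metric(name, value, mtype, sample_rate=None, tags=None, namespace=None):
    full_name = f"{namespace}.{name}" if namespace else name  # namespace prefix
    line = f"{full_name}:{value}|{mtype}"
    if sample_rate is not None and sample_rate < 1:
        line += f"|@{sample_rate}"  # lets the server scale sampled values back up
    if tags:
        line += "|#" + ",".join(tags)  # tags are comma-joined after '#'
    return line

# e.g. statsd.gauge('system.cpu.usage', 75.5, tags=['host:web01']) sends roughly:
print(format_metric("system.cpu.usage", 75.5, "g", tags=["host:web01"]))
```

The metric-type codes are `g` (gauge), `c` (count), `h` (histogram), `d` (distribution), `ms` (timing), and `s` (set).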

### Events and Service Checks

Submit custom events and service health status for monitoring application state and significant occurrences.

```python { .api }
class DogStatsd:
    def event(
        self,
        title,
        text,
        alert_type="info",
        aggregation_key=None,
        source_type_name=None,
        date_happened=None,
        priority="normal",
        tags=None,
        hostname=None
    ):
        """
        Submit custom event.

        Parameters:
        - title (str): Event title
        - text (str): Event description
        - alert_type (str): 'error', 'warning', 'info', or 'success'
        - aggregation_key (str): Key for grouping related events
        - source_type_name (str): Source type (e.g., 'my_app')
        - date_happened (int): Unix timestamp when event occurred
        - priority (str): 'normal' or 'low'
        - tags (list): List of tags
        - hostname (str): Host name for the event
        """

    def service_check(
        self,
        check_name,
        status,
        tags=None,
        timestamp=None,
        hostname=None,
        message=None
    ):
        """
        Submit service check status.

        Parameters:
        - check_name (str): Name of the service check
        - status (int): Check status (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN)
        - tags (list): List of tags
        - timestamp (int): Unix timestamp for the check time
        - hostname (str): Host name for the check
        - message (str): Additional message for the check
        """
```
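Events and service checks travel over the same socket as metrics, just with different datagram prefixes (`_e` and `_sc`). A simplified, hypothetical sketch of those formats, shown for intuition only (the real encoder also handles timestamps, hostnames, priority, and text escaping):

```python
def format_event(title, text, alert_type=None, tags=None):
    # Event datagram: "_e{<title_len>,<text_len>}:<title>|<text>|t:<type>|#<tags>"
    msg = f"_e{{{len(title)},{len(text)}}}:{title}|{text}"
    if alert_type:
        msg += f"|t:{alert_type}"
    if tags:
        msg += "|#" + ",".join(tags)
    return msg

def format_service_check(name, status, message=None):
    # Service check datagram: "_sc|<name>|<status>|m:<message>"
    msg = f"_sc|{name}|{status}"
    if message:
        msg += f"|m:{message}"
    return msg
```

For example, `format_service_check('database.connection', 0, message='ok')` shows how the integer status (0=OK) is carried alongside the check name.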

### Decorators and Context Managers

Use decorators and context managers for automatic timing and distribution measurement.

```python { .api }
class DogStatsd:
    def timed(self, metric=None, tags=None, sample_rate=1, use_ms=None):
        """
        Timing decorator for measuring function execution time.
        Can also be used as a context manager to time a code block.

        Parameters:
        - metric (str): Metric name (defaults to function name)
        - tags (list): List of tags
        - sample_rate (float): Sampling rate
        - use_ms (bool): Use milliseconds (overrides client setting)

        Returns:
        Decorator function

        Usage:
        @statsd.timed('my_function.duration')
        def my_function():
            pass
        """

    def distributed(self, metric=None, tags=None, sample_rate=1):
        """
        Distribution decorator for measuring function execution time.

        Parameters:
        - metric (str): Metric name (defaults to function name)
        - tags (list): List of tags
        - sample_rate (float): Sampling rate

        Returns:
        Decorator function

        Usage:
        @statsd.distributed('my_function.duration')
        def my_function():
            pass
        """
```
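A minimal sketch of what a timing decorator like `timed` does internally: wrap the function, measure wall-clock time, and submit the elapsed milliseconds even when the call raises. This is illustrative only (`timed_sketch` is hypothetical; the real implementation also handles sampling, context-manager use, and coroutines):

```python
import functools
import time

def timed_sketch(submit, metric):
    """Build a decorator reporting a function's duration (ms) to
    submit(metric, elapsed_ms) after every call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # finally: the sample is submitted even if fn raises
                submit(metric, (time.perf_counter() - start) * 1000)
        return wrapper
    return decorator

# Collect timings locally instead of sending them to a server
samples = []

@timed_sketch(lambda m, v: samples.append((m, v)), 'my_function.duration')
def my_function():
    return 42
```

Calling `my_function()` returns 42 as usual and, as a side effect, appends one `('my_function.duration', elapsed_ms)` sample.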

### Connection and Buffer Management

Control client behavior, buffer management, and connection handling for optimal performance.

```python { .api }
class DogStatsd:
    def flush(self):
        """
        Manually flush buffered metrics to StatsD server.
        """

    def close_socket(self):
        """
        Close the UDP socket connection.
        """

    def enable_aggregation(self, flush_interval=0.3, max_samples_per_context=0):
        """
        Enable client-side metric aggregation.

        Parameters:
        - flush_interval (float): Aggregation flush interval in seconds
        - max_samples_per_context (int): Max samples per metric context
        """

    def disable_aggregation(self):
        """
        Disable client-side metric aggregation.
        """

    def enable_background_sender(self):
        """
        Enable background thread for metric sending.
        """

    def disable_background_sender(self):
        """
        Disable background thread sending.
        """

    def wait_for_pending(self):
        """
        Wait for all pending metrics to be sent.
        """
```
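To build intuition for `max_buffer_len` and `flush()`: buffering batches several metric lines into one newline-separated payload, sending when the buffer fills or when flushed explicitly. The class below is a hypothetical sketch of that behavior, not the library's implementation (which also caps payloads by byte size):

```python
class BufferedSenderSketch:
    """Illustrative only: batch metric lines into newline-joined payloads."""

    def __init__(self, transport, max_buffer_len=25):
        self.transport = transport          # callable taking one payload string
        self.max_buffer_len = max_buffer_len
        self.buffer = []

    def send(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.max_buffer_len:
            self.flush()                    # automatic flush when full

    def flush(self):
        if self.buffer:
            # One datagram carries many metrics, cutting syscall overhead
            self.transport("\n".join(self.buffer))
            self.buffer = []

packets = []
sender = BufferedSenderSketch(packets.append, max_buffer_len=2)
sender.send("a:1|c")
sender.send("b:2|c")  # buffer full -> flushed automatically
sender.send("c:3|c")
sender.flush()        # explicit flush for the remainder
```

After this runs, `packets` holds two payloads: the auto-flushed pair and the explicitly flushed remainder.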

### Global StatsD Instance

Pre-configured global instance for immediate use without initialization.

```python { .api }
# Global statsd instance
statsd = DogStatsd()

# Use directly without initialization
statsd.increment('web.requests')
statsd.gauge('system.memory.usage', 75.2)
statsd.timing('db.query.time', 142)
```

## Usage Examples

### Basic Metrics Submission

```python
from datadog import initialize, statsd

# Initialize Datadog (configures global statsd instance)
initialize(
    statsd_host='localhost',
    statsd_port=8125,
    statsd_constant_tags=['env:production', 'service:web']
)

# Submit various metrics
statsd.increment('web.requests', tags=['endpoint:/api/users'])
statsd.gauge('system.cpu.usage', 75.5, tags=['host:web01'])
statsd.histogram('response.time', 245.7, tags=['endpoint:/api/users'])
statsd.timing('db.query.duration', 89, tags=['table:users', 'operation:select'])

# Submit event
statsd.event(
    'Deployment completed',
    'Version 1.2.3 deployed successfully',
    alert_type='success',
    tags=['version:1.2.3', 'env:production']
)

# Submit service check
statsd.service_check(
    'database.connection',
    0,  # OK status
    tags=['db:postgresql', 'host:db01'],
    message='Database connection healthy'
)
```

### Custom Client Configuration

```python
from datadog.dogstatsd import DogStatsd

# Create custom client with specific configuration
custom_statsd = DogStatsd(
    host='statsd.internal.com',
    port=8125,
    namespace='myapp',
    constant_tags=['service:api', 'version:1.0'],
    max_buffer_size=1024,
    default_sample_rate=0.1,  # Sample 10% of metrics
    use_ms=True  # Use milliseconds for timing
)

# All metrics will be prefixed with 'myapp.' and include constant tags
custom_statsd.increment('requests.count')  # Sends: myapp.requests.count
custom_statsd.timing('request.duration', 250)  # Value in milliseconds
```

### Using Decorators for Automatic Timing

```python
import time

from datadog import statsd

# Time function execution automatically
@statsd.timed('function.process_data.duration', tags=['version:v2'])
def process_data(data):
    # Function implementation
    time.sleep(0.1)  # Simulated work
    return len(data)

# Use distribution for more detailed statistics
@statsd.distributed('function.calculate.time')
def expensive_calculation(x, y):
    # Complex calculation
    result = sum(i * x * y for i in range(1000))
    return result

# Function calls automatically submit timing metrics
result = process_data([1, 2, 3, 4, 5])
calc_result = expensive_calculation(10, 20)
```

### Context Manager for Code Block Timing

```python
import time

from datadog import statsd

# Time a code block
with statsd.timed('database.backup.duration', tags=['type:full']):
    # Backup operations
    time.sleep(5)  # Simulated backup time
    print("Backup completed")

# The timing metric is automatically submitted when exiting the context
```

### Advanced Configuration with Aggregation

```python
import random

from datadog.dogstatsd import DogStatsd

# Create client with buffering enabled
aggregated_statsd = DogStatsd(
    host='localhost',
    port=8125,
    max_buffer_len=50,  # Buffer up to 50 metrics
    disable_telemetry=False
)

# Enable aggregation for high-throughput scenarios
aggregated_statsd.enable_aggregation(
    flush_interval=1.0,  # Flush every second
    max_samples_per_context=100  # Max 100 samples per metric
)

# Submit many metrics rapidly - they'll be aggregated
for i in range(1000):
    aggregated_statsd.increment('high_volume.counter', tags=[f'iteration:{i % 10}'])
    aggregated_statsd.gauge('random.value', random.random() * 100)

# Manually flush if needed
aggregated_statsd.flush()

# Clean shutdown
aggregated_statsd.wait_for_pending()
aggregated_statsd.close_socket()
```

### Unix Domain Socket Configuration

```python
from datadog.dogstatsd import DogStatsd

# Use Unix Domain Socket for better performance
socket_statsd = DogStatsd(
    socket_path='/var/run/datadog/dsd.socket',
    constant_tags=['transport:uds']
)

# Submit metrics via UDS
socket_statsd.increment('uds.test.counter')
socket_statsd.gauge('uds.test.gauge', 42.0)
```
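The transport difference can be sketched with the standard library: UDP uses an `AF_INET` datagram socket routed through the IP stack, while UDS uses an `AF_UNIX` datagram socket backed by a local filesystem path. This is an illustrative sketch of the underlying sockets, not how the client itself is configured:

```python
import socket

# UDP transport: datagrams over the IP stack (fire-and-forget)
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# UDS transport: datagrams over a local filesystem socket, skipping
# the IP/UDP layers (requires the Agent to expose the socket path)
uds_sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)

udp_sock.close()
uds_sock.close()
```

One practical difference worth knowing: UDP sends silently succeed even with no listener, while UDS writes can surface errors when the Agent isn't listening on the socket path.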

### Error Handling and Reliability

```python
import logging

from datadog import statsd

# StatsD operations are fire-and-forget by design,
# but you can add error handling for critical metrics.

def get_health_score():
    # Placeholder for an application-specific health check
    return 100.0

def safe_metric_submit():
    try:
        statsd.increment('critical.business.metric')
        statsd.gauge('critical.system.health', get_health_score())
    except Exception as e:
        # Log error but don't block application
        logging.warning(f"Failed to submit metrics: {e}")

# Application continues regardless of metric submission success

# Handle sampling for high-volume metrics
def submit_high_volume_metric(value):
    # Only submit 1% of metrics to reduce load
    statsd.histogram('high_volume.metric', value, sample_rate=0.01)

# The sample_rate tells StatsD to scale the received values back up
# to estimate the true volume
```

## Best Practices

### Metric Naming Conventions

```python
# Good: Use hierarchical naming with dots
statsd.increment('web.requests.success')
statsd.increment('web.requests.error')
statsd.gauge('system.memory.usage', 75.2)
statsd.timing('database.query.users.select', 89)

# Avoid: Inconsistent or flat naming
statsd.increment('web_success')  # Inconsistent separator
statsd.increment('requests')  # Too generic
```

### Effective Tagging Strategy

```python
# Good: Use tags for dimensions, not metric names
statsd.increment('web.requests', tags=[
    'endpoint:/api/users',
    'method:GET',
    'status:200',
    'region:us-east-1'
])

# Avoid: Encoding dimensions in metric names
statsd.increment('web.requests.api.users.GET.200.us_east_1')  # Creates many metrics
```

### Sampling for High-Volume Metrics

```python
# (duration and score stand for values measured by your application)

# Use sampling for metrics that fire very frequently
statsd.increment('trace.span.created', sample_rate=0.1)  # Sample 10%
statsd.timing('cache.access.time', duration, sample_rate=0.05)  # Sample 5%

# Don't sample critical business metrics
statsd.increment('payment.processed')  # Always submit (sample_rate=1.0)
statsd.gauge('service.health.score', score)  # Always submit
```
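The mechanics of `sample_rate` can be sketched as follows: the client randomly drops a fraction of datagrams and annotates the rest with `|@rate`, so the server can scale counts back up. The helper below is hypothetical, shown only for intuition (the `rng` parameter just makes the random decision injectable):

```python
import random

def maybe_format(metric, value, sample_rate=1.0, rng=random.random):
    """Apply client-side sampling: with sample_rate=0.1, roughly 90% of
    calls return None (dropped) and the rest carry the '|@0.1' marker."""
    if sample_rate < 1 and rng() >= sample_rate:
        return None  # dropped before it ever reaches the network
    line = f"{metric}:{value}|c"
    if sample_rate < 1:
        # the server multiplies sampled counts by 1/sample_rate
        line += f"|@{sample_rate}"
    return line
```

So a counter sampled at 0.1 sends about one datagram per ten increments, and each surviving datagram is counted ten times server-side, keeping the reported totals approximately correct.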