# Metrics Export

The metrics exporter sends performance counters and custom metrics to Azure Monitor. In addition to custom metric tracking, it provides built-in standard metrics for CPU, memory, and request performance, enabling comprehensive system monitoring.

## Capabilities

### Metrics Exporter

Core metrics exporter that sends OpenCensus metrics to Azure Monitor as performance counters and custom metrics.
```python { .api }
class MetricsExporter(TransportMixin, ProcessorMixin):
    """
    Metrics exporter for Microsoft Azure Monitor.

    Exports OpenCensus metrics as Azure Monitor performance counters,
    supporting both custom metrics and standard system metrics.
    """

    def __init__(self, is_stats=False, **options):
        """
        Initialize the metrics exporter.

        Args:
            is_stats (bool): Whether this is a statsbeat exporter (internal use)
            **options: Configuration options including connection_string,
                instrumentation_key, export_interval, etc.
        """

    def export_metrics(self, metrics):
        """
        Export a batch of metrics to Azure Monitor.

        Args:
            metrics (list): List of Metric objects to export
        """

    def metric_to_envelopes(self, metric):
        """
        Convert a metric to Azure Monitor telemetry envelopes.

        Args:
            metric (Metric): OpenCensus metric object

        Returns:
            list: List of Azure Monitor metric envelopes
        """

    def shutdown(self):
        """
        Shutdown the exporter and clean up resources.

        Stops background threads and flushes any pending metrics.
        """

    def add_telemetry_processor(self, processor):
        """
        Add a telemetry processor for filtering/modifying telemetry.

        Args:
            processor (callable): Function that takes and returns an envelope
        """
```
### Metrics Exporter Factory

Convenient factory function that creates a fully configured metrics exporter with background collection and standard metrics.
```python { .api }
def new_metrics_exporter(**options):
    """
    Create a new metrics exporter with background collection thread.

    This factory function creates a MetricsExporter instance and configures
    it with a background thread for automatic metric collection and export.
    Standard system metrics are enabled by default.

    Args:
        **options: Configuration options passed to MetricsExporter

    Returns:
        MetricsExporter: Configured exporter with active background thread
    """
```
#### Basic Usage Example

```python
from opencensus.ext.azure.metrics_exporter import new_metrics_exporter

# Create exporter with standard metrics enabled
exporter = new_metrics_exporter(
    connection_string="InstrumentationKey=your-instrumentation-key",
    export_interval=30.0,  # Export every 30 seconds
    enable_standard_metrics=True
)

# Standard metrics are automatically collected and exported.
# No additional code is needed for CPU, memory, or request metrics.
```
#### Custom Metrics Example

```python
import time

from opencensus.ext.azure.metrics_exporter import new_metrics_exporter
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

# Create exporter
exporter = new_metrics_exporter(
    connection_string="InstrumentationKey=your-instrumentation-key"
)

# Define custom measures
request_count_measure = measure_module.MeasureInt(
    "request_count", "Number of requests", "1")

request_latency_measure = measure_module.MeasureFloat(
    "request_latency", "Request latency", "ms")

# Define views (how metrics are aggregated)
request_count_view = view_module.View(
    "request_count_view",
    "Number of requests by endpoint",
    ["endpoint", "method"],
    request_count_measure,
    aggregation_module.CountAggregation()
)

request_latency_view = view_module.View(
    "request_latency_view",
    "Request latency distribution",
    ["endpoint"],
    request_latency_measure,
    aggregation_module.DistributionAggregation([10, 50, 100, 500, 1000])
)

# Register views
stats_recorder = stats_module.stats.stats_recorder
view_manager = stats_module.stats.view_manager
view_manager.register_view(request_count_view)
view_manager.register_view(request_latency_view)

# Record metrics in your application
def handle_request(endpoint, method):
    # Create tag map for dimensions
    tag_map = tag_map_module.TagMap()
    tag_map.insert("endpoint", endpoint)
    tag_map.insert("method", method)

    # Record request count
    stats_recorder.new_measurement_map().measure_int_put(
        request_count_measure, 1).record(tag_map)

    # Measure request latency
    start_time = time.time()
    # ... handle request ...
    latency = (time.time() - start_time) * 1000

    tag_map_latency = tag_map_module.TagMap()
    tag_map_latency.insert("endpoint", endpoint)
    stats_recorder.new_measurement_map().measure_float_put(
        request_latency_measure, latency).record(tag_map_latency)
```
## Standard Metrics

When `enable_standard_metrics=True`, the following system metrics are automatically collected:

### Azure Standard Metrics Producer
```python { .api }
class AzureStandardMetricsProducer(MetricProducer):
    """
    Producer for Azure standard metrics.

    Automatically collects standard system performance metrics
    including CPU usage, memory consumption, and request statistics.
    """

    def get_metrics(self):
        """
        Get current standard metrics.

        Returns:
            list: List of standard metric objects
        """


def register_metrics():
    """
    Register all standard metrics with OpenCensus.

    Returns:
        Registry: Registry instance with standard metrics registered
    """
```
### Available Standard Metrics

```python { .api }
class ProcessorTimeMetric:
    """Processor time percentage metric."""

class RequestsAvgExecutionMetric:
    """Average request execution time metric."""

class RequestsRateMetric:
    """Request rate (requests per second) metric."""

class AvailableMemoryMetric:
    """Available system memory metric."""

class ProcessCPUMetric:
    """Process CPU usage percentage metric."""

class ProcessMemoryMetric:
    """Process memory usage metric."""
```
These metrics are automatically collected at the configured export interval and provide:

- **\Processor(_Total)\% Processor Time**: Overall CPU usage percentage
- **\Memory\Available Bytes**: Available system memory in bytes
- **\Process(python)\% Processor Time**: Python process CPU usage
- **\Process(python)\Private Bytes**: Python process memory usage
- **\ASP.NET Applications(__Total__)\Requests/Sec**: Request rate
- **\ASP.NET Applications(__Total__)\Request Execution Time**: Average request duration
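Each counter above is ultimately a point-in-time reading taken from the operating system. As a rough, stdlib-only illustration of the kind of value behind a process-memory counter such as `\Process(python)\Private Bytes` (this is a sketch of the concept, not the library's actual collection code, which relies on platform-specific APIs):

```python
import resource  # Unix-only stdlib module

def process_memory_bytes():
    # Peak resident set size of the current process. On Linux,
    # ru_maxrss is reported in kibibytes; scale to bytes so the
    # result has the same shape as a bytes-valued memory gauge.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
```

A producer's `get_metrics()` would wrap readings like this into metric objects at each export interval.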
## Configuration Options

The metrics exporter supports these options in addition to the common exporter options:

- `enable_standard_metrics` (bool): Enable automatic standard metrics collection (default: True)
- `is_stats` (bool): Internal flag for statsbeat metrics (default: False)
- The export interval from the common options controls both custom and standard metrics
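A sketch of how such keyword options typically resolve against their defaults (the `export_interval` value below is an assumed default for illustration, not taken from the library):

```python
def resolve_options(**options):
    # Defaults per the list above; export_interval is an assumed
    # illustrative value, not the library's documented default.
    defaults = {
        "enable_standard_metrics": True,
        "is_stats": False,
        "export_interval": 15.0,
    }
    merged = dict(defaults)
    merged.update(options)  # user-supplied options win
    return merged
```

Any option not overridden keeps its default, so `new_metrics_exporter(enable_standard_metrics=False)` changes only that flag.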
## Metric Types and Aggregations

The exporter supports these OpenCensus metric types:

- **Count**: Simple counters that only increase
- **Gauge**: Point-in-time values that can increase or decrease
- **Distribution**: Histogram distributions with configurable buckets (see note below)
- **LastValue**: Most recent value recorded

Note: Histogram/Distribution aggregations are not currently supported and will be skipped at export time, even though distribution views can still be registered.
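The behavioral difference between the Count and LastValue aggregation kinds can be sketched in plain Python (illustrative stand-ins, not the actual OpenCensus point classes):

```python
class CountPoint:
    """Counter that only increases (Count semantics)."""

    def __init__(self):
        self.value = 0

    def add(self, n=1):
        if n < 0:
            raise ValueError("counters only increase")
        self.value += n


class LastValuePoint:
    """Keeps only the most recent recording (LastValue semantics)."""

    def __init__(self):
        self.value = None

    def record(self, v):
        # Overwrites any previous value; history is not retained.
        self.value = v
```

A Count of three recordings reports their total, while a LastValue of the same recordings reports only the final one.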
## Performance Considerations

- **Background Collection**: Standard metrics run in a separate thread
- **Batching**: Multiple metrics are sent in a single HTTP request
- **Efficient Sampling**: Standard metrics use efficient system APIs
- **Configurable Intervals**: Balance freshness against overhead
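The batching point above can be sketched as simple chunking of pending envelopes (the `max_batch_size` default of 100 here is illustrative; the real limit comes from the exporter's common options):

```python
def batches(envelopes, max_batch_size=100):
    # Yield fixed-size chunks so each HTTP request carries at most
    # max_batch_size envelopes; the final chunk may be smaller.
    for i in range(0, len(envelopes), max_batch_size):
        yield envelopes[i:i + max_batch_size]
```

Sending chunks rather than one request per metric amortizes connection and serialization overhead in high-volume scenarios.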
## Advanced Usage

### Custom Telemetry Processors

```python
def custom_metric_processor(envelope):
    """Filter out noisy metrics or add custom properties."""
    metrics = envelope.data.baseData.metrics
    if metrics and metrics[0].name == "noisy_metric":
        return None  # Drop this metric

    # Add custom properties
    envelope.data.baseData.properties["environment"] = "production"
    return envelope

exporter = new_metrics_exporter(
    connection_string="InstrumentationKey=your-key-here"
)
exporter.add_telemetry_processor(custom_metric_processor)
```
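Conceptually, the exporter runs each registered processor over every envelope before transmission. A minimal sketch of that chain, following the drop-on-`None` convention used in the example above (the library's exact processor semantics may differ):

```python
def apply_processors(envelope, processors):
    # Run processors in registration order; a processor returning
    # None drops the envelope, short-circuiting the rest of the chain.
    for processor in processors:
        envelope = processor(envelope)
        if envelope is None:
            return None
    return envelope
```

Because later processors see the output of earlier ones, enrichment processors are usually registered after filters so dropped envelopes are never enriched.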
### Integration with Application Frameworks

```python
# Flask integration example (reuses the measures, views, and
# stats_recorder defined in the custom metrics example above)
import time

from flask import Flask, g, request

from opencensus.ext.azure.metrics_exporter import new_metrics_exporter
from opencensus.tags import tag_map as tag_map_module

app = Flask(__name__)

# Set up metrics
exporter = new_metrics_exporter(
    connection_string="InstrumentationKey=your-key-here"
)

@app.before_request
def before_request():
    g.start_time = time.time()

@app.after_request
def after_request(response):
    # Record request metrics
    duration = (time.time() - g.start_time) * 1000

    tag_map = tag_map_module.TagMap()
    tag_map.insert("endpoint", request.endpoint or "unknown")
    tag_map.insert("method", request.method)
    tag_map.insert("status_code", str(response.status_code))

    # Record count and latency
    stats_recorder.new_measurement_map().measure_int_put(
        request_count_measure, 1).record(tag_map)
    stats_recorder.new_measurement_map().measure_float_put(
        request_latency_measure, duration).record(tag_map)

    return response
```
### Graceful Shutdown

```python
import atexit

exporter = new_metrics_exporter(
    connection_string="InstrumentationKey=your-key-here"
)

# Ensure clean shutdown
def cleanup():
    exporter.shutdown()

atexit.register(cleanup)
```
## Monitoring and Troubleshooting

- **Local Storage**: Enable for reliability during network issues
- **Batch Size**: Adjust `max_batch_size` for high-volume scenarios
- **Export Interval**: Balance real-time visibility against performance
- **Telemetry Processors**: Use for filtering, enrichment, and debugging