# Metrics and Analytics

Comprehensive reporting and analytics for monitoring reCAPTCHA usage, effectiveness, performance metrics, score distributions, and challenge completion rates. These metrics help optimize reCAPTCHA configuration and measure protection effectiveness.

## Capabilities

### Get Metrics

Retrieves detailed metrics and analytics data for reCAPTCHA usage and performance for a given key, covering the reporting period returned by the service.

```python { .api }
def get_metrics(
    request: GetMetricsRequest = None,
    *,
    name: str = None,
    retry: Union[retries.Retry, gapic_v1.method._MethodDefault] = _MethodDefault._DEFAULT_VALUE,
    timeout: Union[float, object] = _MethodDefault._DEFAULT_VALUE,
    metadata: Sequence[Tuple[str, str]] = ()
) -> Metrics:
    """
    Get metrics for a specific key.

    Args:
        request: The request object for getting metrics
        name: Required. The metrics resource name in format
            'projects/{project}/keys/{key}/metrics'
        retry: Retry configuration for the request
        timeout: Timeout for the request in seconds
        metadata: Additional metadata for the request

    Returns:
        Metrics: Comprehensive metrics data including usage, scores, and challenges

    Raises:
        google.api_core.exceptions.NotFound: If the key doesn't exist
        google.api_core.exceptions.PermissionDenied: If insufficient permissions
        google.api_core.exceptions.InvalidArgument: If request parameters are invalid
    """
```

#### Usage Example

```python
from google.cloud import recaptchaenterprise

client = recaptchaenterprise.RecaptchaEnterpriseServiceClient()

# Get metrics for a specific key
request = recaptchaenterprise.GetMetricsRequest(
    name="projects/your-project-id/keys/your-key-id/metrics"
)

metrics = client.get_metrics(request=request)

print(f"Metrics for key: {request.name}")
print(f"Score metric entries: {len(metrics.score_metrics) if metrics.score_metrics else 0}")

# Display score distribution
if metrics.score_metrics:
    for score_metric in metrics.score_metrics:
        for bucket in score_metric.overall_metrics.score_buckets:
            print(f"Score range {bucket.lower_bound}-{bucket.upper_bound}: "
                  f"{bucket.count} assessments")
```

## Request and Response Types

### GetMetricsRequest

```python { .api }
class GetMetricsRequest:
    """Request message for retrieving metrics."""
    name: str  # Required. Metrics resource name in format
               # 'projects/{project}/keys/{key}/metrics'
```

### Metrics

```python { .api }
class Metrics:
    """Comprehensive metrics data for reCAPTCHA usage."""
    name: str  # Output only. Resource name
    start_time: Timestamp  # Start time of the metrics period
    score_metrics: List[ScoreMetrics]  # Score-related metrics
    challenge_metrics: List[ChallengeMetrics]  # Challenge-related metrics
```

### Score Metrics

```python { .api }
class ScoreMetrics:
    """Metrics related to reCAPTCHA scores."""
    overall_metrics: ScoreDistribution  # Overall score distribution
    action_metrics: Dict[str, ScoreDistribution]  # Per-action score distributions

class ScoreDistribution:
    """Distribution of reCAPTCHA scores."""
    score_buckets: List[ScoreBucket]  # Score ranges and counts

class ScoreBucket:
    """A bucket representing a score range."""
    lower_bound: float  # Lower bound of score range (inclusive)
    upper_bound: float  # Upper bound of score range (exclusive)
    count: int  # Number of assessments in this range
```

### Challenge Metrics

```python { .api }
class ChallengeMetrics:
    """Metrics related to reCAPTCHA challenges."""
    pageload_count: int  # Number of pageloads with challenges
    nocaptcha_count: int  # Number of successful no-challenge verifications
    failed_count: int  # Number of failed challenge attempts
    passed_count: int  # Number of successful challenge completions
```

## Usage Examples

### Basic Metrics Retrieval

```python
def get_key_metrics(client, project_id, key_id):
    """Get and display basic metrics for a key."""

    metrics_name = f"projects/{project_id}/keys/{key_id}/metrics"
    request = recaptchaenterprise.GetMetricsRequest(name=metrics_name)

    try:
        metrics = client.get_metrics(request=request)

        print(f"=== Metrics for Key {key_id} ===")
        print(f"Period: {metrics.start_time}")

        # Score metrics
        if metrics.score_metrics:
            print("\n--- Score Metrics ---")
            for score_metric in metrics.score_metrics:
                print("Overall score distribution:")
                display_score_distribution(score_metric.overall_metrics)

                if score_metric.action_metrics:
                    print("\nPer-action metrics:")
                    for action, distribution in score_metric.action_metrics.items():
                        print(f"  Action '{action}':")
                        display_score_distribution(distribution, indent="    ")

        # Challenge metrics
        if metrics.challenge_metrics:
            print("\n--- Challenge Metrics ---")
            for challenge_metric in metrics.challenge_metrics:
                total_challenges = (challenge_metric.pageload_count +
                                    challenge_metric.nocaptcha_count +
                                    challenge_metric.failed_count +
                                    challenge_metric.passed_count)

                print(f"Total challenge events: {total_challenges}")
                print(f"  Pageloads: {challenge_metric.pageload_count}")
                print(f"  No-challenge success: {challenge_metric.nocaptcha_count}")
                print(f"  Challenge passed: {challenge_metric.passed_count}")
                print(f"  Challenge failed: {challenge_metric.failed_count}")

                if challenge_metric.passed_count + challenge_metric.failed_count > 0:
                    success_rate = (challenge_metric.passed_count /
                                    (challenge_metric.passed_count + challenge_metric.failed_count)) * 100
                    print(f"  Challenge success rate: {success_rate:.1f}%")

        return metrics

    except Exception as e:
        print(f"Error retrieving metrics: {e}")
        return None


def display_score_distribution(distribution, indent=""):
    """Display score distribution in a readable format."""
    if not distribution.score_buckets:
        print(f"{indent}No score data available")
        return

    total_assessments = sum(bucket.count for bucket in distribution.score_buckets)
    print(f"{indent}Total assessments: {total_assessments}")

    for bucket in distribution.score_buckets:
        percentage = (bucket.count / total_assessments * 100) if total_assessments > 0 else 0
        print(f"{indent}  {bucket.lower_bound:.1f}-{bucket.upper_bound:.1f}: "
              f"{bucket.count} ({percentage:.1f}%)")


# Get metrics for a key
metrics = get_key_metrics(client, "your-project-id", "your-key-id")
```

### Metrics Analysis and Alerting

```python
def analyze_metrics_for_alerts(client, project_id, key_id):
    """Analyze metrics and generate alerts for unusual patterns."""

    metrics_name = f"projects/{project_id}/keys/{key_id}/metrics"
    request = recaptchaenterprise.GetMetricsRequest(name=metrics_name)

    try:
        metrics = client.get_metrics(request=request)
        alerts = []

        # Analyze score distribution
        if metrics.score_metrics:
            for score_metric in metrics.score_metrics:
                overall = score_metric.overall_metrics

                if overall.score_buckets:
                    # Calculate percentage of low scores (potential attacks)
                    total_assessments = sum(bucket.count for bucket in overall.score_buckets)
                    low_score_count = sum(bucket.count for bucket in overall.score_buckets
                                          if bucket.upper_bound <= 0.3)

                    if total_assessments > 0:
                        low_score_percentage = (low_score_count / total_assessments) * 100

                        if low_score_percentage > 20:  # Alert if >20% low scores
                            alerts.append({
                                'type': 'HIGH_SUSPICIOUS_ACTIVITY',
                                'message': f'{low_score_percentage:.1f}% of assessments have low scores (<0.3)',
                                'severity': 'HIGH' if low_score_percentage > 50 else 'MEDIUM'
                            })

                        # Check for unusual patterns
                        high_score_count = sum(bucket.count for bucket in overall.score_buckets
                                               if bucket.lower_bound >= 0.9)
                        high_score_percentage = (high_score_count / total_assessments) * 100

                        if high_score_percentage < 30:  # Alert if <30% high scores
                            alerts.append({
                                'type': 'LOW_LEGITIMATE_ACTIVITY',
                                'message': f'Only {high_score_percentage:.1f}% of assessments have high scores (>=0.9)',
                                'severity': 'MEDIUM'
                            })

        # Analyze challenge metrics
        if metrics.challenge_metrics:
            for challenge_metric in metrics.challenge_metrics:
                total_challenges = (challenge_metric.passed_count + challenge_metric.failed_count)

                if total_challenges > 0:
                    failure_rate = (challenge_metric.failed_count / total_challenges) * 100

                    if failure_rate > 50:  # Alert if >50% challenge failures
                        alerts.append({
                            'type': 'HIGH_CHALLENGE_FAILURE_RATE',
                            'message': f'Challenge failure rate is {failure_rate:.1f}%',
                            'severity': 'HIGH' if failure_rate > 80 else 'MEDIUM'
                        })

                # Check for unusual no-challenge rate
                total_events = (challenge_metric.pageload_count + challenge_metric.nocaptcha_count +
                                challenge_metric.failed_count + challenge_metric.passed_count)

                if total_events > 0:
                    nocaptcha_rate = (challenge_metric.nocaptcha_count / total_events) * 100

                    if nocaptcha_rate < 70:  # Alert if <70% no-challenge
                        alerts.append({
                            'type': 'LOW_NOCAPTCHA_RATE',
                            'message': f'No-challenge rate is only {nocaptcha_rate:.1f}%',
                            'severity': 'MEDIUM'
                        })

        # Report alerts
        if alerts:
            print(f"=== ALERTS for Key {key_id} ===")
            for alert in alerts:
                print(f"[{alert['severity']}] {alert['type']}: {alert['message']}")
        else:
            print(f"No alerts for key {key_id} - metrics look normal")

        return alerts

    except Exception as e:
        print(f"Error analyzing metrics: {e}")
        return []


# Analyze metrics for alerts
alerts = analyze_metrics_for_alerts(client, "your-project-id", "your-key-id")
```

### Multi-Key Metrics Comparison

```python
def compare_key_metrics(client, project_id, key_ids):
    """Compare metrics across multiple keys."""

    key_metrics = {}

    # Collect metrics for all keys
    for key_id in key_ids:
        metrics_name = f"projects/{project_id}/keys/{key_id}/metrics"
        request = recaptchaenterprise.GetMetricsRequest(name=metrics_name)

        try:
            metrics = client.get_metrics(request=request)
            key_metrics[key_id] = metrics
        except Exception as e:
            print(f"Error getting metrics for key {key_id}: {e}")
            key_metrics[key_id] = None

    # Compare key performance
    print("=== Key Performance Comparison ===")
    print(f"{'Key ID':<20} {'Total Assessments':<20} {'Avg Score':<12} {'Low Score %':<12}")
    print("-" * 70)

    for key_id, metrics in key_metrics.items():
        if not metrics or not metrics.score_metrics:
            print(f"{key_id:<20} {'No data':<20} {'N/A':<12} {'N/A':<12}")
            continue

        # Calculate statistics
        overall_metrics = metrics.score_metrics[0].overall_metrics
        total_assessments = sum(bucket.count for bucket in overall_metrics.score_buckets)

        # Calculate weighted average score
        total_weighted_score = sum(
            bucket.count * ((bucket.lower_bound + bucket.upper_bound) / 2)
            for bucket in overall_metrics.score_buckets
        )
        avg_score = total_weighted_score / total_assessments if total_assessments > 0 else 0

        # Calculate low score percentage
        low_score_count = sum(bucket.count for bucket in overall_metrics.score_buckets
                              if bucket.upper_bound <= 0.3)
        low_score_pct = (low_score_count / total_assessments * 100) if total_assessments > 0 else 0

        print(f"{key_id:<20} {total_assessments:<20} {avg_score:<12.2f} {low_score_pct:<12.1f}%")

    # Find best and worst performing keys
    valid_keys = {k: v for k, v in key_metrics.items() if v and v.score_metrics}

    if valid_keys:
        best_key = min(valid_keys.keys(), key=lambda k: calculate_low_score_percentage(valid_keys[k]))
        worst_key = max(valid_keys.keys(), key=lambda k: calculate_low_score_percentage(valid_keys[k]))

        print(f"\nBest performing key: {best_key}")
        print(f"Worst performing key: {worst_key}")


def calculate_low_score_percentage(metrics):
    """Calculate percentage of low scores for a metrics object."""
    if not metrics.score_metrics:
        return 100  # Assume worst case if no data

    overall_metrics = metrics.score_metrics[0].overall_metrics
    total_assessments = sum(bucket.count for bucket in overall_metrics.score_buckets)
    low_score_count = sum(bucket.count for bucket in overall_metrics.score_buckets
                          if bucket.upper_bound <= 0.3)

    return (low_score_count / total_assessments * 100) if total_assessments > 0 else 100


# Compare metrics across keys
key_ids = ["web-key", "android-key", "ios-key"]
compare_key_metrics(client, "your-project-id", key_ids)
```

### Metrics Export and Reporting

```python
import json
from datetime import datetime

def export_metrics_to_json(client, project_id, key_id, output_file=None):
    """Export metrics to JSON format for external analysis."""

    metrics_name = f"projects/{project_id}/keys/{key_id}/metrics"
    request = recaptchaenterprise.GetMetricsRequest(name=metrics_name)

    try:
        metrics = client.get_metrics(request=request)

        # Convert to serializable format
        metrics_data = {
            'key_id': key_id,
            'export_time': datetime.utcnow().isoformat(),
            'metrics_period_start': metrics.start_time.isoformat() if metrics.start_time else None,
            'score_metrics': [],
            'challenge_metrics': []
        }

        # Process score metrics
        if metrics.score_metrics:
            for score_metric in metrics.score_metrics:
                score_data = {
                    'overall_distribution': [
                        {
                            'lower_bound': bucket.lower_bound,
                            'upper_bound': bucket.upper_bound,
                            'count': bucket.count
                        }
                        for bucket in score_metric.overall_metrics.score_buckets
                    ],
                    'action_distributions': {}
                }

                # Process per-action metrics
                if score_metric.action_metrics:
                    for action, distribution in score_metric.action_metrics.items():
                        score_data['action_distributions'][action] = [
                            {
                                'lower_bound': bucket.lower_bound,
                                'upper_bound': bucket.upper_bound,
                                'count': bucket.count
                            }
                            for bucket in distribution.score_buckets
                        ]

                metrics_data['score_metrics'].append(score_data)

        # Process challenge metrics
        if metrics.challenge_metrics:
            for challenge_metric in metrics.challenge_metrics:
                challenge_data = {
                    'pageload_count': challenge_metric.pageload_count,
                    'nocaptcha_count': challenge_metric.nocaptcha_count,
                    'failed_count': challenge_metric.failed_count,
                    'passed_count': challenge_metric.passed_count
                }
                metrics_data['challenge_metrics'].append(challenge_data)

        # Write to file or return data
        if output_file:
            with open(output_file, 'w') as f:
                json.dump(metrics_data, f, indent=2)
            print(f"Metrics exported to {output_file}")

        return metrics_data

    except Exception as e:
        print(f"Error exporting metrics: {e}")
        return None


# Export metrics to JSON
metrics_data = export_metrics_to_json(
    client,
    "your-project-id",
    "your-key-id",
    "recaptcha_metrics.json"
)
```

### Automated Monitoring and Dashboards

```python
def create_metrics_summary(client, project_id, key_ids):
    """Create a summary dashboard of key metrics."""

    dashboard_data = {
        'generated_at': datetime.utcnow().isoformat(),
        'project_id': project_id,
        'summary': {
            'total_keys': len(key_ids),
            'keys_with_data': 0,
            'total_assessments': 0,
            'overall_avg_score': 0,
            'alerts': []
        },
        'key_details': {}
    }

    all_assessments = 0
    all_weighted_scores = 0

    for key_id in key_ids:
        metrics_name = f"projects/{project_id}/keys/{key_id}/metrics"
        request = recaptchaenterprise.GetMetricsRequest(name=metrics_name)

        try:
            metrics = client.get_metrics(request=request)

            if metrics.score_metrics:
                dashboard_data['summary']['keys_with_data'] += 1

                # Calculate key statistics
                overall_metrics = metrics.score_metrics[0].overall_metrics
                key_assessments = sum(bucket.count for bucket in overall_metrics.score_buckets)

                key_weighted_score = sum(
                    bucket.count * ((bucket.lower_bound + bucket.upper_bound) / 2)
                    for bucket in overall_metrics.score_buckets
                )

                key_avg_score = key_weighted_score / key_assessments if key_assessments > 0 else 0

                all_assessments += key_assessments
                all_weighted_scores += key_weighted_score

                # Store key details
                dashboard_data['key_details'][key_id] = {
                    'total_assessments': key_assessments,
                    'average_score': key_avg_score,
                    'low_score_percentage': calculate_low_score_percentage(metrics)
                }

        except Exception as e:
            dashboard_data['key_details'][key_id] = {
                'error': str(e)
            }

    # Calculate overall statistics
    dashboard_data['summary']['total_assessments'] = all_assessments
    dashboard_data['summary']['overall_avg_score'] = (
        all_weighted_scores / all_assessments if all_assessments > 0 else 0
    )

    # Generate alerts
    for key_id, details in dashboard_data['key_details'].items():
        if 'error' not in details:
            if details['low_score_percentage'] > 30:
                dashboard_data['summary']['alerts'].append(
                    f"Key {key_id}: High suspicious activity ({details['low_score_percentage']:.1f}% low scores)"
                )
            if details['average_score'] < 0.5:
                dashboard_data['summary']['alerts'].append(
                    f"Key {key_id}: Low average score ({details['average_score']:.2f})"
                )

    return dashboard_data


# Create dashboard
dashboard = create_metrics_summary(client, "your-project-id", ["key1", "key2", "key3"])
print(json.dumps(dashboard, indent=2))
```

## Error Handling

```python
from google.api_core import exceptions

try:
    metrics = client.get_metrics(request=request)
except exceptions.NotFound as e:
    print(f"Key not found or no metrics available: {e}")
    # Key may not exist or may not have sufficient usage for metrics
except exceptions.PermissionDenied as e:
    print(f"Insufficient permissions to access metrics: {e}")
    # Check IAM permissions for the project and key
except exceptions.InvalidArgument as e:
    print(f"Invalid metrics request: {e}")
    # Check the metrics resource name format
```

## Best Practices

### Metrics Collection

- Regularly collect metrics to establish baselines
- Store historical metrics data for trend analysis (a collection sketch follows this list)
- Set up automated alerting for unusual patterns
- Monitor both score and challenge metrics

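One possible shape for routine collection is a scheduled job that writes date-stamped snapshots, reusing the `export_metrics_to_json` helper from the export example above. The daily cadence, snapshot directory, and key list here are illustrative assumptions, not part of the API.

```python
import os
from datetime import datetime, timezone

def collect_daily_snapshots(client, project_id, key_ids, snapshot_dir="metrics_snapshots"):
    """Collect one metrics snapshot per key and store it under a date-stamped filename."""
    os.makedirs(snapshot_dir, exist_ok=True)
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")

    for key_id in key_ids:
        # Reuses export_metrics_to_json() from the export example above (assumed to be in scope)
        output_file = os.path.join(snapshot_dir, f"{key_id}_{today}.json")
        export_metrics_to_json(client, project_id, key_id, output_file)

# Run from a scheduler (cron, Cloud Scheduler, etc.) once per day
collect_daily_snapshots(client, "your-project-id", ["web-key", "android-key", "ios-key"])
```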
### Analysis and Interpretation

- Consider seasonal patterns and legitimate traffic variations
- Correlate metrics with business events and marketing campaigns
- Use multiple metrics together for comprehensive analysis
- Establish realistic thresholds based on your application's usage patterns (a threshold-derivation sketch follows this list)

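As a sketch of deriving thresholds from your own history rather than fixed constants, the function below computes a baseline low-score percentage from previously stored snapshots (the snapshot format matches the export example above). The "mean plus a margin of standard deviations" rule and the fallback value are illustrative assumptions.

```python
import glob
import json
import statistics

def derive_low_score_threshold(snapshot_dir="metrics_snapshots", margin=1.5):
    """Derive an alerting threshold for the low-score percentage from stored snapshots."""
    historical_percentages = []

    for path in glob.glob(f"{snapshot_dir}/*.json"):
        with open(path) as f:
            snapshot = json.load(f)

        for score_metric in snapshot.get('score_metrics', []):
            buckets = score_metric.get('overall_distribution', [])
            total = sum(b['count'] for b in buckets)
            low = sum(b['count'] for b in buckets if b['upper_bound'] <= 0.3)
            if total > 0:
                historical_percentages.append(low / total * 100)

    if not historical_percentages:
        return 20.0  # Fall back to the fixed default used in the alerting example

    # Threshold = historical mean plus a margin of standard deviations
    mean = statistics.mean(historical_percentages)
    stdev = statistics.pstdev(historical_percentages)
    return mean + margin * stdev

low_score_alert_threshold = derive_low_score_threshold()
print(f"Alert when low-score percentage exceeds {low_score_alert_threshold:.1f}%")
```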
### Performance Optimization

- Use metrics to identify optimal score thresholds (see the sketch after this list)
- Adjust challenge preferences based on completion rates
- Monitor the impact of configuration changes
- Balance security and user experience based on metrics

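One way to pick a score threshold directly from the distribution returned by `get_metrics` is sketched below: it suggests the highest bucket boundary that still lets a target fraction of assessments through. The 90% target is an illustrative assumption you would tune to your own risk tolerance.

```python
def suggest_score_threshold(metrics, allow_fraction=0.90):
    """Suggest the highest bucket lower bound that keeps ~allow_fraction of assessments at or above it."""
    if not metrics.score_metrics:
        return None

    buckets = sorted(metrics.score_metrics[0].overall_metrics.score_buckets,
                     key=lambda b: b.lower_bound)
    total = sum(b.count for b in buckets)
    if total == 0:
        return None

    # Walk buckets from the lowest score upward; stop before cutting off too much traffic
    threshold = buckets[0].lower_bound
    cumulative_below = 0
    for bucket in buckets:
        if (total - cumulative_below) / total < allow_fraction:
            break
        threshold = bucket.lower_bound
        cumulative_below += bucket.count

    return threshold

metrics = client.get_metrics(request=recaptchaenterprise.GetMetricsRequest(
    name="projects/your-project-id/keys/your-key-id/metrics"
))
print(f"Suggested minimum score: {suggest_score_threshold(metrics)}")
```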
### Monitoring and Alerting

- Set up dashboards for real-time monitoring
- Create alerts for significant deviations from baseline (a baseline-deviation sketch follows this list)
- Monitor key performance indicators regularly
- Implement automated responses to certain metric patterns

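A minimal deviation check is sketched below, assuming a baseline value (for example one produced by the threshold-derivation sketch above) and the `calculate_low_score_percentage` helper from the multi-key comparison example. The 10-percentage-point tolerance and the notification hook are placeholders.

```python
def check_deviation_from_baseline(client, project_id, key_id, baseline_low_score_pct,
                                  tolerance_pct_points=10.0):
    """Flag a key whose current low-score percentage drifts well above its baseline."""
    request = recaptchaenterprise.GetMetricsRequest(
        name=f"projects/{project_id}/keys/{key_id}/metrics"
    )
    metrics = client.get_metrics(request=request)

    current = calculate_low_score_percentage(metrics)  # Helper from the multi-key comparison example
    if current > baseline_low_score_pct + tolerance_pct_points:
        # Replace this print with your alerting channel (email, Cloud Monitoring, pager, ...)
        print(f"[ALERT] Key {key_id}: low-score percentage {current:.1f}% "
              f"exceeds baseline {baseline_low_score_pct:.1f}% by more than {tolerance_pct_points} points")
        return True
    return False

check_deviation_from_baseline(client, "your-project-id", "your-key-id", baseline_low_score_pct=5.0)
```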
### Data Retention and Compliance

- Understand metrics data retention policies
- Export important metrics data for long-term storage (a rotation sketch follows this list)
- Ensure compliance with data protection regulations
- Document metrics collection and analysis procedures
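
For long-term storage with a bounded footprint, a simple local rotation sketch is shown below; the 365-day retention window and snapshot directory are assumptions to adapt to your own retention and compliance requirements.

```python
import glob
import os
import time

def prune_old_snapshots(snapshot_dir="metrics_snapshots", retention_days=365):
    """Delete exported metrics snapshots older than the retention window."""
    cutoff = time.time() - retention_days * 24 * 60 * 60

    for path in glob.glob(os.path.join(snapshot_dir, "*.json")):
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
            print(f"Removed expired snapshot: {path}")

prune_old_snapshots()
```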