
# Backup Services

The backup service provides a unified interface for backup and snapshot management across multiple cloud backup providers, including AWS EBS Snapshots, Google Persistent Disk Snapshots, Azure Disk Snapshots, and other backup-as-a-service providers.

## Providers

```python { .api }
from libcloud.backup.types import Provider

class Provider:
    """Enumeration of supported backup providers"""
    EBS = 'ebs'                      # AWS EBS Snapshots
    GCE = 'gce'                      # Google Persistent Disk Snapshots
    AZURE_ARM = 'azure_arm'          # Azure Resource Manager Snapshots
    DIMENSIONDATA = 'dimensiondata'  # Dimension Data Backup
    CLOUDSTACK = 'cloudstack'        # CloudStack Snapshots
    # ... more providers
```

## Driver Factory

```python { .api }
from libcloud.backup.providers import get_driver

def get_driver(provider: Provider) -> type[BackupDriver]
```

Get the driver class for a specific backup provider.

**Parameters:**

- `provider`: Provider identifier from the `Provider` enum

**Returns:**

- Driver class for the specified provider

**Example:**

```python
from libcloud.backup.types import Provider
from libcloud.backup.providers import get_driver

# Get AWS EBS backup driver class
cls = get_driver(Provider.EBS)

# Initialize driver with credentials
driver = cls('access_key', 'secret_key', region='us-east-1')
```

## Core Classes

### BackupDriver

```python { .api }
class BackupDriver(BaseDriver):
    """Base class for all backup drivers"""

    def list_targets(self) -> List[BackupTarget]
    def get_target(self, target_id: str) -> BackupTarget
    def create_target_from_node(self, node: Node, name: str = None, ex_use_tags: bool = True) -> BackupTarget
    def create_target_from_container(self, container: Container, name: str = None) -> BackupTarget
    def update_target(self, target: BackupTarget, name: str = None, extra: Dict = None) -> BackupTarget
    def delete_target(self, target: BackupTarget) -> bool
    def list_recovery_points(self, target: BackupTarget, start_date: datetime = None, end_date: datetime = None) -> List[BackupTarget]
    def recover_target(self, target: BackupTarget, recovery_point: BackupTarget, recovery_target_name: str = None) -> Node
    def recover_target_out_of_place(self, target: BackupTarget, recovery_point: BackupTarget, recovery_target_name: str = None, **kwargs) -> Node
    def create_target_backup_job(self, target: BackupTarget, extra: Dict = None) -> BackupTargetJob
    def list_target_jobs(self, target: BackupTarget) -> List[BackupTargetJob]
    def ex_list_available_backup_locations(self) -> List[Dict]
```

Base class that all backup drivers inherit from. Provides methods for managing backup targets, recovery points, and backup jobs.

**Key Methods:**

- `list_targets()`: List all backup targets
- `create_target_from_node()`: Create backup target from compute node
- `list_recovery_points()`: List available recovery points for a target
- `recover_target()`: Restore a target from a recovery point
- `create_target_backup_job()`: Create a backup job
- `delete_target()`: Delete a backup target
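
The usage examples later in this document focus on node-based targets; `create_target_from_container()` and `update_target()` follow the same pattern. A minimal, hypothetical sketch (the `storage_container` object and the new name are placeholders, and container-backed targets are not supported by every provider):

```python
# Hypothetical sketch: create a target from a container object obtained elsewhere,
# then rename it. `storage_container` is a placeholder, not a documented variable.
container_target = backup_driver.create_target_from_container(
    container=storage_container,
    name='backup-my-container'
)

# update_target() returns the updated BackupTarget.
renamed = backup_driver.update_target(container_target, name='backup-my-container-v2')
print(f"Renamed target: {renamed.name}")
```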

### BackupTarget

```python { .api }
class BackupTarget:
    """Represents a backup target"""

    id: str
    name: str
    address: str
    type: BackupTargetType
    size: int
    driver: BackupDriver
    extra: Dict[str, Any]

    def list_recovery_points(self, start_date: datetime = None, end_date: datetime = None) -> List[BackupTarget]
    def recover(self, recovery_point: BackupTarget, recovery_target_name: str = None) -> Node
    def backup(self, name: str = None) -> BackupTargetJob
    def delete(self) -> bool
```

Represents a backup target (source for backups like a disk, volume, or node).

**Properties:**

- `id`: Unique backup target identifier
- `name`: Human-readable name
- `address`: Target address/identifier (volume ID, node ID, etc.)
- `type`: Type of backup target (volume, node, etc.)
- `size`: Size in bytes of the backup target
- `extra`: Provider-specific metadata

**Methods:**

- `list_recovery_points()`: List recovery points for this target
- `recover()`: Restore this target from a recovery point
- `backup()`: Create a backup of this target
- `delete()`: Delete this backup target
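
The instance methods mirror the driver-level calls (the usual libcloud pattern is that they delegate to the target's `driver`). A brief sketch, assuming `backup_driver` is initialized as in the usage examples below:

```python
target = backup_driver.get_target('backup-target-123')

# target.delete() is the instance-level counterpart of
# backup_driver.delete_target(target); both return a bool.
if target.delete():
    print(f"Deleted backup target {target.name}")
```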

### BackupTargetJob

```python { .api }
class BackupTargetJob:
    """Represents a backup job"""

    id: str
    target_id: str
    status: BackupTargetJobStatusType
    progress: float
    created_at: datetime
    driver: BackupDriver
    extra: Dict[str, Any]
```

Represents a backup job/operation.

**Properties:**

- `id`: Unique job identifier
- `target_id`: ID of the backup target
- `status`: Current job status (running, completed, failed, etc.)
- `progress`: Job progress as a fraction from 0.0 to 1.0
- `created_at`: Job creation timestamp
- `extra`: Provider-specific job metadata
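
Note that `progress` is a fraction, not a percent; scale it (or use the `%` format specifier) when displaying it. A tiny sketch, assuming `job` is any `BackupTargetJob` returned by the driver:

```python
# Display the fractional progress value as a percentage.
print(f"Job {job.id} ({job.status}): {job.progress:.0%} complete")
# Equivalent without the % format spec:
print(f"Job {job.id} ({job.status}): {job.progress * 100:.0f}% complete")
```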

### BackupTargetType

```python { .api }
class BackupTargetType:
    """Backup target types enumeration"""
    VOLUME = 'volume'
    NODE = 'node'
    CONTAINER = 'container'
    FILE_SYSTEM = 'file_system'
    DATABASE = 'database'
    VIRTUAL_MACHINE = 'virtual_machine'
```

Enumeration of supported backup target types.
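
The constants are plain strings, so they can be compared directly against `BackupTarget.type`. A short sketch, assuming `backup_driver` is initialized as in the usage examples below:

```python
from libcloud.backup.types import BackupTargetType

# Report on volume-backed targets only.
targets = backup_driver.list_targets()
volume_targets = [t for t in targets if t.type == BackupTargetType.VOLUME]
print(f"{len(volume_targets)} of {len(targets)} targets are volumes")
```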

### BackupTargetJobStatusType

```python { .api }
class BackupTargetJobStatusType:
    """Backup job status types enumeration"""
    PENDING = 'pending'
    RUNNING = 'running'
    COMPLETED = 'completed'
    FAILED = 'failed'
    CANCELLED = 'cancelled'
    UNKNOWN = 'unknown'
```

Enumeration of possible backup job statuses.
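
These values make it straightforward to poll a job until it reaches a terminal state. A minimal sketch, assuming a `target` and `backup_job` created as in the usage examples below (the 30-second interval is an arbitrary choice):

```python
import time

from libcloud.backup.types import BackupTargetJobStatusType

TERMINAL_STATUSES = {
    BackupTargetJobStatusType.COMPLETED,
    BackupTargetJobStatusType.FAILED,
    BackupTargetJobStatusType.CANCELLED,
}

# Re-fetch the job list until our job disappears or reaches a terminal status.
while True:
    jobs = backup_driver.list_target_jobs(target)
    current = next((j for j in jobs if j.id == backup_job.id), None)
    if current is None or current.status in TERMINAL_STATUSES:
        break
    print(f"Job {backup_job.id}: {current.status} ({current.progress:.1%})")
    time.sleep(30)
```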

170

171

## Usage Examples

172

173

### Basic Backup Target Management

174

175

```python

176

from libcloud.backup.types import Provider, BackupTargetType

177

from libcloud.backup.providers import get_driver

178

from libcloud.compute.types import Provider as ComputeProvider

179

from libcloud.compute.providers import get_driver as get_compute_driver

180

181

# Initialize backup driver (AWS EBS example)

182

backup_cls = get_driver(Provider.EBS)

183

backup_driver = backup_cls('access_key', 'secret_key', region='us-east-1')

184

185

# Initialize compute driver to get nodes

186

compute_cls = get_compute_driver(ComputeProvider.EC2)

187

compute_driver = compute_cls('access_key', 'secret_key', region='us-east-1')

188

189

# List existing backup targets

190

targets = backup_driver.list_targets()

191

print(f"Existing backup targets: {len(targets)}")

192

193

for target in targets:

194

print(f"Target: {target.name} (Type: {target.type}, Size: {target.size} bytes)")

195

print(f" Address: {target.address}")

196

print(f" Created: {target.extra.get('created_at', 'unknown')}")

197

198

# Create backup target from a compute node

199

nodes = compute_driver.list_nodes()

200

if nodes:

201

node = nodes[0] # Use first node

202

backup_target = backup_driver.create_target_from_node(

203

node=node,

204

name=f'backup-{node.name}',

205

ex_use_tags=True

206

)

207

print(f"Created backup target: {backup_target.name} ({backup_target.id})")

208

```

### Backup Creation and Management

```python
# Get a backup target
target = backup_driver.get_target('backup-target-123')
print(f"Backup target: {target.name}")

# Create a backup job
backup_job = backup_driver.create_target_backup_job(
    target=target,
    extra={'description': 'Daily backup', 'retention_days': 30}
)
print(f"Created backup job: {backup_job.id} (Status: {backup_job.status})")

# Alternative: Create backup using target method
backup_job2 = target.backup(name='manual-backup-2023-10-15')
print(f"Created backup via target: {backup_job2.id}")

# List all backup jobs for a target
jobs = backup_driver.list_target_jobs(target)
print(f"Backup jobs for {target.name}: {len(jobs)}")

for job in jobs:
    print(f" Job {job.id}: {job.status} ({job.progress:.1%} complete)")
    print(f" Created: {job.created_at}")
    if job.extra:
        print(f" Extra: {job.extra}")
```

### Recovery Point Management

```python
from datetime import datetime, timedelta

# List recovery points for a target
recovery_points = backup_driver.list_recovery_points(target)
print(f"Available recovery points: {len(recovery_points)}")

for rp in recovery_points:
    print(f"Recovery Point: {rp.name} (Created: {rp.extra.get('created_at')})")
    print(f" Size: {rp.size} bytes")

# List recovery points within a date range
end_date = datetime.now()
start_date = end_date - timedelta(days=7)  # Last 7 days

recent_recovery_points = backup_driver.list_recovery_points(
    target,
    start_date=start_date,
    end_date=end_date
)
print(f"Recovery points from last 7 days: {len(recent_recovery_points)}")

# Alternative: List using target method
target_recovery_points = target.list_recovery_points(start_date=start_date)
print(f"Target recovery points: {len(target_recovery_points)}")
```

### Backup Recovery and Restoration

```python
# Recover target in place (restore to original location)
if recovery_points:
    latest_recovery_point = recovery_points[0]  # Assuming sorted by date

    print(f"Restoring {target.name} from recovery point {latest_recovery_point.name}")
    restored_node = backup_driver.recover_target(
        target=target,
        recovery_point=latest_recovery_point,
        recovery_target_name=f'restored-{target.name}'
    )
    print(f"Restored node: {restored_node.name} ({restored_node.id})")

# Recover target out of place (restore to new location/instance)
if recovery_points:
    recovery_point = recovery_points[0]

    restored_node = backup_driver.recover_target_out_of_place(
        target=target,
        recovery_point=recovery_point,
        recovery_target_name='disaster-recovery-instance',
        ex_instance_type='t3.medium',  # Different instance type
        ex_subnet_id='subnet-new-123',  # Different subnet
        ex_security_groups=['sg-disaster-recovery']
    )
    print(f"Out-of-place recovery completed: {restored_node.name}")

# Alternative: Recover using target method
if recovery_points:
    restored_via_target = target.recover(
        recovery_point=recovery_points[0],
        recovery_target_name='target-method-recovery'
    )
    print(f"Recovered via target method: {restored_via_target.name}")
```

### Automated Backup Scheduling

```python
import time
from datetime import datetime, timedelta
from typing import List, Dict

from libcloud.backup.base import BackupTarget

def create_backup_schedule(backup_driver, targets: List[BackupTarget], schedule_config: Dict):
    """Create automated backup schedule"""

    def should_backup(target: BackupTarget, config: Dict) -> bool:
        """Check if target should be backed up based on schedule"""

        # Get last backup time
        jobs = backup_driver.list_target_jobs(target)
        completed_jobs = [j for j in jobs if j.status == 'completed']

        if not completed_jobs:
            return True  # No backups yet

        # Sort by creation time and get latest
        latest_job = max(completed_jobs, key=lambda j: j.created_at)
        last_backup = latest_job.created_at

        # Check if enough time has passed
        interval_hours = config.get('interval_hours', 24)
        time_since_backup = datetime.now() - last_backup

        return time_since_backup >= timedelta(hours=interval_hours)

    # Main scheduling loop
    print(f"Starting backup scheduler for {len(targets)} targets")

    while True:
        try:
            for target in targets:
                target_config = schedule_config.get(
                    target.name,
                    schedule_config.get('default', {})
                )

                if should_backup(target, target_config):
                    print(f"Creating scheduled backup for {target.name}")

                    backup_job = backup_driver.create_target_backup_job(
                        target=target,
                        extra={
                            'scheduled': True,
                            'retention_days': target_config.get('retention_days', 7)
                        }
                    )
                    print(f" Created job: {backup_job.id}")

                    # Clean up old backups if configured
                    if target_config.get('cleanup_old_backups', False):
                        cleanup_old_backups(backup_driver, target, target_config)

        except Exception as e:
            print(f"Error in backup scheduler: {e}")

        # Wait before next check
        sleep_minutes = schedule_config.get('check_interval_minutes', 60)
        time.sleep(sleep_minutes * 60)

def cleanup_old_backups(backup_driver, target: BackupTarget, config: Dict):
    """Clean up old backup recovery points"""

    retention_days = config.get('retention_days', 7)
    cutoff_date = datetime.now() - timedelta(days=retention_days)

    recovery_points = backup_driver.list_recovery_points(target)

    for rp in recovery_points:
        created_date = rp.extra.get('created_at')
        if created_date and isinstance(created_date, datetime) and created_date < cutoff_date:
            try:
                success = backup_driver.delete_target(rp)
                if success:
                    print(f" Cleaned up old backup: {rp.name}")
            except Exception as e:
                print(f" Failed to clean up {rp.name}: {e}")

# Usage example
schedule_config = {
    'default': {
        'interval_hours': 24,          # Daily backups
        'retention_days': 7,           # Keep for 7 days
        'cleanup_old_backups': True
    },
    'critical-db-backup': {
        'interval_hours': 4,           # Every 4 hours for critical systems
        'retention_days': 30,          # Keep for 30 days
        'cleanup_old_backups': True
    },
    'check_interval_minutes': 60       # Check every hour
}

# Get targets to backup
all_targets = backup_driver.list_targets()
important_targets = [t for t in all_targets if 'prod' in t.name.lower()]

# Start scheduler (run in separate thread/process)
# create_backup_schedule(backup_driver, important_targets, schedule_config)
```

### Cross-Provider Backup Strategy

```python
from libcloud.backup.types import Provider as BackupProvider
from libcloud.backup.providers import get_driver as get_backup_driver

# Configure multiple backup providers for redundancy
backup_providers = {
    'aws_ebs': {
        'driver': get_backup_driver(BackupProvider.EBS),
        'credentials': ('aws_access_key', 'aws_secret_key'),
        'region': 'us-east-1'
    },
    'gce_snapshots': {
        'driver': get_backup_driver(BackupProvider.GCE),
        'credentials': ('service_account_email', 'key_file_path'),
        'project': 'my-project'
    }
}

# Initialize backup drivers
backup_drivers = {}
for name, config in backup_providers.items():
    cls = config['driver']
    if name == 'aws_ebs':
        backup_drivers[name] = cls(*config['credentials'], region=config['region'])
    elif name == 'gce_snapshots':
        backup_drivers[name] = cls(*config['credentials'], project=config['project'])

def create_cross_provider_backup(compute_node, backup_name: str):
    """Create backups across multiple providers for redundancy"""

    backup_results = {}

    for provider_name, backup_driver in backup_drivers.items():
        try:
            print(f"Creating backup on {provider_name}...")

            # Create backup target
            target = backup_driver.create_target_from_node(
                node=compute_node,
                name=f'{backup_name}-{provider_name}'
            )

            # Create backup job
            job = backup_driver.create_target_backup_job(
                target=target,
                extra={'cross_provider_backup': True}
            )

            backup_results[provider_name] = {
                'target': target,
                'job': job,
                'status': 'initiated'
            }

            print(f" Backup initiated: {job.id}")

        except Exception as e:
            print(f" Failed to create backup on {provider_name}: {e}")
            backup_results[provider_name] = {
                'status': 'failed',
                'error': str(e)
            }

    return backup_results

# Usage
node = compute_driver.list_nodes()[0]  # Get a compute node
cross_provider_backups = create_cross_provider_backup(node, 'disaster-recovery-backup')
```

### Backup Monitoring and Reporting

```python
import json
import time
from datetime import datetime, timedelta
from typing import Dict, List

from libcloud.backup.base import BackupTarget

def generate_backup_report(backup_driver, targets: List[BackupTarget] = None) -> Dict:
    """Generate comprehensive backup report"""

    if targets is None:
        targets = backup_driver.list_targets()

    report = {
        'generated_at': datetime.now().isoformat(),
        'total_targets': len(targets),
        'target_summary': [],
        'overall_stats': {
            'healthy_targets': 0,
            'targets_with_recent_backups': 0,
            'failed_jobs_last_24h': 0,
            'total_backup_size': 0
        }
    }

    cutoff_24h = datetime.now() - timedelta(hours=24)
    cutoff_7d = datetime.now() - timedelta(days=7)

    for target in targets:
        try:
            # Get jobs for this target
            jobs = backup_driver.list_target_jobs(target)

            # Get recovery points
            recovery_points = backup_driver.list_recovery_points(target)

            # Analyze job status
            recent_jobs = [j for j in jobs if j.created_at >= cutoff_24h]
            failed_jobs_24h = [j for j in recent_jobs if j.status == 'failed']
            successful_jobs = [j for j in jobs if j.status == 'completed']

            # Find latest successful backup
            latest_successful = None
            if successful_jobs:
                latest_successful = max(successful_jobs, key=lambda j: j.created_at)

            # Determine target health
            is_healthy = (
                len(failed_jobs_24h) == 0 and
                latest_successful is not None and
                latest_successful.created_at >= cutoff_7d
            )

            target_info = {
                'name': target.name,
                'id': target.id,
                'type': target.type,
                'size_bytes': target.size,
                'is_healthy': is_healthy,
                'total_jobs': len(jobs),
                'failed_jobs_24h': len(failed_jobs_24h),
                'recovery_points_count': len(recovery_points),
                'latest_backup': latest_successful.created_at.isoformat() if latest_successful else None,
                'days_since_backup': (datetime.now() - latest_successful.created_at).days if latest_successful else None
            }

            report['target_summary'].append(target_info)

            # Update overall stats
            if is_healthy:
                report['overall_stats']['healthy_targets'] += 1

            if latest_successful and latest_successful.created_at >= cutoff_7d:
                report['overall_stats']['targets_with_recent_backups'] += 1

            report['overall_stats']['failed_jobs_last_24h'] += len(failed_jobs_24h)
            report['overall_stats']['total_backup_size'] += target.size

        except Exception as e:
            print(f"Error analyzing target {target.name}: {e}")
            target_info = {
                'name': target.name,
                'id': target.id,
                'error': str(e),
                'is_healthy': False
            }
            report['target_summary'].append(target_info)

    return report

def monitor_backup_jobs(backup_driver, targets: List[BackupTarget], alert_callback=None):
    """Monitor backup job progress and alert on failures"""

    active_jobs = {}  # Track jobs we're monitoring

    while True:
        try:
            for target in targets:
                jobs = backup_driver.list_target_jobs(target)

                for job in jobs:
                    if job.status in ['pending', 'running']:
                        # Track or update active job
                        if job.id not in active_jobs:
                            active_jobs[job.id] = {
                                'job': job,
                                'target': target,
                                'started_monitoring': datetime.now()
                            }
                            print(f"Started monitoring job {job.id} for {target.name}")
                        else:
                            # Update progress
                            old_progress = active_jobs[job.id]['job'].progress
                            if job.progress > old_progress:
                                print(f"Job {job.id} progress: {job.progress:.1%}")
                            active_jobs[job.id]['job'] = job

                    elif job.status in ['completed', 'failed', 'cancelled']:
                        # Job finished
                        if job.id in active_jobs:
                            duration = datetime.now() - active_jobs[job.id]['started_monitoring']
                            print(f"Job {job.id} finished: {job.status} (Duration: {duration})")

                            if job.status == 'failed' and alert_callback:
                                alert_callback(f"Backup job failed: {job.id} for target {target.name}")

                            del active_jobs[job.id]

            # Check for stuck jobs
            stuck_threshold = timedelta(hours=4)
            current_time = datetime.now()

            for job_id, job_info in list(active_jobs.items()):
                monitoring_duration = current_time - job_info['started_monitoring']
                if monitoring_duration > stuck_threshold:
                    print(f"WARNING: Job {job_id} appears stuck (running for {monitoring_duration})")
                    if alert_callback:
                        alert_callback(f"Backup job appears stuck: {job_id}")

        except Exception as e:
            print(f"Error monitoring backup jobs: {e}")

        time.sleep(60)  # Check every minute

# Usage examples
def backup_alert_handler(message: str):
    """Handle backup alerts (email, Slack, etc.)"""
    print(f"ALERT: {message}")
    # Implement your alerting mechanism here

# Generate backup report
backup_report = generate_backup_report(backup_driver)

# Save report to file
with open(f'backup_report_{datetime.now().strftime("%Y%m%d_%H%M%S")}.json', 'w') as f:
    json.dump(backup_report, f, indent=2)

# Print summary
print("Backup Report Summary:")
print(f" Total targets: {backup_report['total_targets']}")
print(f" Healthy targets: {backup_report['overall_stats']['healthy_targets']}")
print(f" Recent backups: {backup_report['overall_stats']['targets_with_recent_backups']}")
print(f" Failed jobs (24h): {backup_report['overall_stats']['failed_jobs_last_24h']}")

# Start monitoring (run in separate thread/process)
# all_targets = backup_driver.list_targets()
# monitor_backup_jobs(backup_driver, all_targets, backup_alert_handler)
```

### Disaster Recovery Planning

```python
import json
from datetime import datetime
from typing import Dict, List

from libcloud.backup.base import BackupTarget

def create_disaster_recovery_plan(backup_driver, compute_driver, targets: List[BackupTarget]):
    """Create a disaster recovery plan with automated recovery procedures"""

    dr_plan = {
        'created_at': datetime.now().isoformat(),
        'targets': [],
        'recovery_procedures': []
    }

    for target in targets:
        # Get recovery points
        recovery_points = backup_driver.list_recovery_points(target)

        if not recovery_points:
            print(f"WARNING: No recovery points found for {target.name}")
            continue

        # Find best recovery point (most recent successful)
        best_recovery_point = max(recovery_points, key=lambda rp: rp.extra.get('created_at', datetime.min))

        target_dr_info = {
            'target_id': target.id,
            'target_name': target.name,
            'target_type': target.type,
            'best_recovery_point': best_recovery_point.id,
            'recovery_point_date': best_recovery_point.extra.get('created_at'),
            'estimated_recovery_time_minutes': estimate_recovery_time(target),
            'recovery_priority': get_recovery_priority(target),
            'dependencies': get_target_dependencies(target)
        }

        dr_plan['targets'].append(target_dr_info)

    # Sort by priority (higher number = higher priority)
    dr_plan['targets'].sort(key=lambda t: t['recovery_priority'], reverse=True)

    # Generate recovery procedures
    for i, target_info in enumerate(dr_plan['targets']):
        procedure = {
            'step': i + 1,
            'target': target_info['target_name'],
            'action': 'recover_target',
            'parameters': {
                'target_id': target_info['target_id'],
                'recovery_point_id': target_info['best_recovery_point'],
                'recovery_name': f"dr-{target_info['target_name']}-{datetime.now().strftime('%Y%m%d')}"
            },
            'estimated_duration_minutes': target_info['estimated_recovery_time_minutes'],
            'dependencies': target_info['dependencies']
        }
        dr_plan['recovery_procedures'].append(procedure)

    return dr_plan

def estimate_recovery_time(target: BackupTarget) -> int:
    """Estimate recovery time based on target size and type"""
    size_gb = target.size / (1024 ** 3)

    # Base time estimates (minutes per GB)
    base_times = {
        'volume': 2,     # 2 minutes per GB for volumes
        'node': 5,       # 5 minutes per GB for full nodes
        'container': 1,  # 1 minute per GB for containers
        'database': 3    # 3 minutes per GB for databases
    }

    base_time = base_times.get(target.type, 3)
    return max(int(size_gb * base_time), 15)  # Minimum 15 minutes

def get_recovery_priority(target: BackupTarget) -> int:
    """Determine recovery priority (1-10, 10 being highest)"""
    name_lower = target.name.lower()

    if 'critical' in name_lower or 'prod' in name_lower:
        return 10
    elif 'important' in name_lower or 'web' in name_lower:
        return 7
    elif 'db' in name_lower or 'database' in name_lower:
        return 9
    elif 'test' in name_lower or 'dev' in name_lower:
        return 3
    else:
        return 5

def get_target_dependencies(target: BackupTarget) -> List[str]:
    """Get list of dependencies for recovery ordering"""
    # This would typically analyze the target's configuration
    # For now, return basic dependencies based on naming
    dependencies = []
    name_lower = target.name.lower()

    if 'web' in name_lower:
        dependencies.extend(['database', 'cache'])
    elif 'app' in name_lower:
        dependencies.extend(['database'])

    return dependencies

def execute_disaster_recovery(backup_driver, compute_driver, dr_plan: Dict):
    """Execute disaster recovery plan"""

    print("Starting disaster recovery execution...")
    print(f"Plan created: {dr_plan['created_at']}")
    print(f"Total targets to recover: {len(dr_plan['targets'])}")

    recovered_targets = {}

    for procedure in dr_plan['recovery_procedures']:
        step = procedure['step']
        target_name = procedure['target']

        print(f"\nStep {step}: Recovering {target_name}")
        print(f"Estimated duration: {procedure['estimated_duration_minutes']} minutes")

        # Check dependencies
        dependencies = procedure['dependencies']
        if dependencies:
            print(f"Dependencies: {', '.join(dependencies)}")
            for dep in dependencies:
                if dep not in recovered_targets:
                    print(f"WARNING: Dependency {dep} not yet recovered")

        try:
            # Get target and recovery point
            target = backup_driver.get_target(procedure['parameters']['target_id'])
            recovery_points = backup_driver.list_recovery_points(target)
            recovery_point = next(
                rp for rp in recovery_points
                if rp.id == procedure['parameters']['recovery_point_id']
            )

            # Execute recovery
            print(f"Recovering from point: {recovery_point.name}")
            recovered_node = backup_driver.recover_target_out_of_place(
                target=target,
                recovery_point=recovery_point,
                recovery_target_name=procedure['parameters']['recovery_name']
            )

            recovered_targets[target_name] = recovered_node
            print(f"✓ Recovery completed: {recovered_node.name} ({recovered_node.id})")

        except Exception as e:
            print(f"✗ Recovery failed for {target_name}: {e}")
            # Log failure and continue with next target

    print(f"\nDisaster recovery completed. Recovered {len(recovered_targets)} targets.")
    return recovered_targets

# Usage
all_targets = backup_driver.list_targets()
critical_targets = [t for t in all_targets if 'prod' in t.name.lower() or 'critical' in t.name.lower()]

# Create DR plan
dr_plan = create_disaster_recovery_plan(backup_driver, compute_driver, critical_targets)

# Save DR plan (default=str handles datetime values in the plan)
with open(f'disaster_recovery_plan_{datetime.now().strftime("%Y%m%d")}.json', 'w') as f:
    json.dump(dr_plan, f, indent=2, default=str)

print("Disaster Recovery Plan created:")
for procedure in dr_plan['recovery_procedures']:
    print(f" Step {procedure['step']}: {procedure['target']} ({procedure['estimated_duration_minutes']} min)")

# Execute DR plan (only in actual disaster scenario)
# recovered_nodes = execute_disaster_recovery(backup_driver, compute_driver, dr_plan)
```

## Exception Handling

```python
from libcloud.backup.types import BackupError
from libcloud.common.types import LibcloudError, InvalidCredsError

try:
    # Create backup target
    target = backup_driver.create_target_from_node(node, name='test-backup')

    # Create backup job
    job = backup_driver.create_target_backup_job(target)

except InvalidCredsError:
    print("Invalid credentials for backup provider")
except BackupError as e:
    print(f"Backup specific error: {e}")
except LibcloudError as e:
    print(f"General Libcloud error: {e}")

# Check job status before operations
if job.status == 'completed':
    recovery_points = backup_driver.list_recovery_points(target)
elif job.status == 'failed':
    print(f"Backup job failed: {job.extra.get('error_message', 'Unknown error')}")
```

## Provider-Specific Features

Different providers offer additional features through the `ex_*` parameter pattern:

```python
# AWS EBS specific features
ebs_driver = get_driver(Provider.EBS)('access_key', 'secret_key', region='us-east-1')

# Create backup with EBS-specific options
ebs_backup = ebs_driver.create_target_backup_job(
    target=target,
    extra={
        'description': 'Daily automated backup',
        'encrypted': True,                 # Encrypt snapshot
        'copy_tags': True,                 # Copy tags from source volume
        'kms_key_id': 'arn:aws:kms:...',   # Custom KMS key
    }
)

# List available backup locations
backup_locations = ebs_driver.ex_list_available_backup_locations()
for location in backup_locations:
    print(f"Backup location: {location['name']} ({location['region']})")

# Google Cloud specific features
gce_driver = get_driver(Provider.GCE)('email', 'key_file', project='my-project')

# Create backup with GCE-specific options
gce_backup = gce_driver.create_target_backup_job(
    target=target,
    extra={
        'storage_location': 'us-central1',  # Regional storage
        'labels': {'environment': 'production', 'team': 'devops'}
    }
)
```

Check provider-specific documentation for additional capabilities available through the `ex_*` parameters and methods.
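
A quick way to see which extensions a particular driver exposes is to introspect the driver object for the `ex_` prefix; a short sketch, assuming `driver` is any initialized backup driver:

```python
# List provider-specific extension methods (libcloud prefixes them with ex_).
ex_methods = sorted(name for name in dir(driver) if name.startswith('ex_'))
print("Extension methods:", ", ".join(ex_methods))
```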