# Inference Algorithms

Exact and approximate inference algorithms for computing marginal probabilities, answering MAP queries, and estimating causal effects. pgmpy provides inference engines for a range of query types and model structures.

## Capabilities

### Variable Elimination

Exact inference that computes marginal probabilities and answers MAP queries by eliminating (summing or maximizing out) variables one at a time.

```python { .api }
class VariableElimination:
    def __init__(self, model):
        """
        Initialize variable elimination inference for a model.

        Parameters:
        - model: DiscreteBayesianNetwork, MarkovNetwork, or FactorGraph
        """

    def query(self, variables, evidence=None, elimination_order="MinFill",
              joint=True, show_progress=True):
        """
        Compute the marginal probability distribution of the query variables.

        Parameters:
        - variables: list of query variables
        - evidence: dict of observed evidence {variable: value}
        - elimination_order: heuristic name ("MinFill", "MinNeighbors", "MinWeight") or an explicit list of variables
        - joint: whether to return the joint distribution over all query variables
        - show_progress: whether to show a progress bar

        Returns:
        DiscreteFactor: Marginal probability distribution
        """

    def max_marginal(self, variables, evidence=None,
                     elimination_order="MinFill", show_progress=True):
        """
        Compute max-marginal probabilities.

        Parameters:
        - variables: list of query variables
        - evidence: dict of observed evidence
        - elimination_order: variable elimination order
        - show_progress: whether to show a progress bar

        Returns:
        DiscreteFactor: Max-marginal distribution
        """

    def map_query(self, variables=None, evidence=None,
                  elimination_order="MinFill", show_progress=True):
        """
        Find the Maximum A Posteriori (MAP) assignment.

        Parameters:
        - variables: list of MAP variables (None for all unobserved variables)
        - evidence: dict of observed evidence
        - elimination_order: variable elimination order
        - show_progress: whether to show a progress bar

        Returns:
        dict: MAP assignment {variable: value}
        """

    def induced_graph(self, elimination_order):
        """
        Get the induced graph for a given elimination order.

        Parameters:
        - elimination_order: list of variables in elimination order

        Returns:
        networkx.Graph: Induced graph
        """

    def induced_width(self, elimination_order):
        """
        Compute the induced width (treewidth) of an elimination order.

        Parameters:
        - elimination_order: list of variables in elimination order

        Returns:
        int: Induced width
        """
```
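
As a concrete illustration of what `query` computes, here is a minimal, self-contained sketch of variable elimination on a binary chain A → B → C. The CPD numbers and helper names are illustrative, not part of the pgmpy API; eliminating B yields P(C | A).

```python
# Toy CPDs for a binary chain A -> B -> C (illustrative numbers).
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (b, a)
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # key: (c, b)

def query_c_given_a(a_val):
    """Compute P(C | A=a_val) by eliminating (summing out) B, then normalizing."""
    unnorm = {}
    for c in (0, 1):
        # Eliminate B: sum_b P(b | a) * P(c | b)
        unnorm[c] = sum(p_b_given_a[(b, a_val)] * p_c_given_b[(c, b)] for b in (0, 1))
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}
```

With these numbers, `query_c_given_a(1)` gives P(C=0 | A=1) = P(C=1 | A=1) = 0.5; pgmpy's `VariableElimination.query` performs the same bookkeeping with `DiscreteFactor` objects and a heuristic elimination order.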

### Belief Propagation

Exact inference using belief propagation on junction trees.

```python { .api }
class BeliefPropagation:
    def __init__(self, model):
        """
        Initialize belief propagation inference.

        Parameters:
        - model: DiscreteBayesianNetwork, MarkovNetwork, or JunctionTree
        """

    def calibrate(self):
        """
        Calibrate the junction tree by passing sum-product messages.

        Returns:
        None: Calibrates clique potentials in-place
        """

    def max_calibrate(self):
        """
        Calibrate the junction tree using the max-product algorithm.

        Returns:
        None: Calibrates clique potentials for MAP queries
        """

    def query(self, variables, evidence=None, joint=True, show_progress=True):
        """
        Query marginal probabilities after calibration.

        Parameters:
        - variables: list of query variables
        - evidence: dict of observed evidence
        - joint: whether to return the joint distribution
        - show_progress: whether to show a progress bar

        Returns:
        DiscreteFactor: Marginal probability distribution
        """

    def map_query(self, variables=None, evidence=None, show_progress=True):
        """
        Find the MAP assignment using max-product belief propagation.

        Parameters:
        - variables: list of MAP variables
        - evidence: dict of observed evidence
        - show_progress: whether to show a progress bar

        Returns:
        dict: MAP assignment
        """

    def get_cliques(self):
        """Get the cliques in the junction tree."""

    def get_sepset(self, clique1, clique2):
        """Get the separator set between two cliques."""
```
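
To make the calibration step concrete, here is a hand-worked sketch of a single sum-product message on a two-clique junction tree (cliques {A,B} and {B,C}, separator {B}). The potentials are illustrative numbers in plain dicts, not pgmpy objects:

```python
# phi_ab holds P(A)P(B|A); phi_bc holds P(C|B) (illustrative numbers).
phi_ab = {(0, 0): 0.42, (0, 1): 0.18, (1, 0): 0.08, (1, 1): 0.32}  # key: (a, b)
phi_bc = {(0, 0): 0.90, (0, 1): 0.10, (1, 0): 0.40, (1, 1): 0.60}  # key: (b, c)

# Message from clique {A,B} to clique {B,C}: marginalize A out of phi_ab.
msg_b = {b: sum(phi_ab[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Absorbing the message calibrates {B,C}: its belief is now the joint P(B, C).
belief_bc = {(b, c): phi_bc[(b, c)] * msg_b[b] for (b, c) in phi_bc}

# After calibration both cliques agree on the separator marginal P(B) ...
p_b_from_bc = {b: belief_bc[(b, 0)] + belief_bc[(b, 1)] for b in (0, 1)}

# ... and any variable's marginal can be read off a clique containing it.
p_c = {c: belief_bc[(0, c)] + belief_bc[(1, c)] for c in (0, 1)}
```

This agreement on separator marginals is exactly what `calibrate()` establishes across the whole tree before `query()` reads answers off individual cliques.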

### Belief Propagation with Message Passing

Extended belief propagation with explicit control over individual messages.

```python { .api }
class BeliefPropagationWithMessagePassing:
    def __init__(self, model):
        """Initialize belief propagation with explicit message passing."""

    def send_messages(self, from_clique, to_clique):
        """Send a message between two specific cliques."""

    def get_messages(self):
        """Get all messages in the junction tree."""

    def calibrate_clique(self, clique):
        """Calibrate a specific clique."""
```

### Causal Inference

Algorithms for causal reasoning and treatment-effect estimation.

```python { .api }
class CausalInference:
    def __init__(self, model):
        """
        Initialize causal inference for a causal model.

        Parameters:
        - model: DiscreteBayesianNetwork representing causal relationships
        """

    def estimate_ate(self, treatment, outcome, common_causes=None,
                     effect_modifiers=None):
        """
        Estimate the Average Treatment Effect (ATE).

        Parameters:
        - treatment: name of the treatment variable
        - outcome: name of the outcome variable
        - common_causes: list of common-cause (confounder) variables
        - effect_modifiers: list of effect-modifier variables

        Returns:
        float: Estimated average treatment effect
        """

    def estimate_cate(self, treatment, outcome, common_causes=None,
                      effect_modifiers=None):
        """
        Estimate the Conditional Average Treatment Effect (CATE).

        Parameters:
        - treatment: treatment variable name
        - outcome: outcome variable name
        - common_causes: list of common causes
        - effect_modifiers: list of effect modifiers

        Returns:
        dict: CATE estimates for each modifier level
        """

    def backdoor_adjustment(self, treatment, outcome, backdoor_set):
        """
        Perform backdoor adjustment for causal effect estimation.

        Parameters:
        - treatment: treatment variable
        - outcome: outcome variable
        - backdoor_set: list of backdoor adjustment variables

        Returns:
        float: Causal effect estimate
        """

    def instrumental_variable_estimation(self, treatment, outcome, instrument):
        """
        Estimate the causal effect using an instrumental variable.

        Parameters:
        - treatment: treatment variable
        - outcome: outcome variable
        - instrument: instrumental variable

        Returns:
        float: IV-based causal effect estimate
        """

    def front_door_adjustment(self, treatment, outcome, mediator_set):
        """
        Perform front-door adjustment for causal inference.

        Parameters:
        - treatment: treatment variable
        - outcome: outcome variable
        - mediator_set: list of mediator variables

        Returns:
        float: Front-door causal effect estimate
        """
```
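
On small discrete models, the backdoor adjustment behind `backdoor_adjustment` and `estimate_ate` reduces to simple arithmetic. A sketch with made-up conditional probabilities for a binary confounded model Z → T, Z → Y, T → Y:

```python
# P(Z) and P(Y=1 | T, Z) for a toy confounded model (illustrative numbers).
p_z = {0: 0.5, 1: 0.5}
p_y1 = {(0, 0): 0.10, (1, 0): 0.30, (0, 1): 0.40, (1, 1): 0.80}  # key: (t, z)

def ate_backdoor():
    """ATE = sum_z [P(Y=1 | T=1, z) - P(Y=1 | T=0, z)] * P(z)."""
    return sum((p_y1[(1, z)] - p_y1[(0, z)]) * p_z[z] for z in p_z)
```

Here the adjusted effect is 0.3; the naive difference in observed outcome rates would instead mix the treatment effect with the confounder's influence.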

### Approximate Inference

Base class and algorithms for approximate inference when exact methods are intractable.

```python { .api }
class ApproxInference:
    def __init__(self, model):
        """
        Base class for approximate inference algorithms.

        Parameters:
        - model: probabilistic graphical model
        """

    def query(self, variables, evidence=None, n_samples=1000):
        """
        Approximate a probability query using sampling.

        Parameters:
        - variables: list of query variables
        - evidence: dict of evidence
        - n_samples: number of samples used for the approximation

        Returns:
        DiscreteFactor: Approximate marginal distribution
        """

    def map_query(self, variables=None, evidence=None, n_samples=1000):
        """
        Approximate a MAP query using sampling.

        Parameters:
        - variables: list of MAP variables
        - evidence: dict of evidence
        - n_samples: number of samples

        Returns:
        dict: Approximate MAP assignment
        """
```
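
The sampling idea behind an approximate `query` can be sketched without pgmpy: draw ancestral samples, discard those inconsistent with the evidence, and read the query marginal off the survivors. The chain model and CPD numbers below are illustrative:

```python
import random

def sample_chain(rng):
    """Draw one ancestral sample from a toy binary chain A -> B -> C."""
    a = 1 if rng.random() < 0.4 else 0
    b = 1 if rng.random() < (0.8 if a == 1 else 0.3) else 0
    c = 1 if rng.random() < (0.6 if b == 1 else 0.1) else 0
    return a, b, c

def approx_query_c1_given_a1(n_samples=20000, seed=0):
    """Estimate P(C=1 | A=1) by rejection: keep only samples where A == 1."""
    rng = random.Random(seed)
    kept = [c for (a, b, c) in (sample_chain(rng) for _ in range(n_samples)) if a == 1]
    return sum(kept) / len(kept)
```

For this model the exact answer is 0.5, and the estimate converges there as `n_samples` grows; likelihood weighting avoids discarding samples by weighting them instead.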

### Dynamic Bayesian Network Inference

Specialized inference routines for temporal (time-sliced) models.

```python { .api }
class DBNInference:
    def __init__(self, model):
        """
        Initialize inference for Dynamic Bayesian Networks.

        Parameters:
        - model: DynamicBayesianNetwork
        """

    def forward_inference(self, variables, evidence=None, n_time_slices=1):
        """
        Perform forward inference (filtering) over time.

        Parameters:
        - variables: list of query variables
        - evidence: dict of temporal evidence
        - n_time_slices: number of time slices

        Returns:
        list: Marginal distributions for each time slice
        """

    def backward_inference(self, variables, evidence=None, n_time_slices=1):
        """Perform backward inference (smoothing)."""

    def viterbi(self, evidence=None, n_time_slices=1):
        """
        Find the most likely state sequence using the Viterbi algorithm.

        Parameters:
        - evidence: temporal evidence
        - n_time_slices: sequence length

        Returns:
        list: Most likely state sequence
        """

    def particle_filter(self, evidence, n_particles=1000):
        """
        Perform particle filtering for state estimation.

        Parameters:
        - evidence: temporal evidence
        - n_particles: number of particles

        Returns:
        list: Particle-based state estimates
        """
```
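
Forward inference on a DBN generalizes the classic HMM filtering recursion: predict with the transition model, then update with the observation likelihood. A self-contained two-state sketch with illustrative parameters:

```python
# Two-state HMM: transition, emission, and prior (illustrative numbers).
trans = [[0.7, 0.3], [0.3, 0.7]]   # trans[i][j] = P(x_t=j | x_{t-1}=i)
emit = [[0.9, 0.1], [0.2, 0.8]]    # emit[i][o]  = P(obs=o | state=i)
prior = [0.5, 0.5]

def forward_filter(observations):
    """Return the filtered belief P(x_t | obs_1..t) for every time step."""
    beliefs, belief = [], prior
    for obs in observations:
        # Predict: push the current belief through the transition model.
        predicted = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # Update: weight by the observation likelihood, then normalize.
        unnorm = [predicted[j] * emit[j][obs] for j in range(2)]
        z = sum(unnorm)
        belief = [u / z for u in unnorm]
        beliefs.append(belief)
    return beliefs
```

Each returned belief sums to 1, and observing symbol 0 twice pushes the belief firmly toward state 0; `backward_inference` adds a symmetric backward pass for smoothing.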

### Max-Product Linear Programming

Approximate MAP inference using the max-product linear programming (MPLP) relaxation.

```python { .api }
class Mplp:
    def __init__(self, model):
        """
        Initialize Max-Product Linear Programming (MPLP) inference.

        Parameters:
        - model: MarkovNetwork or FactorGraph
        """

    def map_query(self, evidence=None, max_iter=100, tol=1e-6):
        """
        Find a MAP assignment using MPLP.

        Parameters:
        - evidence: dict of evidence
        - max_iter: maximum number of iterations
        - tol: convergence tolerance

        Returns:
        dict: MAP assignment
        """

    def get_dual_objective(self):
        """Get the dual objective value."""

    def get_primal_objective(self):
        """Get the primal objective value."""
```
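
MPLP trades exactness for scalability; on tiny models its target, the exact MAP assignment, can be found by brute force. A sketch on a two-variable pairwise Markov network with made-up potentials:

```python
from itertools import product

# Unary and pairwise potentials for binary X, Y (illustrative numbers).
unary_x = {0: 1.0, 1: 2.0}
unary_y = {0: 1.0, 1: 1.0}
edge = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 3.0}

def exact_map():
    """Brute-force MAP over all assignments -- the quantity MPLP approximates."""
    def score(x, y):
        return unary_x[x] * unary_y[y] * edge[(x, y)]
    return max(product((0, 1), repeat=2), key=lambda xy: score(*xy))
```

Here the MAP assignment is (1, 1) with score 6. On large models MPLP instead tightens a dual bound; when the dual and primal objectives meet, the relaxation is tight and the assignment is certified optimal.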

### Sampling-Based Inference

Inference algorithms based on Monte Carlo sampling.

```python { .api }
class BayesianModelSampling:
    def __init__(self, model):
        """
        Initialize sampling-based inference.

        Parameters:
        - model: DiscreteBayesianNetwork
        """

    def forward_sample(self, size=1, seed=None, include_latents=False,
                       partial_samples=None, show_progress=True):
        """
        Generate samples using forward (ancestral) sampling.

        Parameters:
        - size: number of samples
        - seed: random seed
        - include_latents: whether to include latent variables
        - partial_samples: pre-specified partial samples
        - show_progress: whether to show a progress bar

        Returns:
        pandas.DataFrame: Generated samples
        """

    def rejection_sample(self, evidence=[], size=1, seed=None,
                         include_latents=False, show_progress=True):
        """
        Generate samples using rejection sampling.

        Parameters:
        - evidence: list of evidence as State objects
        - size: number of samples
        - seed: random seed
        - include_latents: whether to include latent variables
        - show_progress: whether to show a progress bar

        Returns:
        pandas.DataFrame: Samples consistent with the evidence
        """

    def likelihood_weighted_sample(self, evidence=[], size=1, seed=None,
                                   include_latents=False, show_progress=True):
        """
        Generate weighted samples using likelihood weighting.

        Parameters:
        - evidence: list of evidence as State objects
        - size: number of samples
        - seed: random seed
        - include_latents: whether to include latent variables
        - show_progress: whether to show a progress bar

        Returns:
        pandas.DataFrame: Weighted samples with a 'weight' column
        """

class GibbsSampling:
    def __init__(self, model=None):
        """
        Initialize Gibbs sampling (MCMC).

        Parameters:
        - model: DiscreteBayesianNetwork or MarkovNetwork
        """

    def sample(self, start_state=None, size=1, seed=None, include_latents=False):
        """
        Generate samples using Gibbs sampling.

        Parameters:
        - start_state: initial state of the Markov chain
        - size: number of samples
        - seed: random seed
        - include_latents: whether to include latent variables

        Returns:
        pandas.DataFrame: MCMC samples
        """

    def generate_sample(self, start_state=None, size=1, seed=None, include_latents=False):
        """Generate single sample from current state."""
```
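
A minimal sketch of what Gibbs sampling does: repeatedly resample each variable from its conditional distribution given the rest. The unnormalized joint below over two correlated binary variables uses illustrative weights, not the pgmpy API:

```python
import random

# Unnormalized joint weights: x and y prefer to agree.
w = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

def gibbs(n_samples=20000, burn_in=1000, seed=0):
    """Run the Gibbs chain and return post-burn-in samples of (x, y)."""
    rng = random.Random(seed)
    x, y, samples = 0, 0, []
    for t in range(burn_in + n_samples):
        # Resample x from P(x | y), proportional to w[(x, y)].
        x = 1 if rng.random() < w[(1, y)] / (w[(0, y)] + w[(1, y)]) else 0
        # Resample y from P(y | x), proportional to w[(x, y)].
        y = 1 if rng.random() < w[(x, 1)] / (w[(x, 0)] + w[(x, 1)]) else 0
        if t >= burn_in:
            samples.append((x, y))
    return samples
```

Under these weights P(x = y) = 0.8 and P(x = 1) = 0.5, and the empirical frequencies converge there; pgmpy's `GibbsSampling.sample` returns the chain as a DataFrame instead.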

### Base Inference Class

Common interface shared by all inference algorithms.

```python { .api }
class Inference:
    def __init__(self, model):
        """
        Base class for all inference algorithms.

        Parameters:
        - model: probabilistic graphical model
        """

    def query(self, variables, evidence=None):
        """
        Abstract method for probability queries.

        Parameters:
        - variables: list of query variables
        - evidence: dict of evidence

        Returns:
        DiscreteFactor: Query result
        """

    def map_query(self, variables=None, evidence=None):
        """
        Abstract method for MAP queries.

        Parameters:
        - variables: list of MAP variables
        - evidence: dict of evidence

        Returns:
        dict: MAP assignment
        """
```

## Usage Examples

### Exact Inference with Variable Elimination

```python
from pgmpy.models import DiscreteBayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Build a small chain model: A -> B -> C
model = DiscreteBayesianNetwork([('A', 'B'), ('B', 'C')])
model.add_cpds(
    TabularCPD('A', 2, [[0.6], [0.4]]),
    TabularCPD('B', 2, [[0.7, 0.2], [0.3, 0.8]], evidence=['A'], evidence_card=[2]),
    TabularCPD('C', 2, [[0.9, 0.4], [0.1, 0.6]], evidence=['B'], evidence_card=[2]),
)

inference = VariableElimination(model)

# Query marginal probability P(C | A=1)
result = inference.query(variables=['C'], evidence={'A': 1})
print("P(C | A=1):")
print(result)

# Find MAP assignment for all unobserved variables
map_result = inference.map_query(evidence={'A': 1})
print("MAP assignment:", map_result)
```

### Belief Propagation on Junction Trees

```python
from pgmpy.inference import BeliefPropagation

# Initialize belief propagation (model as constructed above)
bp = BeliefPropagation(model)

# Calibrate the junction tree
bp.calibrate()

# Query after calibration
result = bp.query(['C'], evidence={'A': 1})
print("BP result:", result)
```

### Causal Inference

```python
from pgmpy.models import DiscreteBayesianNetwork
from pgmpy.inference import CausalInference

# Create a causal model: a confounder influences both treatment and outcome
causal_model = DiscreteBayesianNetwork([('Treatment', 'Outcome'),
                                        ('Confounder', 'Treatment'),
                                        ('Confounder', 'Outcome')])

# Initialize causal inference
causal_inf = CausalInference(causal_model)

# Estimate average treatment effect
ate = causal_inf.estimate_ate('Treatment', 'Outcome',
                              common_causes=['Confounder'])
print(f"Average Treatment Effect: {ate}")
```

### Sampling-Based Inference

```python
from pgmpy.sampling import BayesianModelSampling, GibbsSampling
from pgmpy.factors.discrete import State  # State(var, state) namedtuple used for evidence

# Forward sampling (model as constructed above)
sampler = BayesianModelSampling(model)
samples = sampler.forward_sample(size=1000)

# Rejection sampling with evidence
evidence_samples = sampler.rejection_sample(
    evidence=[State('A', 1)], size=100
)

# Gibbs sampling for MCMC
gibbs = GibbsSampling(model)
mcmc_samples = gibbs.sample(size=1000)
```