# Parameter Operations

Advanced functionality for parameter space transformations, log probability calculations, and gradient computations. These operations provide low-level access to Stan's automatic differentiation and its constraining/unconstraining transforms.

## Capabilities

### Parameter Transformation

Transform parameters between constrained and unconstrained parameter spaces.

```python { .api }
def constrain_pars(self, unconstrained_parameters: Sequence[float], include_tparams: bool = True, include_gqs: bool = True) -> Sequence[float]:
    """
    Transform a sequence of unconstrained parameters to their defined support,
    optionally including transformed parameters and generated quantities.

    Args:
        unconstrained_parameters: A sequence of unconstrained parameters
        include_tparams: Whether to include transformed parameters. Default: True
        include_gqs: Whether to include generated quantities. Default: True

    Returns:
        Sequence[float]: A sequence of constrained parameters, optionally
            including transformed parameters and generated quantities

    Notes:
        - The unconstrained parameters are passed to the write_array method of
          the model_base instance
        - See model_base.hpp in the Stan C++ library for details
        - Parameter order matches constrained_param_names when the include
          options are left at their defaults
    """

def unconstrain_pars(self, constrained_parameters: Sequence[float]) -> Sequence[float]:
    """
    Transform constrained parameters to the unconstrained scale.

    Converts parameters from their constrained (natural) scale to the
    unconstrained scale used internally by Stan's samplers. This is the inverse
    of the constraining transformation applied during sampling.

    Args:
        constrained_parameters: Sequence of parameter values on the constrained scale

    Returns:
        Sequence[float]: Parameter values on the unconstrained scale

    Notes:
        - Input parameter order matches constrained_param_names
        - Output can be used with log_prob and grad_log_prob
        - Transformations handle bounds, simplexes, correlation matrices, etc.
    """
```
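For intuition, the transform behind a declaration like `real<lower=0> sigma` is a log/exp change of variables. The following is an illustrative NumPy sketch of that idea (`unconstrain_lower` and `constrain_lower` are hypothetical helper names, not PyStan's implementation; Stan's actual transforms are specified in the Stan reference manual):

```python
import numpy as np

# Sketch of a lower-bound transform: unconstrained u = log(sigma - lb),
# constrained sigma = lb + exp(u). Hypothetical helpers for illustration only.
def unconstrain_lower(sigma, lb=0.0):
    return np.log(sigma - lb)

def constrain_lower(u, lb=0.0):
    return lb + np.exp(u)

sigma = 2.0
u = unconstrain_lower(sigma)    # log(2.0) ≈ 0.693
roundtrip = constrain_lower(u)  # back to 2.0
assert np.isclose(roundtrip, sigma)
```

The round trip is exact up to floating-point error, which is what makes `constrain_pars` and `unconstrain_pars` inverses of each other.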

### Log Probability Evaluation

Calculate log probability density and its gradient at specific parameter values.

```python { .api }
def log_prob(self, unconstrained_parameters: Sequence[float], adjust_transform: bool = True) -> float:
    """
    Calculate log probability density.

    Evaluates the log probability density of the model at the given parameter
    values. This includes the log likelihood and the log prior density.

    Args:
        unconstrained_parameters: Parameter values on the unconstrained scale
        adjust_transform: Whether to include the Jacobian adjustment for
            parameter transformations. Default: True

    Returns:
        float: Log probability density value

    Notes:
        - Parameters must be on the unconstrained scale (use unconstrain_pars if needed)
        - When adjust_transform=True, includes the Jacobian determinant for transformations
        - Used internally by sampling algorithms
        - Useful for model comparison and custom inference algorithms
    """

def grad_log_prob(self, unconstrained_parameters: Sequence[float]) -> Sequence[float]:
    """
    Calculate gradient of log probability density.

    Computes the gradient (first derivatives) of the log probability density
    with respect to the unconstrained parameters using Stan's automatic
    differentiation.

    Args:
        unconstrained_parameters: Parameter values on the unconstrained scale

    Returns:
        Sequence[float]: Gradient of the log probability density with respect to each parameter

    Notes:
        - Parameters must be on the unconstrained scale
        - The gradient is taken with respect to the unconstrained parameters
        - Used by gradient-based sampling algorithms like HMC-NUTS
        - Computed using Stan's reverse-mode automatic differentiation
        - The unconstrained parameters are passed to the log_prob_grad function in stan::model
    """
```
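The `adjust_transform` flag controls the Jacobian term introduced by the change of variables. A minimal sketch of why the two values differ, assuming a positive-constrained scale parameter `sigma = exp(u)` and a toy exponential prior (hypothetical helper functions, not PyStan's code): the adjusted and unadjusted densities differ by exactly `log|d sigma / d u| = u`.

```python
import numpy as np

def log_prob_constrained(sigma):
    # Toy log prior on the constrained scale: sigma ~ Exponential(1),
    # log density -sigma up to a constant. Illustrative only.
    return -sigma

def log_prob_unconstrained(u, adjust_transform=True):
    sigma = np.exp(u)
    lp = log_prob_constrained(sigma)
    if adjust_transform:
        lp += u  # log Jacobian of the exp transform: log|d sigma / d u| = u
    return lp

u = 0.5
diff = log_prob_unconstrained(u, True) - log_prob_unconstrained(u, False)
assert np.isclose(diff, u)  # adjustment is exactly the log Jacobian
```

This is why sampling happens with the adjustment on: without the Jacobian term, the density on the unconstrained scale would not correspond to the model's density on the constrained scale.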

## Usage Examples

### Parameter Space Transformations

```python
import stan

# Model with constrained parameters
program_code = """
parameters {
  real<lower=0> sigma;
  real<lower=-1, upper=1> rho;
  simplex[3] theta;
}
model {
  sigma ~ exponential(1);
  rho ~ normal(0, 0.5);
  theta ~ dirichlet(rep_vector(1, 3));
}
"""

model = stan.build(program_code)

# Example constrained parameter values
constrained_params = [
    2.0,            # sigma (positive)
    0.3,            # rho (between -1 and 1)
    0.4, 0.3, 0.3,  # theta (simplex: sums to 1)
]

# Transform to unconstrained scale
unconstrained_params = model.unconstrain_pars(constrained_params)
print(f"Unconstrained: {unconstrained_params}")

# Transform back to constrained scale
back_to_constrained = model.constrain_pars(unconstrained_params)
print(f"Back to constrained: {back_to_constrained}")
```

### Log Probability Evaluation

```python
import stan
import numpy as np

program_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ exponential(1);
  y ~ normal(mu, sigma);
}
"""

# Generate synthetic data
N = 50
y = np.random.normal(2.0, 1.5, N)
data = {'N': N, 'y': y.tolist()}

model = stan.build(program_code, data=data)

# Evaluate log probability at specific parameter values
constrained_params = [2.0, 1.5]  # mu=2.0, sigma=1.5
unconstrained_params = model.unconstrain_pars(constrained_params)

# Calculate log probability
log_prob = model.log_prob(unconstrained_params)
print(f"Log probability: {log_prob}")

# Calculate log probability without transformation adjustment
log_prob_no_adjust = model.log_prob(unconstrained_params, adjust_transform=False)
print(f"Log probability (no adjustment): {log_prob_no_adjust}")

# Calculate gradient
gradient = model.grad_log_prob(unconstrained_params)
print(f"Gradient: {gradient}")
```
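Custom inference code built on `log_prob`/`grad_log_prob` is easiest to trust after a finite-difference gradient check. The sketch below runs such a check against a hand-written stand-in for the normal model above (`toy_log_prob` and `toy_grad_log_prob` are hypothetical helpers, not PyStan APIs), so it executes without a compiled Stan program:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, 50)

def toy_log_prob(theta):
    # Hand-written log posterior on the unconstrained scale (mu, u), with
    # sigma = exp(u); mirrors the normal model above up to constants and
    # includes the Jacobian term for sigma = exp(u).
    mu, u = theta
    sigma = np.exp(u)
    lp = -0.5 * (mu / 10.0) ** 2                                  # mu ~ normal(0, 10)
    lp += -sigma                                                  # sigma ~ exponential(1)
    lp += np.sum(-0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma))  # y ~ normal(mu, sigma)
    lp += u                                                       # Jacobian of exp
    return lp

def toy_grad_log_prob(theta):
    # Analytic gradient of toy_log_prob
    mu, u = theta
    sigma = np.exp(u)
    d_mu = -mu / 100.0 + np.sum((y - mu) / sigma ** 2)
    d_sigma = -1.0 + np.sum((y - mu) ** 2 / sigma ** 3 - 1.0 / sigma)
    d_u = d_sigma * sigma + 1.0  # chain rule through sigma = exp(u), plus Jacobian term
    return np.array([d_mu, d_u])

# Central finite differences along each coordinate
theta = np.array([1.0, 0.2])
eps = 1e-6
fd = np.array([
    (toy_log_prob(theta + eps * e) - toy_log_prob(theta - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(fd, toy_grad_log_prob(theta), atol=1e-4)
```

The same check applies verbatim to a real model object: replace `toy_log_prob` with `model.log_prob` and `toy_grad_log_prob` with `model.grad_log_prob`.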

### Custom Optimization

```python
import stan
import numpy as np
from scipy.optimize import minimize

program_code = """
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  alpha ~ normal(0, 10);
  beta ~ normal(0, 10);
  sigma ~ exponential(1);
  y ~ normal(alpha + beta * x, sigma);
}
"""

# Generate regression data
N = 100
x = np.random.normal(0, 1, N)
y = 1.5 + 2.0 * x + np.random.normal(0, 0.8, N)

data = {'N': N, 'x': x.tolist(), 'y': y.tolist()}
model = stan.build(program_code, data=data)

# Define objective function and gradient for optimization
def neg_log_prob(unconstrained_params):
    return -model.log_prob(unconstrained_params)

def neg_grad_log_prob(unconstrained_params):
    # grad_log_prob returns a sequence, so negate elementwise via NumPy
    return -np.asarray(model.grad_log_prob(unconstrained_params))

# Starting point (unconstrained scale)
initial_constrained = [0.0, 0.0, 1.0]  # alpha, beta, sigma
initial_unconstrained = model.unconstrain_pars(initial_constrained)

# Optimize using gradient information
result = minimize(
    neg_log_prob,
    initial_unconstrained,
    jac=neg_grad_log_prob,
    method='BFGS'
)

# Transform result back to constrained scale
optimal_constrained = model.constrain_pars(result.x)
print(f"Optimal parameters: alpha={optimal_constrained[0]:.3f}, "
      f"beta={optimal_constrained[1]:.3f}, sigma={optimal_constrained[2]:.3f}")
```
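The optimize-then-constrain pattern works for any differentiable log density. As a self-contained illustration (a hypothetical one-parameter toy posterior, no compiled Stan model required), BFGS on the unconstrained scale finds the mode of a positive scale parameter, and `exp` maps it back to the constrained scale:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = rng.normal(0.0, 0.8, 200)

def toy_neg_log_prob(u):
    # Negative log density on the unconstrained scale for sigma = exp(u),
    # y ~ normal(0, sigma), flat prior, Jacobian term (+u) included.
    # Hypothetical stand-in for a model's log_prob, for illustration only.
    sigma = np.exp(u[0])
    return -(np.sum(-0.5 * (y / sigma) ** 2 - np.log(sigma)) + u[0])

result = minimize(toy_neg_log_prob, x0=[0.0], method="BFGS")
sigma_hat = np.exp(result.x[0])  # map back to the constrained scale

# Setting the derivative to zero gives sigma^2 = sum(y^2) / (N - 1)
assert np.isclose(sigma_hat ** 2, np.sum(y ** 2) / (len(y) - 1), rtol=1e-3)
```

Because the unconstrained scale is unbounded, the optimizer never has to worry about stepping outside the parameter's support.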

### Model Comparison

```python
import stan
import numpy as np

# Compare two models using log probability
program_code_1 = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 1);
  sigma ~ exponential(1);
  y ~ normal(mu, sigma);
}
"""

program_code_2 = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real<lower=0> lambda;
}
model {
  lambda ~ gamma(2, 1);
  y ~ exponential(lambda);
}
"""

# Data: positive (so the exponential likelihood is well-defined) but clearly not exponential
y = np.abs(np.random.normal(2.0, 1.0, 100))
data = {'N': len(y), 'y': y.tolist()}

# Build both models
model1 = stan.build(program_code_1, data=data)
model2 = stan.build(program_code_2, data=data)

# Evaluate at reasonable parameter values
params1_const = [2.0, 1.0]  # mu, sigma
params1_uncon = model1.unconstrain_pars(params1_const)
log_prob1 = model1.log_prob(params1_uncon)

params2_const = [0.5]  # lambda
params2_uncon = model2.unconstrain_pars(params2_const)
log_prob2 = model2.log_prob(params2_uncon)

print(f"Normal model log probability: {log_prob1:.2f}")
print(f"Exponential model log probability: {log_prob2:.2f}")
print(f"Normal model fits better: {log_prob1 > log_prob2}")
```
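Comparing `log_prob` at hand-picked parameter values is only a rough heuristic; evaluating each model's maximized log likelihood makes the comparison fairer. A self-contained sketch using closed-form MLEs (pure NumPy, independent of any compiled Stan model; the data generation mirrors the example above, kept positive so the exponential likelihood is well-defined):

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.abs(rng.normal(2.0, 1.0, 100))  # positive, but clearly not exponential

# Normal model: MLEs are the sample mean and (biased) sample std
mu_hat, sigma_hat = y.mean(), y.std()
normal_ll = np.sum(-0.5 * np.log(2 * np.pi * sigma_hat ** 2)
                   - 0.5 * ((y - mu_hat) / sigma_hat) ** 2)

# Exponential model: MLE is the reciprocal of the sample mean
lambda_hat = 1.0 / y.mean()
expon_ll = np.sum(np.log(lambda_hat) - lambda_hat * y)

assert normal_ll > expon_ll  # the normal model fits this data better
```

For a principled comparison on real problems, maximized (or marginal) likelihoods should additionally be penalized for model complexity, e.g. via AIC/BIC or cross-validation.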