# Security and Sandboxing

Security framework for safely executing untrusted templates by restricting access to dangerous Python operations and attributes. Includes configurable policies and safe alternatives for common operations.

## Capabilities

### Sandboxed Environment

Restricted template execution environment that prevents access to unsafe Python operations while maintaining template functionality.

```python { .api }
class SandboxedEnvironment(Environment):
    def __init__(self, **options):
        """
        Initialize sandboxed environment with security restrictions.

        Inherits all Environment parameters and adds security policies:
        - Restricts attribute access to safe attributes only
        - Limits the range() global to MAX_RANGE items
        - Prevents access to private/internal attributes
        - Blocks calls to callables flagged as unsafe (unsafe_callable / alters_data)

        Parameters:
            **options: Same keyword arguments as Environment
        """
```

Usage example. The sandbox restricts what template code can do (attribute access, unsafe calls, oversized ranges); it does not HTML-escape output by itself, so enable autoescape when untrusted values end up in HTML:

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2 import DictLoader

# Create sandboxed environment (autoescape handles HTML escaping,
# the sandbox handles unsafe attribute access and calls)
env = SandboxedEnvironment(
    autoescape=True,
    loader=DictLoader({
        'user_template': '''
Hello {{ user.name }}!
Your items: {{ items | join(', ') }}
Range: {{ range(5) | list }}
'''
    })
)

# Render untrusted input
template = env.from_string('Hello {{ name | upper }}!')
result = template.render(name='<script>alert("xss")</script>')
# Result: Hello &lt;SCRIPT&gt;ALERT(&#34;XSS&#34;)&lt;/SCRIPT&gt;!
```

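When a template does try to reach Python internals, the sandbox blocks the access and raises `SecurityError` as soon as the blocked value is used. A minimal check, reusing the `env` from above (`greet` is just a stand-in function for this sketch):

```python
from jinja2.exceptions import SecurityError

def greet():
    return 'hi'

try:
    # __globals__ is blocked; touching the blocked value raises SecurityError
    env.from_string('{{ fn.__globals__.keys() }}').render(fn=greet)
except SecurityError as exc:
    print(f'Blocked by sandbox: {exc}')
```
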
### Security Functions

Utility functions for implementing and customizing sandbox security policies.

```python { .api }
def safe_range(*args):
    """
    range() replacement whose result is capped at MAX_RANGE items.

    Parameters:
        *args: Same as the built-in range() function

    Returns:
        range: Range object with at most MAX_RANGE items

    Raises:
        OverflowError: If the range would exceed MAX_RANGE items
    """

def is_internal_attribute(obj, attr):
    """
    Check if an attribute is considered internal/private.

    Parameters:
        obj: Object being accessed
        attr: Attribute name

    Returns:
        bool: True if attribute should be blocked
    """

def modifies_known_mutable(obj, attr):
    """
    Check if accessing the attribute/method modifies a known mutable object.

    Parameters:
        obj: Object being accessed
        attr: Attribute/method name

    Returns:
        bool: True if access modifies the object
    """
```

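A quick look at how these helpers behave (plain Python calls, no template involved):

```python
from jinja2.sandbox import (
    safe_range,
    is_internal_attribute,
    modifies_known_mutable,
)

print(len(safe_range(10)))                     # 10
print(is_internal_attribute(str, '__mro__'))   # True: dunder attributes are internal
print(is_internal_attribute('abc', 'upper'))   # False: ordinary attribute
print(modifies_known_mutable([], 'append'))    # True: would mutate the list
print(modifies_known_mutable([], 'count'))     # False: read-only method
```
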
### Security Constants

Module-level constants that control sandbox limits and mark attributes that are always treated as unsafe. Double-underscore attributes are additionally rejected by is_internal_attribute() regardless of these sets.

```python { .api }
#: Maximum number of items a range may produce
MAX_RANGE = 100000

#: Unsafe function attributes (empty on current Jinja releases)
UNSAFE_FUNCTION_ATTRIBUTES = set()

#: Unsafe method attributes; function attributes are unsafe for methods too
UNSAFE_METHOD_ATTRIBUTES = set()

#: Unsafe generator attributes
UNSAFE_GENERATOR_ATTRIBUTES = {'gi_frame', 'gi_code'}

#: Unsafe attributes on coroutines
UNSAFE_COROUTINE_ATTRIBUTES = {'cr_frame', 'cr_code'}

#: Unsafe attributes on async generators
UNSAFE_ASYNC_GENERATOR_ATTRIBUTES = {'ag_code', 'ag_frame'}
```

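Inside a sandboxed template the `range` global is replaced with `safe_range`, so the limit is enforced at render time:

```python
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()

try:
    # 10 ** 7 items exceeds MAX_RANGE, so this fails fast instead of
    # building a huge list.
    env.from_string('{{ range(10 ** 7) | list | length }}').render()
except OverflowError as exc:
    print(exc)
```
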
## Custom Security Policies

### Extending Sandbox Security

Customize sandbox behavior by overriding security methods:

```python
from jinja2.sandbox import SandboxedEnvironment

class CustomSandboxedEnvironment(SandboxedEnvironment):
    def is_safe_attribute(self, obj, attr, value):
        """
        Override attribute safety checking.

        Parameters:
            obj: Object being accessed
            attr: Attribute name
            value: Attribute value

        Returns:
            bool: True if attribute access is safe
        """
        # Allow access to specific safe attributes
        if attr in ('safe_method', 'allowed_property'):
            return True

        # Block access to sensitive attributes
        if attr.startswith('_secret'):
            return False

        # Delegate to parent implementation
        return super().is_safe_attribute(obj, attr, value)

    def is_safe_callable(self, obj):
        """
        Override callable safety checking.

        Parameters:
            obj: Callable object

        Returns:
            bool: True if callable is safe to execute
        """
        # Allow specific safe functions
        if hasattr(obj, '__name__') and obj.__name__ in ('safe_func', 'allowed_func'):
            return True

        # Block dangerous callables
        if hasattr(obj, '__name__') and obj.__name__ in ('exec', 'eval', 'compile'):
            return False

        return super().is_safe_callable(obj)

# Usage
env = CustomSandboxedEnvironment()
```

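A quick check of the policy above; `Profile` is just a stand-in class for this sketch. Note that with the default `Undefined`, a blocked attribute renders as an empty string rather than leaking the value:

```python
class Profile:
    allowed_property = 'public profile data'
    _secret_token = 'do-not-leak'

env = CustomSandboxedEnvironment()

print(env.from_string('{{ p.allowed_property }}').render(p=Profile()))
# -> public profile data

print(env.from_string('{{ p._secret_token }}').render(p=Profile()))
# -> (empty output: the blocked attribute resolves to an undefined value)
```
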
### Allowlist-based Security

Implement strict allowlist-based security policies:

```python
from jinja2.sandbox import SandboxedEnvironment

class AllowlistSandboxedEnvironment(SandboxedEnvironment):
    # Define allowed attributes per type
    ALLOWED_ATTRIBUTES = {
        str: {'upper', 'lower', 'strip', 'split', 'join', 'replace'},
        list: {'append', 'extend', 'count', 'index'},
        dict: {'get', 'keys', 'values', 'items'},
        # Add more types as needed
    }

    # Define allowed callable names (checked for every call, including method calls)
    ALLOWED_FUNCTIONS = {
        'len', 'abs', 'min', 'max', 'sum', 'sorted', 'reversed'
    }

    def is_safe_attribute(self, obj, attr, value):
        obj_type = type(obj)
        allowed = self.ALLOWED_ATTRIBUTES.get(obj_type, set())
        return attr in allowed

    def is_safe_callable(self, obj):
        if hasattr(obj, '__name__'):
            return obj.__name__ in self.ALLOWED_FUNCTIONS
        return False
```

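With this policy, filters still work as usual (they are not routed through `is_safe_callable`), but any call whose `__name__` is not in `ALLOWED_FUNCTIONS` is rejected, including method calls on otherwise allowed attributes:

```python
from jinja2.exceptions import SecurityError

env = AllowlistSandboxedEnvironment()

# Filters are unaffected by is_safe_callable
print(env.from_string('{{ name | upper }}').render(name='alice'))   # ALICE

# str.upper is an allowed attribute, but 'upper' is not an allowed callable
try:
    env.from_string('{{ name.upper() }}').render(name='alice')
except SecurityError as exc:
    print(f'Blocked: {exc}')
```
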
### Context-aware Security

Implement security policies that consider template context:

```python
class ContextAwareSandboxedEnvironment(SandboxedEnvironment):
    def __init__(self, **options):
        super().__init__(**options)
        self.security_context = {
            'trusted_users': set(),
            'admin_mode': False
        }

    def is_safe_attribute(self, obj, attr, value):
        # More permissive for trusted users
        if self.security_context.get('admin_mode'):
            return not attr.startswith('__')

        # Strict checking for untrusted users
        return super().is_safe_attribute(obj, attr, value)

    def set_security_context(self, **context):
        """Update security context for current request."""
        self.security_context.update(context)

# Usage
env = ContextAwareSandboxedEnvironment()

# For trusted admin user
env.set_security_context(admin_mode=True)
template = env.from_string('{{ obj.admin_method() }}')

# For regular user
env.set_security_context(admin_mode=False)
template = env.from_string('{{ obj.safe_method() }}')
```

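A short render with a stand-in `Report` class. Because `security_context` is mutable state on a shared environment, it is usually safer to keep one environment per trust level rather than toggling the flag between requests:

```python
class Report:
    def admin_method(self):
        return 'full report'

    def safe_method(self):
        return 'summary'

env = ContextAwareSandboxedEnvironment()
env.set_security_context(admin_mode=True)
print(env.from_string('{{ obj.admin_method() }}').render(obj=Report()))
# -> full report
```
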
## Security Best Practices

### Input Validation

Validate and sanitize template inputs before rendering:

```python
import re

from jinja2.sandbox import SandboxedEnvironment

def sanitize_input(value):
    """Sanitize user input for safe template rendering."""
    if isinstance(value, str):
        # Remove potentially dangerous characters
        value = re.sub(r'[<>"\']', '', value)
        # Limit length
        value = value[:1000]
    return value

def safe_render(template_str, **context):
    """Safely render a template with a sanitized context."""
    env = SandboxedEnvironment(autoescape=True)

    # Keep only simple value types; only strings are rewritten,
    # other types pass through unchanged
    safe_context = {}
    for key, value in context.items():
        if isinstance(value, (str, int, float, bool, list, dict)):
            safe_context[key] = sanitize_input(value)

    template = env.from_string(template_str)
    return template.render(**safe_context)
```

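For example, the quotes and angle brackets are stripped before rendering, and autoescaping covers whatever remains:

```python
print(safe_render('Hello {{ name }}!', name='Ada "the Countess" <of Lovelace>'))
# -> Hello Ada the Countess of Lovelace!
```
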
### Template Source Validation

Validate template sources to prevent malicious template injection:

```python
import re

from jinja2.sandbox import SandboxedEnvironment

def validate_template_source(source):
    """
    Validate template source for security issues.

    Returns:
        tuple: (is_safe, issues) where is_safe is a bool and issues is a list of problems
    """
    issues = []

    # Check for dangerous patterns
    dangerous_patterns = [
        r'\{%\s*include\s+["\'][^"\']*/\.\./',  # Path traversal
        r'\{%\s*extends\s+["\'][^"\']*/\.\./',  # Path traversal
        r'__[a-zA-Z_]+__',                      # Dunder attributes
        r'\bexec\b|\beval\b|\bcompile\b',       # Dangerous functions
    ]

    for pattern in dangerous_patterns:
        if re.search(pattern, source):
            issues.append(f'Potentially dangerous pattern: {pattern}')

    # Check template size
    if len(source) > 50000:  # 50 KB limit
        issues.append('Template source too large')

    return len(issues) == 0, issues

# Usage
source = '{{ user.name }} - {{ items | length }}'
is_safe, issues = validate_template_source(source)
if is_safe:
    env = SandboxedEnvironment()
    template = env.from_string(source)
else:
    print(f'Template validation failed: {issues}')
```

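A source that probes Python internals is flagged before it ever reaches the sandbox:

```python
bad_source = "{{ ''.__class__.__mro__ }}"
is_safe, issues = validate_template_source(bad_source)
print(is_safe)   # False
print(issues)    # ['Potentially dangerous pattern: __[a-zA-Z_]+__']
```
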
### Resource Limiting

Implement resource limits to prevent denial-of-service attacks:

```python
import signal
from contextlib import contextmanager

from jinja2.sandbox import SandboxedEnvironment

class ResourceLimitedSandboxedEnvironment(SandboxedEnvironment):
    def __init__(self, max_execution_time=5, **options):
        super().__init__(**options)
        self.max_execution_time = max_execution_time

    @contextmanager
    def execution_timeout(self):
        """Limit execution time via SIGALRM (Unix only, main thread only)."""
        def timeout_handler(signum, frame):
            raise TimeoutError('Template execution timeout')

        old_handler = signal.signal(signal.SIGALRM, timeout_handler)
        signal.alarm(self.max_execution_time)

        try:
            yield
        finally:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, old_handler)

    def render_with_limits(self, template_str, **context):
        """Render template with a wall-clock time limit."""
        template = self.from_string(template_str)

        with self.execution_timeout():
            return template.render(**context)

# Usage
env = ResourceLimitedSandboxedEnvironment(max_execution_time=3)
try:
    # A deliberately expensive nested loop: each range stays under
    # MAX_RANGE, but the combined work can exceed the time limit.
    result = env.render_with_limits(
        '{% for i in range(10000) %}{% for j in range(10000) %}{% endfor %}{% endfor %}'
    )
except TimeoutError:
    print('Template execution took too long')
```

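`SIGALRM` only works on Unix and only in the main thread. A more portable (if heavier) sketch is to render in a child process and abandon it on timeout; `render_with_process_timeout` is a hypothetical helper, not part of Jinja:

```python
import multiprocessing as mp
import queue

from jinja2.sandbox import SandboxedEnvironment

def _render(template_str, context, result_queue):
    env = SandboxedEnvironment()
    result_queue.put(env.from_string(template_str).render(**context))

def render_with_process_timeout(template_str, context, timeout=3):
    """Render in a child process; give up if no result arrives in time."""
    result_queue = mp.Queue()
    proc = mp.Process(target=_render, args=(template_str, context, result_queue))
    proc.start()
    try:
        # Waiting on the result (not on process exit) also covers the case
        # where the child dies without producing output.
        return result_queue.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError('Template execution timeout') from None
    finally:
        if proc.is_alive():
            proc.terminate()
        proc.join()
```
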
## Types

```python { .api }
class ImmutableSandboxedEnvironment(SandboxedEnvironment):
    """
    Works like SandboxedEnvironment, but additionally rejects methods
    that modify built-in mutable types (list.append, dict.update,
    set.add, ...), as detected by modifies_known_mutable().
    """

class SecurityError(TemplateRuntimeError):
    """
    Raised when a sandboxed template attempts an unsafe operation, such
    as calling an unsafe callable. Importable from jinja2.exceptions.
    """
```

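A short demonstration of both types together:

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

env = ImmutableSandboxedEnvironment()

try:
    # Mutating a built-in container is rejected in the immutable sandbox.
    env.from_string('{{ items.append(4) }}').render(items=[1, 2, 3])
except SecurityError as exc:
    print(f'Blocked: {exc}')
```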