Allure integration for the Behave BDD testing framework, providing comprehensive test reporting and visualization.
The hooks integration provides an alternative to the formatter-based integration that works with parallel execution and custom environment setups. It automatically wraps your existing behave hooks to add Allure reporting capabilities.
Main public API function that sets up Allure reporting via behave hooks. Call this function at the module level in your environment.py file.
```python
def allure_report(result_dir="allure_results"):
    """
    Set up Allure reporting using behave hooks integration.

    This function automatically wraps existing behave hooks in the calling
    scope and adds new hooks if they don't exist. Must be called at module
    level.

    Parameters:
    - result_dir (str): Directory where Allure results will be written.
      Defaults to "allure_results".

    Returns:
        None

    Side Effects:
    - Modifies hook functions in the calling frame's locals
    - Registers Allure plugins with allure_commons
    - Creates the result directory if it doesn't exist
    """
```

Internal class that implements the actual behave hook methods for Allure integration. This class is used internally by `allure_report()` and typically doesn't need to be used directly.
```python
class AllureHooks:
    def __init__(self, result_dir):
        """
        Initialize Allure hooks integration.

        Parameters:
        - result_dir (str): Directory for Allure result files
        """

    def after_all(self, context):
        """
        Clean up Allure plugins after all tests complete.

        Parameters:
        - context: Behave context object
        """

    def before_feature(self, context, feature):
        """
        Start processing a new feature file.

        Parameters:
        - context: Behave context object
        - feature: Behave feature object
        """

    def after_feature(self, context, feature):
        """
        Complete processing of a feature file.

        Parameters:
        - context: Behave context object
        - feature: Behave feature object
        """

    def before_scenario(self, context, scenario):
        """
        Start processing a scenario.

        Parameters:
        - context: Behave context object
        - scenario: Behave scenario object
        """

    def after_scenario(self, context, scenario):
        """
        Complete processing of a scenario.

        Parameters:
        - context: Behave context object
        - scenario: Behave scenario object
        """

    def before_step(self, context, step):
        """
        Start processing a step.

        Parameters:
        - context: Behave context object
        - step: Behave step object
        """

    def after_step(self, context, step):
        """
        Complete processing of a step.

        Parameters:
        - context: Behave context object
        - step: Behave step object with execution result
        """
```

Create or modify your environment.py file:
```python
# environment.py
from allure_behave.hooks import allure_report

# Set up Allure reporting - call this at module level
allure_report("allure_results")

# Your existing hooks will be automatically wrapped
def before_all(context):
    # Your setup code
    print("Setting up test environment")

def after_all(context):
    # Your cleanup code
    print("Cleaning up test environment")

def before_feature(context, feature):
    # Your feature setup
    context.feature_data = {}

def after_feature(context, feature):
    # Your feature cleanup
    del context.feature_data
```

To write results to a custom directory instead:

```python
# environment.py
from allure_behave.hooks import allure_report

# Custom result directory
allure_report("/path/to/custom/results")

# Rest of your environment setup...
```

For use with behave-parallel or similar tools:
```python
# environment.py
import os

from allure_behave.hooks import allure_report

# Use a unique directory per process for parallel execution
process_id = os.getpid()
allure_report(f"allure_results_{process_id}")

def before_all(context):
    # Process-specific setup
    context.process_id = process_id
```

The hooks integration automatically wraps your existing hooks:
```python
# environment.py
from allure_behave.hooks import allure_report

# Set up Allure first
allure_report("results")

# Your existing hooks are automatically wrapped
def before_scenario(context, scenario):
    # Your code runs first
    setup_scenario_data(context, scenario)
    # Then Allure processing happens automatically

def after_scenario(context, scenario):
    # Your code runs first
    cleanup_scenario_data(context, scenario)
    # Then Allure processing happens automatically
```

Enable Allure reporting conditionally:
```python
# environment.py
import os

from allure_behave.hooks import allure_report

# Only enable Allure in CI or when explicitly requested
if os.getenv('CI') or os.getenv('ALLURE_RESULTS_DIR'):
    result_dir = os.getenv('ALLURE_RESULTS_DIR', 'allure_results')
    allure_report(result_dir)

def before_all(context):
    context.allure_enabled = bool(os.getenv('CI') or os.getenv('ALLURE_RESULTS_DIR'))
```

The `allure_report()` function automatically handles hook wrapping: calling `allure_report()` multiple times in the same module is safe and won't create duplicate wrappers.

The hooks integration uses thread-local storage for shared state, making it safe for parallel execution scenarios where multiple processes or threads run tests simultaneously.
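The frame-inspection technique behind this behaviour can be illustrated with a simplified, hypothetical sketch. This is not the library's actual code: `wrap_hooks`, `HOOK_NAMES`, and the `report` callback are invented for illustration, standing in for the real Allure reporting machinery.

```python
import inspect

# Hypothetical sketch (NOT allure-behave's implementation): replace
# same-named behave hooks in the caller's scope with wrappers that run
# the user's hook first, then perform reporting.
HOOK_NAMES = ("before_all", "after_all", "before_scenario", "after_scenario")

def wrap_hooks(report):
    # At module level, f_back.f_locals is the calling module's namespace,
    # which is why this must be called at module level.
    scope = inspect.currentframe().f_back.f_locals
    for name in HOOK_NAMES:
        original = scope.get(name)
        if getattr(original, "_allure_wrapped", False):
            continue  # already wrapped, so calling wrap_hooks() twice is safe

        def make_wrapper(orig, hook_name):
            def wrapper(*args, **kwargs):
                if orig is not None:
                    orig(*args, **kwargs)  # the user's hook runs first
                report(hook_name)          # then reporting happens
            wrapper._allure_wrapped = True
            return wrapper

        scope[name] = make_wrapper(original, name)

# Usage at module level, mimicking environment.py:
events = []

def before_scenario(context, scenario):
    events.append("user hook")

wrap_hooks(events.append)
wrap_hooks(events.append)   # second call is a no-op
before_scenario(None, None)
print(events)               # ['user hook', 'before_scenario']
```

Note that hooks missing from the scope (here `before_all`, `after_all`, `after_scenario`) are added fresh, matching the documented "adds new hooks if they don't exist" behaviour.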
| Feature | Hooks Integration | Formatter Integration |
|---|---|---|
| Parallel Execution | ✅ Supported | ❌ Not supported |
| Custom Environment | ✅ Full control | ⚠️ Limited |
| Setup Complexity | Medium | Low |
| CLI Integration | Requires environment.py | Direct |
| Performance | Slightly slower | Faster |
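Because only the hooks integration supports parallel execution, runs that write one result directory per process (as in the parallel example above) usually need their results combined before report generation. Below is a minimal, hypothetical helper; `merge_results` and the directory names are assumptions, not part of allure-behave. Allure result file names typically contain unique IDs, so copying them into one directory should be collision-free.

```python
import shutil
from pathlib import Path

# Hypothetical helper (not part of allure-behave): copy per-process Allure
# result files (e.g. from allure_results_<pid> directories) into a single
# directory that report generation can consume.
def merge_results(pattern="allure_results_*", target="allure_results_merged"):
    merged = Path(target)
    merged.mkdir(exist_ok=True)
    for result_dir in sorted(Path(".").glob(pattern)):
        if result_dir == merged or not result_dir.is_dir():
            continue  # skip the target itself and stray files
        for item in result_dir.iterdir():
            if item.is_file():
                shutil.copy(item, merged / item.name)
    return merged
```

Run it from the project root after all workers finish, then point the Allure report generation at the merged directory.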
Install with Tessl CLI:

```shell
npx tessl i tessl/pypi-allure-behave
```