Allure pytest integration that generates comprehensive test reports with rich metadata and visual test execution tracking
Filter and select tests for execution based on Allure labels, metadata, and test plans. This enables targeted test runs and organized test execution strategies.
Filter tests by severity levels to run only tests of specific importance or priority.
--allure-severities SEVERITIES_SET
"""
Run tests that have at least one of the specified severity labels.
Parameters:
- SEVERITIES_SET: Comma-separated list of severity names
Valid severity values:
- blocker: Highest priority, blocks major functionality
- critical: High priority, affects core features
- normal: Standard priority (default if no severity specified)
- minor: Low priority, minor issues
- trivial: Lowest priority, cosmetic issues
Usage:
pytest --allure-severities blocker,critical tests/
"""
Filter tests using Behavior-Driven Development (BDD) labels for organizing tests by epics, features, and stories.
--allure-epics EPICS_SET
"""
Run tests that have at least one of the specified epic labels.
Parameters:
- EPICS_SET: Comma-separated list of epic names
Usage:
pytest --allure-epics "User Management,Payment Processing" tests/
"""
--allure-features FEATURES_SET
"""
Run tests that have at least one of the specified feature labels.
Parameters:
- FEATURES_SET: Comma-separated list of feature names
Usage:
pytest --allure-features "Login,Registration,Password Reset" tests/
"""
--allure-stories STORIES_SET
"""
Run tests that have at least one of the specified story labels.
Parameters:
- STORIES_SET: Comma-separated list of story names
Usage:
pytest --allure-stories "User can login,User can logout" tests/
"""
Filter tests by test case IDs for running specific test cases or test subsets.
--allure-ids IDS_SET
"""
Run tests that have at least one of the specified ID labels.
Parameters:
- IDS_SET: Comma-separated list of test IDs
Usage:
pytest --allure-ids TC001,TC002,TC005 tests/
"""
Filter tests by custom labels for flexible test organization and selection strategies.
--allure-label LABEL_NAME=values
"""
Run tests that have at least one of the specified labels.
Parameters:
- LABEL_NAME: Name of the custom label type
- values: Comma-separated list of label values
Can be specified multiple times for different label types.
Usage:
pytest --allure-label component=auth,payments --allure-label priority=P1 tests/
"""
# Run only critical and blocker tests
pytest --allure-severities critical,blocker tests/
# Run tests for specific features
pytest --allure-features "User Authentication,Profile Management" tests/
# Run specific test cases by ID
pytest --allure-ids TC001,TC002,TC003 tests/
# Combine multiple filter types (within each option, values use OR logic: a test needs at least one of them)
pytest \
--allure-severities critical,blocker \
--allure-features "Payment Processing" \
--allure-label component=backend \
tests/
# Custom labels for complex filtering
pytest \
--allure-label priority=P1,P2 \
--allure-label team=auth \
--allure-label environment=staging \
tests/
# Epic-level testing
pytest --allure-epics "E-commerce Platform" tests/
# Feature development cycle
pytest --allure-features "Shopping Cart,Checkout Process" tests/
# Story-level validation
pytest --allure-stories "User can add items to cart,User can remove items from cart" tests/
Tests must have appropriate labels assigned to be selected by these filters:
import pytest
import allure
# Using allure decorators
@allure.severity(allure.severity_level.CRITICAL)
@allure.epic("User Management")
@allure.feature("Authentication")
@allure.story("User Login")
@allure.testcase("TC001")
def test_user_login():
    pass
# Using pytest markers
@pytest.mark.allure_label("critical", label_type="severity")
@pytest.mark.allure_label("auth", label_type="component")
@pytest.mark.allure_label("P1", label_type="priority")
def test_password_reset():
    pass
Install with Tessl CLI
npx tessl i tessl/pypi-allure-pytest