tessl/pypi-green

Green is a clean, colorful, fast python test runner.

JUnit XML Reporting

Green can generate JUnit XML reports of its test runs, so any CI/CD system that consumes JUnit-formatted results (Jenkins, GitHub Actions, GitLab CI, Azure Pipelines, and others) can display and track them.

Capabilities

JUnit XML Generator

Main class for generating JUnit XML reports from Green test results.

class JUnitXML:
    """
    JUnit XML report generator for Green test results.
    
    Generates XML reports compatible with CI/CD systems like Jenkins,
    GitHub Actions, GitLab CI, Azure Pipelines, and other systems
    that consume JUnit XML format.
    """
    
    @staticmethod
    def save_as(test_results, destination):
        """
        Generate and save JUnit XML report from test results.
        
        Args:
            test_results: Green test results object containing test outcomes
            destination: File-like object or file path for XML output
        
        Generates XML report with comprehensive test metadata including:
        - Test suite organization
        - Individual test results (pass/fail/error/skip)
        - Timing information
        - Error details and stack traces
        - Test properties and metadata
        
        Example:
            from green.junit import JUnitXML
            from green.runner import run
            from green.loader import GreenTestLoader
            from green.output import GreenStream
            import sys
            
            # Run tests (args comes from green.config)
            from green.config import parseArguments, mergeConfig
            args = mergeConfig(parseArguments(['tests/']))
            loader = GreenTestLoader()
            suite = loader.loadTargets(['tests/'])
            stream = GreenStream(sys.stdout)
            result = run(suite, stream, args)
            
            # Generate JUnit XML report
            with open('test-results.xml', 'w') as f:
                JUnitXML.save_as(result, f)
        """

JUnit Dialect

XML element name constants for JUnit XML formatting.

class JUnitDialect:
    """
    Constants for JUnit XML element names and attributes.
    
    Defines the XML structure and naming conventions used in
    JUnit XML reports for consistent formatting.
    """
    # XML element and attribute names used in JUnit format
    # (Implementation details for XML generation)

Test Verdict Enumeration

Enumeration of possible test outcomes for JUnit reporting.

class Verdict:
    """
    Enumeration of test verdicts for JUnit XML reporting.
    
    Defines the possible outcomes for individual tests in JUnit format.
    """
    PASSED = "passed"
    FAILED = "failed"  
    ERROR = "error"
    SKIPPED = "skipped"
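
The verdict constants are plain strings, so they work directly as dictionary keys when summarizing a run. A minimal sketch (the `Verdict` stand-in below mirrors the values shown above so the snippet runs without green installed; with green available you would `from green.junit import Verdict` instead):

```python
# Stand-in mirroring the Verdict values shown above (assumption: with green
# installed, import the real class via `from green.junit import Verdict`).
class Verdict:
    PASSED = "passed"
    FAILED = "failed"
    ERROR = "error"
    SKIPPED = "skipped"

def tally(verdicts):
    """Count how many tests ended in each verdict."""
    counts = {v: 0 for v in (Verdict.PASSED, Verdict.FAILED, Verdict.ERROR, Verdict.SKIPPED)}
    for verdict in verdicts:
        counts[verdict] += 1
    return counts

print(tally([Verdict.PASSED, Verdict.PASSED, Verdict.FAILED, Verdict.SKIPPED]))
# {'passed': 2, 'failed': 1, 'error': 0, 'skipped': 1}
```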

Usage Examples

Basic JUnit XML Generation

from green.config import parseArguments, mergeConfig
from green.loader import GreenTestLoader
from green.runner import run
from green.output import GreenStream
import sys

# Configure for JUnit XML output
args = parseArguments(['tests/', '--junit-report', 'test-results.xml'])
config = mergeConfig(args)

# Run tests
loader = GreenTestLoader()
suite = loader.loadTargets(config.targets)
stream = GreenStream(sys.stdout)
result = run(suite, stream, config)

# JUnit XML report is automatically generated at specified path
print(f"JUnit XML report saved to: {config.junit_report}")

Programmatic JUnit XML Generation

from green.junit import JUnitXML
from green.runner import run
from green.loader import GreenTestLoader
from green.output import GreenStream
from green.config import parseArguments, mergeConfig
import sys
import io

# Run tests
args = mergeConfig(parseArguments(['tests/']))
loader = GreenTestLoader()
suite = loader.loadTargets(['tests/'])
stream = GreenStream(sys.stdout)
result = run(suite, stream, args)

# Generate JUnit XML to string
xml_output = io.StringIO()
JUnitXML.save_as(result, xml_output)
xml_content = xml_output.getvalue()

# Save to file
with open('custom-results.xml', 'w') as f:
    f.write(xml_content)

print("JUnit XML report generated programmatically")

CI/CD Integration Examples

GitHub Actions

# .github/workflows/test.yml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: '3.9'
    
    - name: Install dependencies
      run: |
        pip install -r requirements.txt
        pip install green
    
    - name: Run tests with Green
      run: |
        green --junit-report test-results.xml --run-coverage tests/
    
    - name: Upload test results
      uses: actions/upload-artifact@v4
      if: always()
      with:
        name: test-results
        path: test-results.xml
    
    - name: Publish test results
      uses: dorny/test-reporter@v1
      if: always()
      with:
        name: Test Results
        path: test-results.xml
        reporter: java-junit

Jenkins Pipeline

// Jenkinsfile
pipeline {
    agent any
    
    stages {
        stage('Test') {
            steps {
                sh '''
                    pip install -r requirements.txt
                    pip install green
                    green --junit-report test-results.xml --run-coverage tests/
                    coverage html  # build htmlcov/ for publishHTML below
                '''
            }
            post {
                always {
                    junit 'test-results.xml'
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: 'htmlcov',
                        reportFiles: 'index.html',
                        reportName: 'Coverage Report'
                    ])
                }
            }
        }
    }
}

GitLab CI

# .gitlab-ci.yml
test:
  stage: test
  image: python:3.9
  before_script:
    - pip install -r requirements.txt
    - pip install green
  script:
    - green --junit-report test-results.xml --run-coverage tests/
    - coverage html  # build htmlcov/ for the artifacts below
  artifacts:
    when: always
    reports:
      junit: test-results.xml
    paths:
      - htmlcov/
    expire_in: 1 week

Azure Pipelines

# azure-pipelines.yml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.9'
    addToPath: true

- script: |
    pip install -r requirements.txt
    pip install green
  displayName: 'Install dependencies'

- script: |
    green --junit-report test-results.xml --run-coverage tests/
    coverage xml  # produce coverage.xml for PublishCodeCoverageResults below
  displayName: 'Run tests'

- task: PublishTestResults@2
  condition: always()
  inputs:
    testResultsFiles: 'test-results.xml'
    testRunTitle: 'Python Tests'

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: 'coverage.xml'
    reportDirectory: 'htmlcov'

Custom JUnit XML Processing

from green.junit import JUnitXML
import xml.etree.ElementTree as ET
import sys

import green

class CustomJUnitXML(JUnitXML):
    """Custom JUnit XML generator with additional metadata."""
    
    @staticmethod
    def save_as(test_results, destination):
        """Generate JUnit XML with custom enhancements.
        
        Note: destination must be a file path here (not a file-like
        object), so the report can be re-read and rewritten in place.
        """
        # Generate standard JUnit XML first
        JUnitXML.save_as(test_results, destination)
        
        # Parse and enhance the XML
        tree = ET.parse(destination)
        root = tree.getroot()
        
        # Add custom properties
        properties = ET.SubElement(root, 'properties')
        
        # Add environment information
        env_prop = ET.SubElement(properties, 'property')
        env_prop.set('name', 'python.version')
        env_prop.set('value', sys.version)
        
        # Add Green version
        green_prop = ET.SubElement(properties, 'property')
        green_prop.set('name', 'green.version')
        green_prop.set('value', green.__version__)
        
        # Save enhanced XML
        tree.write(destination, encoding='utf-8', xml_declaration=True)

# Usage
result = run(suite, stream, args)
CustomJUnitXML.save_as(result, 'enhanced-results.xml')

JUnit XML Analysis

import xml.etree.ElementTree as ET
from green.junit import Verdict

def analyze_junit_xml(xml_file):
    """Analyze JUnit XML report and extract metrics."""
    tree = ET.parse(xml_file)
    root = tree.getroot()
    
    # Extract test suite metrics
    total_tests = int(root.get('tests', 0))
    failures = int(root.get('failures', 0))
    errors = int(root.get('errors', 0))
    skipped = int(root.get('skipped', 0))
    time = float(root.get('time', 0))
    
    # Calculate derived metrics
    passed = total_tests - failures - errors - skipped
    success_rate = (passed / total_tests * 100) if total_tests > 0 else 0
    
    print(f"Test Results Analysis:")
    print(f"  Total Tests: {total_tests}")
    print(f"  Passed: {passed}")
    print(f"  Failed: {failures}")
    print(f"  Errors: {errors}")
    print(f"  Skipped: {skipped}")
    print(f"  Success Rate: {success_rate:.1f}%")
    print(f"  Total Time: {time:.3f}s")
    
    # Analyze individual test cases
    slow_tests = []
    failed_tests = []
    
    for testcase in root.findall('.//testcase'):
        test_name = f"{testcase.get('classname')}.{testcase.get('name')}"
        test_time = float(testcase.get('time', 0))
        
        # Track slow tests (>1 second)
        if test_time > 1.0:
            slow_tests.append((test_name, test_time))
        
        # Track failed tests
        if testcase.find('failure') is not None or testcase.find('error') is not None:
            failed_tests.append(test_name)
    
    if slow_tests:
        print(f"\nSlow Tests (>1s):")
        for test_name, test_time in sorted(slow_tests, key=lambda x: x[1], reverse=True):
            print(f"  {test_name}: {test_time:.3f}s")
    
    if failed_tests:
        print(f"\nFailed Tests:")
        for test_name in failed_tests:
            print(f"  {test_name}")
    
    return {
        'total': total_tests,
        'passed': passed,
        'failed': failures,
        'errors': errors,
        'skipped': skipped,
        'success_rate': success_rate,
        'total_time': time,
        'slow_tests': slow_tests,
        'failed_tests': failed_tests
    }

# Usage
metrics = analyze_junit_xml('test-results.xml')

JUnit XML Format

Standard JUnit XML Structure

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="Green Test Suite" 
           tests="10" 
           failures="1" 
           errors="0" 
           skipped="1" 
           time="2.345">
    
    <properties>
        <property name="platform" value="linux"/>
        <property name="python.version" value="3.9.7"/>
    </properties>
    
    <testcase classname="tests.test_auth.AuthTest" 
              name="test_login_success" 
              time="0.123"/>
    
    <testcase classname="tests.test_auth.AuthTest" 
              name="test_login_failure" 
              time="0.456">
        <failure message="AssertionError: Login should fail" 
                 type="AssertionError">
            <![CDATA[
            Traceback (most recent call last):
              File "tests/test_auth.py", line 25, in test_login_failure
                self.assertFalse(result.success)
            AssertionError: Login should fail
            ]]>
        </failure>
    </testcase>
    
    <testcase classname="tests.test_models.UserTest" 
              name="test_user_creation" 
              time="0.089">
        <skipped message="Database not available"/>
    </testcase>
    
</testsuite>
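
The structure above can be consumed with nothing but the standard library. A short sketch, using a trimmed copy of the sample report:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the sample report above.
SAMPLE = """\
<testsuite name="Green Test Suite" tests="10" failures="1" errors="0" skipped="1" time="2.345">
    <testcase classname="tests.test_auth.AuthTest" name="test_login_success" time="0.123"/>
    <testcase classname="tests.test_auth.AuthTest" name="test_login_failure" time="0.456">
        <failure message="AssertionError: Login should fail" type="AssertionError"/>
    </testcase>
</testsuite>
"""

root = ET.fromstring(SAMPLE)
print(root.get("tests"), root.get("failures"))   # 10 1

# The first <failure> child anywhere under a <testcase>
failure = root.find("./testcase/failure")
print(failure.get("type"))                        # AssertionError
```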

Green-Specific Enhancements

Green's JUnit XML reports include:

  • Accurate timing: Precise test execution times
  • Proper test organization: Test suites organized by module/class hierarchy
  • Detailed error information: Complete stack traces and error messages
  • Test metadata: Test descriptions, docstrings, and properties
  • Process information: Parallel execution metadata when applicable
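
The metadata mentioned above lands in `<property>` elements, which are easy to collect into a dictionary. A sketch with a hypothetical report fragment (the element names follow the standard structure shown earlier; the specific property names are illustrative):

```python
import xml.etree.ElementTree as ET

# Hypothetical report fragment carrying metadata properties.
REPORT = """\
<testsuite name="suite" tests="1" time="0.1">
  <properties>
    <property name="platform" value="linux"/>
    <property name="python.version" value="3.9.7"/>
  </properties>
  <testcase classname="tests.test_x.XTest" name="test_ok" time="0.1"/>
</testsuite>
"""

root = ET.fromstring(REPORT)
# Collect every <property> name/value pair, wherever it appears
metadata = {p.get("name"): p.get("value") for p in root.iter("property")}
print(metadata["python.version"])  # 3.9.7
```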

Best Practices

File Naming and Organization

  • Use descriptive XML file names: test-results.xml, unit-test-results.xml
  • Store reports in consistent locations for CI/CD pickup
  • Include timestamps in file names for historical tracking

CI/CD Integration

  • Always generate JUnit XML in CI/CD pipelines
  • Use if: always() conditions to generate reports even on test failures
  • Configure proper artifact retention for test reports
  • Set up test result visualization in your CI/CD platform

Report Analysis

  • Monitor test execution times for performance regressions
  • Track test success rates over time
  • Identify flaky tests through historical analysis
  • Use reports to guide test optimization efforts
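
The failed-test lists extracted by `analyze_junit_xml` above can feed a simple flakiness check across historical runs: a test that failed in some runs but not all is a candidate flake. A minimal sketch (test names are hypothetical):

```python
def find_intermittent(failed_per_run):
    """Flag tests that failed in some runs but not in every run."""
    union = set().union(*failed_per_run)                  # failed at least once
    always = set.intersection(*map(set, failed_per_run))  # failed every time
    return sorted(union - always)

# One set of failed-test names per historical run
runs = [
    {"tests.test_net.NetTest.test_retry"},
    set(),
    {"tests.test_net.NetTest.test_retry", "tests.test_db.DbTest.test_tx"},
]
print(find_intermittent(runs))
# ['tests.test_db.DbTest.test_tx', 'tests.test_net.NetTest.test_retry']
```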

Install with Tessl CLI

npx tessl i tessl/pypi-green
