tessl/pypi-green

Green is a clean, colorful, fast python test runner.


Test Results and Reporting

Green's test result system provides detailed result collection, aggregation across multiple worker processes, and support for several output formats with rich metadata and timing information.

Capabilities

Base Test Result

Base class providing common functionality for both ProtoTestResult and GreenTestResult, handling stdout/stderr capture and display.

class BaseTestResult:
    """
    Base class inherited by ProtoTestResult and GreenTestResult.
    
    Provides common functionality for capturing and displaying test output,
    managing stdout/stderr streams, and handling duration collection (Python 3.12+).
    """
    
    def __init__(self, stream: GreenStream | None, *, colors: Colors | None = None):
        """
        Initialize base test result.
        
        Args:
            stream: Output stream for result display
            colors: Color formatting instance
        """
    
    def recordStdout(self, test: RunnableTestT, output):
        """
        Record stdout output captured from a test.
        
        Args:
            test: The test case that produced output
            output: Captured stdout content
        """
    
    def recordStderr(self, test: RunnableTestT, errput):
        """
        Record stderr output captured from a test.
        
        Args:
            test: The test case that produced errput
            errput: Captured stderr content
        """
    
    def displayStdout(self, test: TestCaseT):
        """
        Display and remove captured stdout for a specific test.
        
        Args:
            test: Test case to display output for
        """
    
    def displayStderr(self, test: TestCaseT):
        """
        Display and remove captured stderr for a specific test.
        
        Args:
            test: Test case to display errput for
        """
    
    def addDuration(self, test: TestCaseT, elapsed: float):
        """
        Record test execution duration (New in Python 3.12).
        
        Args:
            test: The test case that finished
            elapsed: Execution time in seconds including cleanup
        """
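
The capture-and-display pattern described above can be sketched in plain Python. The `CaptureDemo` class below is hypothetical, not green's implementation; it only illustrates the record-then-pop semantics of `recordStdout`/`displayStdout`:

```python
# Illustrative sketch of BaseTestResult's capture-then-display pattern.
# CaptureDemo is a stand-in, NOT green's actual class.
import io
from contextlib import redirect_stdout

class CaptureDemo:
    def __init__(self):
        self.stdout_output = {}  # test -> captured text

    def recordStdout(self, test, output):
        # Store captured stdout keyed by the test that produced it
        if output:
            self.stdout_output[test] = output

    def displayStdout(self, test):
        # Display and remove the captured stdout for a specific test
        text = self.stdout_output.pop(test, None)
        if text:
            print(f"Captured stdout for {test}:\n{text}", end="")

# Simulate a test body writing to stdout while capture is active
buf = io.StringIO()
with redirect_stdout(buf):
    print("hello from the test")

demo = CaptureDemo()
demo.recordStdout("MyTest.test_example", buf.getvalue())
demo.displayStdout("MyTest.test_example")
```

Note that display consumes the stored output, so each test's capture is shown at most once.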

Green Test Result

Main test result aggregator that collects and consolidates results from parallel test execution processes.

class GreenTestResult:
    """
    Main test result aggregator for Green's parallel execution.
    
    Collects results from multiple worker processes and provides
    comprehensive test outcome reporting with timing and metadata.
    """
    
    def __init__(self, args, stream):
        """
        Initialize test result collector.
        
        Args:
            args: Configuration namespace with execution parameters
            stream: Output stream for real-time result reporting
        """
    
    def addProtoTestResult(self, proto_test_result):
        """
        Add results from a worker process.
        
        Args:
            proto_test_result (ProtoTestResult): Results from subprocess execution
        
        This method aggregates results from parallel worker processes,
        combining test outcomes, timing data, and error information.
        
        Example:
            # Used internally by Green's parallel execution system
            main_result = GreenTestResult(args, stream)
            # Worker processes send their results back
            main_result.addProtoTestResult(worker_result)
        """
    
    def wasSuccessful(self):
        """
        Check if all tests passed without failures or errors.
        
        Returns:
            bool: True if no failures, errors, or unexpected successes occurred
        
        Example:
            if result.wasSuccessful():
                print("All tests passed!")
                exit(0)
            else:
                print("Some tests failed")
                exit(1)
        """
    
    def startTestRun(self):
        """
        Called before any tests are executed.
        
        Initializes result collection and prepares output formatting.
        """
    
    def stopTestRun(self):
        """
        Called after all tests have been executed.
        
        Finalizes result collection and produces summary output.
        """
    
    def startTest(self, test):
        """
        Called when an individual test is started.
        
        Args:
            test: Test case being started
        """
    
    def stopTest(self, test):
        """
        Called when an individual test is completed.
        
        Args:
            test: Test case that was completed
        """
    
    def addSuccess(self, test, test_time=None):
        """
        Record a successful test.
        
        Args:
            test: Test case that passed
            test_time (float, optional): Execution time in seconds
        """
    
    def addError(self, test, err, test_time=None):
        """
        Record a test error (exception during test execution).
        
        Args:
            test: Test case that had an error
            err: Exception information (exc_type, exc_value, exc_traceback)
            test_time (float, optional): Execution time in seconds
        """
    
    def addFailure(self, test, err, test_time=None):
        """
        Record a test failure (assertion failure).
        
        Args:
            test: Test case that failed
            err: Failure information (exc_type, exc_value, exc_traceback)
            test_time (float, optional): Execution time in seconds
        """
    
    def addSkip(self, test, reason, test_time=None):
        """
        Record a skipped test.
        
        Args:
            test: Test case that was skipped
            reason (str): Reason for skipping
            test_time (float, optional): Execution time in seconds
        """
    
    def addExpectedFailure(self, test, err, test_time=None):
        """
        Record an expected failure (test marked with @expectedFailure).
        
        Args:
            test: Test case with expected failure
            err: Failure information
            test_time (float, optional): Execution time in seconds
        """
    
    def addUnexpectedSuccess(self, test, test_time=None):
        """
        Record an unexpected success (expected failure that passed).
        
        Args:
            test: Test case that unexpectedly passed
            test_time (float, optional): Execution time in seconds
        """

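
GreenTestResult follows the standard `unittest.TestResult` lifecycle; the call order can be demonstrated with the stdlib class (green's subclass additionally accepts the optional `test_time` argument on the `add*` methods documented above):

```python
# Demonstrate the startTestRun -> startTest -> addSuccess -> stopTest
# -> stopTestRun lifecycle using the stdlib unittest.TestResult, which
# GreenTestResult extends.
import unittest

class SampleTest(unittest.TestCase):
    def test_pass(self):
        self.assertTrue(True)

result = unittest.TestResult()
test = SampleTest("test_pass")

result.startTestRun()
test.run(result)        # internally calls startTest / addSuccess / stopTest
result.stopTestRun()

print(result.testsRun)         # number of tests executed
print(result.wasSuccessful())  # True when no failures or errors
```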
Proto Test Result

Result container for individual subprocess test execution, designed for serialization and inter-process communication.

class ProtoTestResult:
    """
    Test result container for individual subprocess runs.
    
    Serializable result object that can be passed between processes
    to aggregate parallel test execution results.
    """
    
    def __init__(self, start_callback=None, finalize_callback=None):
        """
        Initialize subprocess result container.
        
        Args:
            start_callback (callable, optional): Function called when starting
            finalize_callback (callable, optional): Function called when finalizing
        """
    
    def reinitialize(self):
        """
        Reset result state for reuse.
        
        Clears all collected results while preserving configuration.
        Used when reusing result objects across multiple test runs.
        """
    
    def finalize(self):
        """
        Finalize result collection.
        
        Completes result processing and prepares for serialization
        back to the main process.
        """
    
    # Same test result methods as GreenTestResult:
    # addSuccess, addError, addFailure, addSkip, 
    # addExpectedFailure, addUnexpectedSuccess
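
The worker-side flow can be sketched with a minimal stand-in class. `MiniProtoResult` below is hypothetical and much simpler than green's real class; it only shows how a `finalize_callback` ships a completed result back to the parent, with a plain list standing in for the inter-process channel:

```python
# Minimal sketch of the worker-side flow ProtoTestResult supports.
# MiniProtoResult is a stand-in, NOT green's implementation.
results_channel = []  # stands in for the multiprocessing pipe/queue

class MiniProtoResult:
    def __init__(self, finalize_callback=None):
        self.finalize_callback = finalize_callback
        self.reinitialize()

    def reinitialize(self):
        # Clear collected results while preserving configuration
        self.passing = []

    def addSuccess(self, test):
        self.passing.append(test)

    def finalize(self):
        # Hand the completed result back to the main process
        if self.finalize_callback:
            self.finalize_callback(self)

worker = MiniProtoResult(finalize_callback=results_channel.append)
worker.addSuccess("MyTest.test_example")
worker.finalize()
print(len(results_channel))  # the parent now holds the worker's result
```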

Proto Test

Serializable representation of a test case for multiprocess execution.

class ProtoTest:
    """
    Serializable test representation for multiprocessing.
    
    Lightweight test representation that can be passed between processes
    while preserving test identity and metadata.
    """
    
    def __init__(self, test=None):
        """
        Create ProtoTest from a test case or doctest.
        
        Args:
            test: unittest.TestCase, doctest.DocTestCase, or similar test object
        
        Example:
            import unittest
            
            class MyTest(unittest.TestCase):
                def test_example(self): pass
            
            test_instance = MyTest('test_example')
            proto = ProtoTest(test_instance)
            print(proto.dotted_name)  # e.g. 'my_module.MyTest.test_example'
        """
    
    @property
    def dotted_name(self):
        """
        Full dotted name of the test.
        
        Returns:
            str: Fully qualified test name like 'module.TestClass.test_method'
        """
    
    @property  
    def module(self):
        """
        Module name containing the test.
        
        Returns:
            str: Module name like 'tests.test_auth'
        """
    
    @property
    def class_name(self):
        """
        Test class name.
        
        Returns:
            str: Class name like 'AuthenticationTest'
        """
    
    @property
    def method_name(self):
        """
        Test method name.
        
        Returns:
            str: Method name like 'test_login_success'
        """
    
    def getDescription(self, verbose):
        """
        Get formatted test description for output.
        
        Args:
            verbose (int): Verbosity level (0-4)
        
        Returns:
            str: Formatted description, may include docstring at higher verbosity
        
        Example:
            proto = ProtoTest(test)
            desc = proto.getDescription(verbose=2)
            # Returns method docstring if available, otherwise method name
        """

Proto Error

Serializable error representation for multiprocess communication.

class ProtoError:
    """
    Serializable error representation for multiprocessing.
    
    Captures exception information in a form that can be passed
    between processes while preserving traceback details.
    """
    
    def __init__(self, err):
        """
        Create ProtoError from exception information.
        
        Args:
            err: Exception info tuple (exc_type, exc_value, exc_traceback)
                or similar error information
        
        Example:
            import sys
            
            try:
                # Some test code that fails
                assert False, "Test failure"
            except AssertionError:
                proto_err = ProtoError(sys.exc_info())
                print(str(proto_err))  # Formatted traceback
        """
    
    def __str__(self):
        """
        Get formatted traceback string.
        
        Returns:
            str: Human-readable traceback with file locations and error details
        """

Helper Functions

def proto_test(test):
    """
    Convert a test case to ProtoTest.
    
    Args:
        test: unittest.TestCase or similar test object
    
    Returns:
        ProtoTest: Serializable test representation
    """

def proto_error(err):
    """
    Convert error information to ProtoError.
    
    Args:
        err: Exception info tuple or error object
    
    Returns:
        ProtoError: Serializable error representation
    """

Usage Examples

Basic Result Usage

from green.result import GreenTestResult, ProtoTestResult
from green.output import GreenStream
from green.config import get_default_args
import sys

# Create main result collector
args = get_default_args()
stream = GreenStream(sys.stdout)
main_result = GreenTestResult(args, stream)

# Simulate adding results from worker processes
worker_result = ProtoTestResult()
# Worker would populate this with test outcomes...
main_result.addProtoTestResult(worker_result)

# Check final results
print(f"Tests run: {main_result.testsRun}")
print(f"Failures: {len(main_result.failures)}")
print(f"Errors: {len(main_result.errors)}")
print(f"Success: {main_result.wasSuccessful()}")

Working with Proto Objects

from green.result import ProtoTest, ProtoError, proto_test, proto_error
import unittest
import sys

# Create a test case
class SampleTest(unittest.TestCase):
    def test_example(self):
        """This is an example test."""
        self.assertEqual(1, 1)

# Convert to ProtoTest for multiprocessing
test_instance = SampleTest('test_example')
proto = proto_test(test_instance)

print(f"Test name: {proto.dotted_name}")
print(f"Module: {proto.module}")
print(f"Class: {proto.class_name}")
print(f"Method: {proto.method_name}")
print(f"Description: {proto.getDescription(verbose=2)}")

# Example error handling
try:
    raise ValueError("Example error")
except ValueError:
    error_info = sys.exc_info()
    proto_err = proto_error(error_info)
    print(f"Error: {proto_err}")

Custom Result Processing

from green.result import GreenTestResult

class CustomResultProcessor(GreenTestResult):
    """Custom result processor with additional reporting."""
    
    def __init__(self, args, stream):
        super().__init__(args, stream)
        self.custom_metrics = {}
    
    def addSuccess(self, test, test_time=None):
        super().addSuccess(test, test_time)
        # Custom success processing
        if test_time is not None:
            self.custom_metrics[test] = test_time
    
    def stopTestRun(self):
        super().stopTestRun()
        # Custom reporting
        if self.custom_metrics:
            slowest = max(self.custom_metrics.items(), key=lambda x: x[1])
            print(f"Slowest test: {slowest[0]} ({slowest[1]:.3f}s)")

# Use custom result processor (args and stream as in Basic Result Usage)
result = CustomResultProcessor(args, stream)

Result Analysis and Reporting

from green.result import GreenTestResult

def analyze_results(result):
    """Analyze test results and generate detailed report."""
    
    total_tests = result.testsRun
    successes = total_tests - len(result.failures) - len(result.errors) - len(result.skipped)
    
    print("Test Execution Summary:")
    print(f"  Total tests: {total_tests}")
    print(f"  Successes: {successes}")
    print(f"  Failures: {len(result.failures)}")
    print(f"  Errors: {len(result.errors)}")
    print(f"  Skipped: {len(result.skipped)}")
    
    if hasattr(result, 'expectedFailures'):
        print(f"  Expected failures: {len(result.expectedFailures)}")
    if hasattr(result, 'unexpectedSuccesses'):
        print(f"  Unexpected successes: {len(result.unexpectedSuccesses)}")
    
    # Detailed failure analysis
    if result.failures:
        print("\nFailure Details:")
        for test, failure in result.failures:
            print(f"  {test}: {failure}")
    
    # Error analysis
    if result.errors:
        print("\nError Details:")
        for test, error in result.errors:
            print(f"  {test}: {error}")
    
    return result.wasSuccessful()

# Usage
success = analyze_results(result)
exit(0 if success else 1)

Integration with External Reporting

from green.result import GreenTestResult
import json

class JSONReportingResult(GreenTestResult):
    """Result processor that generates JSON reports."""
    
    def __init__(self, args, stream, json_file=None):
        super().__init__(args, stream)
        self.json_file = json_file
        self.test_data = []
    
    def addSuccess(self, test, test_time=None):
        super().addSuccess(test, test_time)
        self.test_data.append({
            'test': str(test),
            'outcome': 'success',
            'time': test_time
        })
    
    def addFailure(self, test, err, test_time=None):
        super().addFailure(test, err, test_time)
        self.test_data.append({
            'test': str(test),
            'outcome': 'failure', 
            'time': test_time,
            'error': str(err[1])
        })
    
    def stopTestRun(self):
        super().stopTestRun()
        if self.json_file:
            with open(self.json_file, 'w') as f:
                json.dump({
                    'summary': {
                        'total': self.testsRun,
                        'failures': len(self.failures),
                        'errors': len(self.errors),
                        'success': self.wasSuccessful()
                    },
                    'tests': self.test_data
                }, f, indent=2)

# Usage (args and stream as in Basic Result Usage)
result = JSONReportingResult(args, stream, 'test_results.json')

Result Attributes

Standard unittest.TestResult Attributes

  • testsRun (int): Number of tests executed
  • failures (list): List of (test, traceback) tuples for failed tests
  • errors (list): List of (test, traceback) tuples for error tests
  • skipped (list): List of (test, reason) tuples for skipped tests
  • expectedFailures (list): Tests that failed as expected
  • unexpectedSuccesses (list): Tests that passed unexpectedly

Green-Specific Attributes

  • Test timing information
  • Process execution metadata
  • Coverage integration data
  • Enhanced error formatting
  • Hierarchical test organization

Install with Tessl CLI

npx tessl i tessl/pypi-green
