# Test Results and Reporting

Green's comprehensive test result system provides detailed result collection, aggregation from multiple processes, and support for various output formats with rich metadata and timing information.

## Capabilities

### Base Test Result

Base class providing common functionality for both ProtoTestResult and GreenTestResult, handling stdout/stderr capture and display.

```python { .api }
class BaseTestResult:
    """
    Base class inherited by ProtoTestResult and GreenTestResult.

    Provides common functionality for capturing and displaying test output,
    managing stdout/stderr streams, and handling duration collection (Python 3.12+).
    """

    def __init__(self, stream: GreenStream | None, *, colors: Colors | None = None):
        """
        Initialize the base test result.

        Args:
            stream: Output stream for result display
            colors: Color formatting instance
        """

    def recordStdout(self, test: RunnableTestT, output):
        """
        Record stdout output captured from a test.

        Args:
            test: The test case that produced output
            output: Captured stdout content
        """

    def recordStderr(self, test: RunnableTestT, errput):
        """
        Record stderr output captured from a test.

        Args:
            test: The test case that produced errput
            errput: Captured stderr content
        """

    def displayStdout(self, test: TestCaseT):
        """
        Display and remove captured stdout for a specific test.

        Args:
            test: Test case to display output for
        """

    def displayStderr(self, test: TestCaseT):
        """
        Display and remove captured stderr for a specific test.

        Args:
            test: Test case to display errput for
        """

    def addDuration(self, test: TestCaseT, elapsed: float):
        """
        Record test execution duration (new in Python 3.12).

        Args:
            test: The test case that finished
            elapsed: Execution time in seconds, including cleanup
        """
```
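The capture-then-display pattern these methods describe can be sketched with the standard library alone. The `OutputCapture` class below is a hypothetical, simplified illustration of the idea (store each test's output in a dict, print and discard it on demand), not Green's actual implementation:

```python
import io
import sys

class OutputCapture:
    """Simplified sketch of per-test stdout capture and deferred display."""

    def __init__(self):
        self.stdouts = {}  # test name -> captured output

    def record_stdout(self, test_name, output):
        # Hold on to captured output until the test's result is reported
        if output:
            self.stdouts[test_name] = output

    def display_stdout(self, test_name):
        # Emit and discard the stored output for one test
        output = self.stdouts.pop(test_name, None)
        if output is not None:
            sys.stdout.write(f"Captured stdout for {test_name}:\n{output}")

# Capture a test's prints by temporarily swapping sys.stdout for a buffer
capture = OutputCapture()
buffer = io.StringIO()
original, sys.stdout = sys.stdout, buffer
try:
    print("hello from the test")  # lands in the buffer, not the terminal
finally:
    sys.stdout = original

capture.record_stdout("MyTest.test_example", buffer.getvalue())
capture.display_stdout("MyTest.test_example")  # prints the buffered output
```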

### Green Test Result

Main test result aggregator that collects and consolidates results from parallel test execution processes.

```python { .api }
class GreenTestResult:
    """
    Main test result aggregator for Green's parallel execution.

    Collects results from multiple worker processes and provides
    comprehensive test outcome reporting with timing and metadata.
    """

    def __init__(self, args, stream):
        """
        Initialize the test result collector.

        Args:
            args: Configuration namespace with execution parameters
            stream: Output stream for real-time result reporting
        """

    def addProtoTestResult(self, proto_test_result):
        """
        Add results from a worker process.

        Args:
            proto_test_result (ProtoTestResult): Results from subprocess execution

        This method aggregates results from parallel worker processes,
        combining test outcomes, timing data, and error information.

        Example:
            # Used internally by Green's parallel execution system
            main_result = GreenTestResult(args, stream)
            # Worker processes send their results back
            main_result.addProtoTestResult(worker_result)
        """

    def wasSuccessful(self):
        """
        Check whether all tests passed without failures or errors.

        Returns:
            bool: True if no failures, errors, or unexpected successes occurred

        Example:
            if result.wasSuccessful():
                print("All tests passed!")
                exit(0)
            else:
                print("Some tests failed")
                exit(1)
        """

    def startTestRun(self):
        """
        Called before any tests are executed.

        Initializes result collection and prepares output formatting.
        """

    def stopTestRun(self):
        """
        Called after all tests have been executed.

        Finalizes result collection and produces summary output.
        """

    def startTest(self, test):
        """
        Called when an individual test is started.

        Args:
            test: Test case being started
        """

    def stopTest(self, test):
        """
        Called when an individual test is completed.

        Args:
            test: Test case that was completed
        """

    def addSuccess(self, test, test_time=None):
        """
        Record a successful test.

        Args:
            test: Test case that passed
            test_time (float, optional): Execution time in seconds
        """

    def addError(self, test, err, test_time=None):
        """
        Record a test error (exception raised during test execution).

        Args:
            test: Test case that had an error
            err: Exception info tuple (exc_type, exc_value, exc_traceback)
            test_time (float, optional): Execution time in seconds
        """

    def addFailure(self, test, err, test_time=None):
        """
        Record a test failure (assertion failure).

        Args:
            test: Test case that failed
            err: Failure info tuple (exc_type, exc_value, exc_traceback)
            test_time (float, optional): Execution time in seconds
        """

    def addSkip(self, test, reason, test_time=None):
        """
        Record a skipped test.

        Args:
            test: Test case that was skipped
            reason (str): Reason for skipping
            test_time (float, optional): Execution time in seconds
        """

    def addExpectedFailure(self, test, err, test_time=None):
        """
        Record an expected failure (test marked with @expectedFailure).

        Args:
            test: Test case with the expected failure
            err: Failure information
            test_time (float, optional): Execution time in seconds
        """

    def addUnexpectedSuccess(self, test, test_time=None):
        """
        Record an unexpected success (an expected failure that passed).

        Args:
            test: Test case that unexpectedly passed
            test_time (float, optional): Execution time in seconds
        """
```
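GreenTestResult follows the lifecycle of the standard `unittest.TestResult` it builds on. This runnable sketch drives the stdlib base class through the same calls, to show where the hooks above fire:

```python
import unittest

class PassingTest(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

result = unittest.TestResult()
test = PassingTest("test_ok")

result.startTestRun()
test.run(result)   # invokes startTest, addSuccess, and stopTest internally
result.stopTestRun()

print(result.testsRun)         # 1
print(result.wasSuccessful())  # True
```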

### Proto Test Result

Result container for individual subprocess test execution, designed for serialization and inter-process communication.

```python { .api }
class ProtoTestResult:
    """
    Test result container for individual subprocess runs.

    Serializable result object that can be passed between processes
    to aggregate parallel test execution results.
    """

    def __init__(self, start_callback=None, finalize_callback=None):
        """
        Initialize the subprocess result container.

        Args:
            start_callback (callable, optional): Function called when starting
            finalize_callback (callable, optional): Function called when finalizing
        """

    def reinitialize(self):
        """
        Reset result state for reuse.

        Clears all collected results while preserving configuration.
        Used when reusing result objects across multiple test runs.
        """

    def finalize(self):
        """
        Finalize result collection.

        Completes result processing and prepares for serialization
        back to the main process.
        """

    # Same test result methods as GreenTestResult:
    # addSuccess, addError, addFailure, addSkip,
    # addExpectedFailure, addUnexpectedSuccess
```
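The key property of such a container is that it is plain data, so it can round-trip through pickle on its way from a worker process back to the aggregator. `WorkerResult` below is a hypothetical stand-in for ProtoTestResult that demonstrates this property; it is not Green's class:

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class WorkerResult:
    """Plain, picklable container of test outcomes (illustrative only)."""
    passing: list = field(default_factory=list)
    failures: list = field(default_factory=list)  # (test_name, traceback_text)

    def reinitialize(self):
        # Clear collected results so the container can be reused
        self.passing.clear()
        self.failures.clear()

# A worker collects outcomes...
result = WorkerResult()
result.passing.append("tests.test_auth.LoginTest.test_ok")
result.failures.append(("tests.test_auth.LoginTest.test_bad", "AssertionError: ..."))

# ...and the container survives serialization, as it would when sent
# over a multiprocessing queue back to the main process.
restored = pickle.loads(pickle.dumps(result))
print(restored.passing)  # ['tests.test_auth.LoginTest.test_ok']
```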

### Proto Test

Serializable representation of a test case for multiprocess execution.

```python { .api }
class ProtoTest:
    """
    Serializable test representation for multiprocessing.

    Lightweight test representation that can be passed between processes
    while preserving test identity and metadata.
    """

    def __init__(self, test=None):
        """
        Create a ProtoTest from a test case or doctest.

        Args:
            test: unittest.TestCase, doctest.DocTestCase, or similar test object

        Example:
            import unittest

            class MyTest(unittest.TestCase):
                def test_example(self): pass

            test_instance = MyTest('test_example')
            proto = ProtoTest(test_instance)
            print(proto.dotted_name)  # 'MyTest.test_example'
        """

    @property
    def dotted_name(self):
        """
        Full dotted name of the test.

        Returns:
            str: Fully qualified test name like 'module.TestClass.test_method'
        """

    @property
    def module(self):
        """
        Name of the module containing the test.

        Returns:
            str: Module name like 'tests.test_auth'
        """

    @property
    def class_name(self):
        """
        Test class name.

        Returns:
            str: Class name like 'AuthenticationTest'
        """

    @property
    def method_name(self):
        """
        Test method name.

        Returns:
            str: Method name like 'test_login_success'
        """

    def getDescription(self, verbose):
        """
        Get a formatted test description for output.

        Args:
            verbose (int): Verbosity level (0-4)

        Returns:
            str: Formatted description; may include the docstring at higher verbosity

        Example:
            proto = ProtoTest(test)
            desc = proto.getDescription(verbose=2)
            # Returns the method docstring if available, otherwise the method name
        """
```
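The name parts that ProtoTest exposes can be recovered from a standard test's `id()`, which unittest formats as `module.Class.method`. A small illustrative sketch (the splitting logic here is ours, not Green's):

```python
import unittest

class AuthenticationTest(unittest.TestCase):
    def test_login_success(self):
        pass

t = AuthenticationTest("test_login_success")
# unittest formats id() as 'module.Class.method'; split it from the right
module, class_name, method_name = t.id().rsplit(".", 2)
print(class_name, method_name)  # AuthenticationTest test_login_success
```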

### Proto Error

Serializable error representation for multiprocess communication.

```python { .api }
class ProtoError:
    """
    Serializable error representation for multiprocessing.

    Captures exception information in a form that can be passed
    between processes while preserving traceback details.
    """

    def __init__(self, err):
        """
        Create a ProtoError from exception information.

        Args:
            err: Exception info tuple (exc_type, exc_value, exc_traceback)
                or similar error information

        Example:
            import sys

            try:
                # Some test code that fails
                assert False, "Test failure"
            except AssertionError:
                proto_err = ProtoError(sys.exc_info())
                print(str(proto_err))  # Formatted traceback
        """

    def __str__(self):
        """
        Get the formatted traceback string.

        Returns:
            str: Human-readable traceback with file locations and error details
        """
```

## Helper Functions

```python { .api }
def proto_test(test):
    """
    Convert a test case to a ProtoTest.

    Args:
        test: unittest.TestCase or similar test object

    Returns:
        ProtoTest: Serializable test representation
    """

def proto_error(err):
    """
    Convert error information to a ProtoError.

    Args:
        err: Exception info tuple or error object

    Returns:
        ProtoError: Serializable error representation
    """
```
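Conceptually, converting live exception info into a serializable form means flattening the traceback to text before it crosses a process boundary, since traceback objects themselves cannot be pickled. A minimal stdlib sketch of that step (`error_to_text` is a hypothetical helper, not part of Green's API):

```python
import sys
import traceback

def error_to_text(err):
    """Flatten an exc_info tuple into a plain, picklable string."""
    exc_type, exc_value, exc_tb = err
    return "".join(traceback.format_exception(exc_type, exc_value, exc_tb))

try:
    raise ValueError("Example error")
except ValueError:
    text = error_to_text(sys.exc_info())

print(text.splitlines()[-1])  # ValueError: Example error
```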

## Usage Examples

### Basic Result Usage

```python
from green.result import GreenTestResult, ProtoTestResult
from green.output import GreenStream
from green.config import get_default_args
import sys

# Create the main result collector
args = get_default_args()
stream = GreenStream(sys.stdout)
main_result = GreenTestResult(args, stream)

# Simulate adding results from worker processes
worker_result = ProtoTestResult()
# A worker would populate this with test outcomes...
main_result.addProtoTestResult(worker_result)

# Check the final results
print(f"Tests run: {main_result.testsRun}")
print(f"Failures: {len(main_result.failures)}")
print(f"Errors: {len(main_result.errors)}")
print(f"Success: {main_result.wasSuccessful()}")
```

### Working with Proto Objects

```python
from green.result import ProtoTest, ProtoError, proto_test, proto_error
import unittest
import sys

# Create a test case
class SampleTest(unittest.TestCase):
    def test_example(self):
        """This is an example test."""
        self.assertEqual(1, 1)

# Convert to a ProtoTest for multiprocessing
test_instance = SampleTest('test_example')
proto = proto_test(test_instance)

print(f"Test name: {proto.dotted_name}")
print(f"Module: {proto.module}")
print(f"Class: {proto.class_name}")
print(f"Method: {proto.method_name}")
print(f"Description: {proto.getDescription(verbose=2)}")

# Example error handling
try:
    raise ValueError("Example error")
except ValueError:
    error_info = sys.exc_info()
    proto_err = proto_error(error_info)
    print(f"Error: {proto_err}")
```

### Custom Result Processing

```python
from green.result import GreenTestResult

class CustomResultProcessor(GreenTestResult):
    """Custom result processor with additional reporting."""

    def __init__(self, args, stream):
        super().__init__(args, stream)
        self.custom_metrics = {}

    def addSuccess(self, test, test_time=None):
        super().addSuccess(test, test_time)
        # Custom success processing
        if test_time:
            self.custom_metrics[test] = test_time

    def stopTestRun(self):
        super().stopTestRun()
        # Custom reporting
        if self.custom_metrics:
            slowest = max(self.custom_metrics.items(), key=lambda x: x[1])
            print(f"Slowest test: {slowest[0]} ({slowest[1]:.3f}s)")

# Use the custom result processor
result = CustomResultProcessor(args, stream)
```

### Result Analysis and Reporting

```python
from green.result import GreenTestResult

def analyze_results(result):
    """Analyze test results and generate a detailed report."""
    total_tests = result.testsRun
    successes = total_tests - len(result.failures) - len(result.errors) - len(result.skipped)

    print("Test Execution Summary:")
    print(f"  Total tests: {total_tests}")
    print(f"  Successes: {successes}")
    print(f"  Failures: {len(result.failures)}")
    print(f"  Errors: {len(result.errors)}")
    print(f"  Skipped: {len(result.skipped)}")

    if hasattr(result, 'expectedFailures'):
        print(f"  Expected failures: {len(result.expectedFailures)}")
    if hasattr(result, 'unexpectedSuccesses'):
        print(f"  Unexpected successes: {len(result.unexpectedSuccesses)}")

    # Detailed failure analysis
    if result.failures:
        print("\nFailure Details:")
        for test, failure in result.failures:
            print(f"  {test}: {failure}")

    # Error analysis
    if result.errors:
        print("\nError Details:")
        for test, error in result.errors:
            print(f"  {test}: {error}")

    return result.wasSuccessful()

# Usage
success = analyze_results(result)
exit(0 if success else 1)
```

### Integration with External Reporting

```python
from green.result import GreenTestResult
import json

class JSONReportingResult(GreenTestResult):
    """Result processor that generates JSON reports."""

    def __init__(self, args, stream, json_file=None):
        super().__init__(args, stream)
        self.json_file = json_file
        self.test_data = []

    def addSuccess(self, test, test_time=None):
        super().addSuccess(test, test_time)
        self.test_data.append({
            'test': str(test),
            'outcome': 'success',
            'time': test_time
        })

    def addFailure(self, test, err, test_time=None):
        super().addFailure(test, err, test_time)
        self.test_data.append({
            'test': str(test),
            'outcome': 'failure',
            'time': test_time,
            'error': str(err[1])
        })

    def stopTestRun(self):
        super().stopTestRun()
        if self.json_file:
            with open(self.json_file, 'w') as f:
                json.dump({
                    'summary': {
                        'total': self.testsRun,
                        'failures': len(self.failures),
                        'errors': len(self.errors),
                        'success': self.wasSuccessful()
                    },
                    'tests': self.test_data
                }, f, indent=2)

# Usage
result = JSONReportingResult(args, stream, 'test_results.json')
```

## Result Attributes

### Standard unittest.TestResult Attributes

- `testsRun` (int): Number of tests executed
- `failures` (list): List of (test, traceback) tuples for failed tests
- `errors` (list): List of (test, traceback) tuples for tests that raised errors
- `skipped` (list): List of (test, reason) tuples for skipped tests
- `expectedFailures` (list): Tests that failed as expected
- `unexpectedSuccesses` (list): Tests that passed unexpectedly

### Green-Specific Attributes

- Test timing information
- Process execution metadata
- Coverage integration data
- Enhanced error formatting
- Hierarchical test organization