# Signal Handling

The Signaling module provides interrupt handling for interactive Spark REPL sessions: when the user presses Ctrl+C, running jobs are cancelled gracefully rather than the session being terminated outright.

## Capabilities

### Interrupt Handling

Registers a SIGINT (Ctrl+C) handler that cancels active Spark jobs before allowing the process to exit.

```scala { .api }
object Signaling extends Logging {
  def cancelOnInterrupt(): Unit
}
```

## Functionality

### Automatic Setup

The signal handler is automatically registered when the REPL starts:

```scala
// Automatically called in Main object initialization
object Main extends Logging {
  initializeLogIfNecessary(true)
  Signaling.cancelOnInterrupt() // Signal handler setup
  // ... rest of initialization
}
```

### Interrupt Behavior

The signal handler chooses what to do based on the current state of the Spark application:

```scala
// Behavior when Ctrl+C is pressed:
SignalUtils.register("INT") {
  SparkContext.getActive.map { ctx =>
    if (!ctx.statusTracker.getActiveJobIds().isEmpty) {
      // Active jobs are running - cancel them first
      logWarning("Cancelling all active jobs, this can take a while. " +
        "Press Ctrl+C again to exit now.")
      ctx.cancelAllJobs()
      true // Signal handled - don't exit yet
    } else {
      // No active jobs - allow normal exit
      false // Let default handler exit the process
    }
  }.getOrElse(false) // No active SparkContext - allow normal exit
}
```

## Usage Patterns

### Interactive Job Cancellation

When working interactively, users can safely interrupt long-running operations:

```scala
// In REPL session:
scala> val rdd = sc.parallelize(1 to 10000000)
scala> val result = rdd.map(expensiveOperation).collect()
// User presses Ctrl+C during execution
// Output: "Cancelling all active jobs, this can take a while. Press Ctrl+C again to exit now."
// Jobs are cancelled gracefully
```

### Two-Stage Exit Process

The signal handler implements a two-stage exit process:

1. **First Ctrl+C**: Cancel active Spark jobs if any are running

2. **Second Ctrl+C**: Terminate the REPL process immediately

```scala
// First Ctrl+C while jobs are running:
// - Logs warning message
// - Calls ctx.cancelAllJobs()
// - Returns true (signal handled, don't exit)

// Second Ctrl+C or first Ctrl+C with no active jobs:
// - Returns false (allow default exit behavior)
// - REPL process terminates
```

## Implementation Details

### Signal Registration

Uses Spark's `SignalUtils` for cross-platform signal handling:

```scala
import org.apache.spark.util.SignalUtils

// Register handler for SIGINT (interrupt signal)
SignalUtils.register("INT") { /* handler logic */ }
```

### Active Job Detection

Checks for active jobs using the SparkContext's status tracker:

```scala
SparkContext.getActive.map { ctx =>
  // Check if any jobs are currently running
  val activeJobs = ctx.statusTracker.getActiveJobIds()
  !activeJobs.isEmpty
}
```

### Logging Integration

Integrates with Spark's logging system for user feedback:

```scala
object Signaling extends Logging {
  // Uses logWarning for user-visible messages
  logWarning("Cancelling all active jobs, this can take a while. " +
    "Press Ctrl+C again to exit now.")
}
```

## Error Handling

### SparkContext Availability

Handles cases where no SparkContext is available:

```scala
SparkContext.getActive.map { ctx =>
  // SparkContext exists - check for active jobs
  // ... job cancellation logic
}.getOrElse(false) // No SparkContext - allow normal exit
```

### Graceful Degradation

If job cancellation fails or SparkContext is in an invalid state, the handler gracefully falls back to allowing normal process termination.
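
The sketch below illustrates one way such a defensive handler could look. It is hypothetical, not the module's actual implementation: it reuses the `SignalUtils` registration shown above but wraps the cancellation logic in `Try` so that a failing context never prevents exit.

```scala
import scala.util.Try

// Hypothetical defensive variant: if inspecting or cancelling jobs throws
// (e.g. the SparkContext is shutting down), fall back to normal exit.
SignalUtils.register("INT") {
  SparkContext.getActive.exists { ctx =>
    Try {
      if (ctx.statusTracker.getActiveJobIds().nonEmpty) {
        ctx.cancelAllJobs()
        true // jobs are being cancelled - keep the REPL alive
      } else {
        false // nothing to cancel - allow normal exit
      }
    }.getOrElse(false) // cancellation failed - fall back to exiting
  }
}
```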

## Platform Considerations

### Cross-Platform Support

The signal handling works across different operating systems through Spark's `SignalUtils` abstraction, which handles platform-specific signal differences.
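
For example, the same abstraction can register handlers for other POSIX signals. The sketch below is illustrative only: the `"TERM"` handler is not part of this module, and since `SignalUtils` is `private[spark]`, it assumes code compiled inside the `org.apache.spark` package.

```scala
package org.apache.spark.example // assumed: SignalUtils is private[spark]

import org.apache.spark.internal.Logging
import org.apache.spark.util.SignalUtils

object TermLogger extends Logging {
  def install(): Unit = SignalUtils.register("TERM") {
    // On platforms where the signal is unavailable, SignalUtils logs
    // the failure and skips registration instead of throwing.
    logWarning("Received SIGTERM")
    false // not handled - fall through to the default handler
  }
}
```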

### Thread Safety

The signal handler is designed to be thread-safe and can be called from signal handling threads without interfering with the main REPL execution thread.
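
The hypothetical watchdog below (not part of the module) illustrates the same property: `cancelAllJobs()` may be invoked from any thread while the main REPL thread is blocked in an action.

```scala
import org.apache.spark.SparkContext

// Illustrative watchdog: cancel all jobs after a timeout. It relies on the
// same guarantee the signal handler does - cancelAllJobs() is safe to call
// concurrently with the main REPL thread.
val watchdog = new Thread(() => {
  Thread.sleep(30000) // assumed 30s timeout, illustrative only
  SparkContext.getActive.foreach(_.cancelAllJobs())
})
watchdog.setDaemon(true)
watchdog.start()
```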

## Testing Considerations

### Signal Handler Testing

The signal handler behavior can be tested programmatically:

```scala
// In test code:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import org.apache.spark.SparkContext
import org.apache.spark.repl.Signaling

// Setup test SparkContext with active jobs
val sc = new SparkContext(...)
val rdd = sc.parallelize(1 to 1000000)
val future = Future { rdd.map(expensiveOperation).collect() }

// Signal handler should cancel jobs when interrupted
// Test framework would need to simulate SIGINT
```

### Mock Testing

For unit testing, the signal handling logic can be isolated and tested with mock SparkContext instances to verify correct behavior under different conditions.
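
One way to do that is sketched below, under the assumption that the handler body is factored out into a pure function; the names (`SparkContextLike`, `handleInterrupt`) are illustrative, not part of the real API.

```scala
// Minimal trait standing in for the slice of SparkContext the handler uses.
trait SparkContextLike {
  def activeJobIds: Seq[Int]
  def cancelAllJobs(): Unit
}

// Hypothetical extraction of the handler's decision logic: returns true
// if the signal was handled (jobs cancelled), false to allow exit.
def handleInterrupt(ctxOpt: Option[SparkContextLike]): Boolean =
  ctxOpt.exists { ctx =>
    if (ctx.activeJobIds.nonEmpty) {
      ctx.cancelAllJobs()
      true
    } else {
      false
    }
  }

// Mock that records whether cancellation was requested.
class RecordingCtx(jobs: Seq[Int]) extends SparkContextLike {
  var cancelled = false
  def activeJobIds: Seq[Int] = jobs
  def cancelAllJobs(): Unit = cancelled = true
}

assert(!handleInterrupt(None))                        // no context: exit
assert(!handleInterrupt(Some(new RecordingCtx(Nil)))) // idle: exit
val busy = new RecordingCtx(Seq(1, 2))
assert(handleInterrupt(Some(busy)) && busy.cancelled) // busy: cancel, stay alive
```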