
# Interactive Shell

Interactive shell implementation with Spark-specific features, command processing, and initialization logic.

## Capabilities

### SparkILoop Class

The `SparkILoop` class extends Scala's standard REPL with Spark-specific functionality and automatic SparkContext/SparkSession initialization.

```scala { .api }
/**
 * Spark-specific interactive shell extending Scala's ILoop
 * @param in0 Optional input reader for non-interactive usage
 * @param out Output writer for REPL responses
 */
class SparkILoop(in0: Option[BufferedReader], out: JPrintWriter) extends ILoop

/**
 * Constructor with required input stream
 * @param in0 Input reader for REPL commands
 * @param out Output writer for REPL responses
 */
def this(in0: BufferedReader, out: JPrintWriter)

/**
 * Default constructor for interactive shell usage
 * Uses Console.out for output and chooses an appropriate input reader
 */
def this()
```

**Usage Examples:**

```scala
import org.apache.spark.repl.SparkILoop
import scala.tools.nsc.interpreter.JPrintWriter
import java.io.{BufferedReader, StringReader, PrintWriter}

// Interactive REPL (default)
val repl = new SparkILoop()

// REPL with custom input/output
val input = new BufferedReader(new StringReader("val x = 1\nx * 2\n"))
val output = new JPrintWriter(new PrintWriter(System.out), true)
val customRepl = new SparkILoop(input, output)

// REPL with optional input
val optionalInputRepl = new SparkILoop(Some(input), output)
```

### Spark Initialization

Spark-specific initialization logic that sets up SparkContext and SparkSession with appropriate imports and context variables.

```scala { .api }
/**
 * Initialize the Spark context and session in the REPL environment
 * Executes initialization commands to set up the 'spark' and 'sc' variables
 * Adds common imports for Spark functionality
 */
def initializeSpark(): Unit

/**
 * Initialization commands executed when the REPL starts
 * Includes SparkSession creation, SparkContext setup, and common imports
 */
val initializationCommands: Seq[String]
```

**Initialization Commands:**

The REPL automatically executes these commands on startup:

```scala
// SparkSession setup
@transient val spark = if (org.apache.spark.repl.Main.sparkSession != null) {
  org.apache.spark.repl.Main.sparkSession
} else {
  org.apache.spark.repl.Main.createSparkSession()
}

// SparkContext setup with UI information
@transient val sc = {
  val _sc = spark.sparkContext
  // Display Web UI URL information
  _sc
}

// Common imports
import org.apache.spark.SparkContext._
import spark.implicits._
import spark.sql
import org.apache.spark.sql.functions._
```

**Usage Examples:**

```scala
import org.apache.spark.repl.SparkILoop

val repl = new SparkILoop()
// After process() is called, these variables are automatically available:
// - spark: SparkSession instance
// - sc: SparkContext instance
// - All common Spark imports
```

### REPL Interface Override

Customized REPL interface methods providing Spark-specific welcome messages and command handling.

```scala { .api }
/**
 * Display the Spark-branded welcome message with version information
 * Overrides the standard Scala REPL welcome message
 */
def printWelcome(): Unit

/**
 * Handle the reset command with Spark-specific behavior
 * Preserves SparkSession and SparkContext state after a reset
 * @param line The reset command line
 */
def resetCommand(line: String): Unit

/**
 * Handle the replay command with Spark initialization
 * Re-initializes the Spark context after replaying command history
 */
def replay(): Unit

/**
 * Available REPL commands inherited from the standard ILoop
 * Includes standard Scala REPL commands plus any Spark-specific additions
 */
val commands: List[LoopCommand]
```

**Usage Examples:**

```text
// Welcome message display
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.8
      /_/

Using Scala 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_271)
Type in expressions to have them evaluated.
Type :help for more information.

// Reset behavior
:reset
Resetting interpreter state.
Note that after :reset, state of SparkSession and SparkContext is unchanged.
```

### Interpreter Creation

Custom interpreter creation logic with Scala version-specific workarounds.

```scala { .api }
/**
 * Create the interpreter instance with Scala version-specific handling
 * Uses SparkILoopInterpreter for Scala 2.11 to work around compiler bugs
 * Falls back to the standard interpreter for other Scala versions
 */
def createInterpreter(): Unit
```

**Implementation Details:**

- **Scala 2.11**: Uses the `SparkILoopInterpreter` class to fix import handling bugs
- **Other versions**: Uses the standard Scala interpreter
- Automatically detects the Scala version and chooses the appropriate interpreter
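The version test behind this selection can be sketched in plain Scala. This is a minimal illustration only: `needsWorkaroundInterpreter` is a hypothetical helper introduced here, not part of the Spark API, and the actual source may structure the check differently.

```scala
import scala.util.Properties

// Hypothetical helper mirroring the version check: Scala 2.11 gets the
// SparkILoopInterpreter workaround, every other version falls back to
// the standard Scala interpreter.
def needsWorkaroundInterpreter(version: String = Properties.versionNumberString): Boolean =
  version.startsWith("2.11")

needsWorkaroundInterpreter("2.11.12") // true: workaround interpreter
needsWorkaroundInterpreter("2.12.10") // false: standard interpreter
```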

### REPL Processing

Main processing loop with Spark-specific initialization ordering and error handling.

```scala { .api }
/**
 * Process the REPL with the given settings
 * Handles initialization, welcome message display, and command processing
 * @param settings Scala compiler settings for the REPL session
 * @return true if processing completed successfully, false on error
 */
def process(settings: Settings): Boolean
```

**Processing Flow:**

1. Create the input reader (interactive or from the provided stream)
2. Set up the splash screen for parallel initialization
3. Create and initialize the interpreter
4. Initialize the Spark context and session
5. Display the welcome message
6. Start the command processing loop
7. Handle cleanup on exit

**Usage Examples:**

```scala
import org.apache.spark.repl.SparkILoop
import scala.tools.nsc.Settings

val settings = new Settings()
settings.usejavacp.value = true

val repl = new SparkILoop()
val success = repl.process(settings)

if (success) {
  println("REPL completed successfully")
} else {
  println("REPL encountered errors")
}
```

## Companion Object Utilities

Static utility methods for running code in REPL instances programmatically.

```scala { .api }
object SparkILoop {
  /**
   * Run code in a REPL instance and return the output as a string
   * @param code Scala code to execute
   * @param sets Optional compiler settings (defaults to new Settings)
   * @return String output from REPL execution
   */
  def run(code: String, sets: Settings = new Settings): String

  /**
   * Run multiple lines of code in a REPL instance
   * @param lines List of code lines to execute
   * @return String output from REPL execution
   */
  def run(lines: List[String]): String
}
```

**Usage Examples:**

```scala
import org.apache.spark.repl.SparkILoop

// Run a single code string
val result1 = SparkILoop.run("""
  val data = spark.range(100)
  data.count()
""")

// Run multiple lines
val lines = List(
  "val numbers = spark.range(1, 11)",
  "val squares = numbers.map(x => x * x)",
  "squares.collect().mkString(\", \")"
)
val result2 = SparkILoop.run(lines)

// Run with custom settings
import scala.tools.nsc.Settings
val customSettings = new Settings()
customSettings.classpath.value = "/path/to/additional/jars"
val result3 = SparkILoop.run("import custom.library._; useCustomFunction()", customSettings)
```

## Error Handling

`SparkILoop` includes comprehensive error handling for several scenarios:

- **Interpreter initialization errors**: Displayed, and processing stops
- **Spark initialization failures**: Runtime exceptions with detailed messages
- **Command processing errors**: Handled by the underlying Scala REPL infrastructure
- **Context class loader issues**: Scala 2.11-specific workarounds prevent classloader corruption
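Because initialization problems can surface as thrown exceptions while the processing loop itself reports failure through its Boolean return, a caller may want to guard against both. A minimal defensive sketch, assuming only the `process(settings)` signature documented above:

```scala
import org.apache.spark.repl.SparkILoop
import scala.tools.nsc.Settings
import scala.util.{Failure, Success, Try}

val settings = new Settings()
settings.usejavacp.value = true

// Guard against both failure modes: an exception thrown during Spark
// initialization, and a false return from the command processing loop.
Try(new SparkILoop().process(settings)) match {
  case Success(true)  => println("REPL session completed")
  case Success(false) => println("REPL reported an error")
  case Failure(e)     => println(s"REPL failed to initialize: ${e.getMessage}")
}
```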

## Thread Safety and Context

- SparkILoop instances are single-threaded and not thread-safe
- Each instance maintains its own interpreter state
- The global SparkContext/SparkSession is shared across REPL sessions
- Context class loader handling prevents issues in Scala 2.11 environments
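The split between per-instance interpreter state and the globally shared Spark context can be made concrete with the companion object's `run` utility. This is an illustrative sketch, not verbatim library behavior; the `spark` binding comes from the initialization commands described earlier:

```scala
import org.apache.spark.repl.SparkILoop

// Each run gets a fresh interpreter (its own definitions and history),
// but both bind 'spark' to the same globally shared SparkSession.
val first  = SparkILoop.run("val tag = \"session-1\"; spark.sparkContext.applicationId")
val second = SparkILoop.run("spark.sparkContext.applicationId") // same application ID

// 'tag' from the first run is NOT visible in the second run:
// interpreter state is per-instance; only the Spark context is shared.
```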