
# Auto-Completion


The Auto-Completion system provides interactive code completion through JLine integration, offering context-aware suggestions and symbol completion for an enhanced REPL experience.

## Capabilities

### SparkJLineCompletion Class

The main class providing auto-completion functionality integrated with the Spark interpreter.

```scala { .api }
/**
 * Auto-completion functionality for the REPL using JLine
 *
 * @param intp SparkIMain interpreter instance providing the completion context
 */
@DeveloperApi
class SparkJLineCompletion(val intp: SparkIMain) {
  /**
   * Completion verbosity level controlling the detail of completion output:
   * 0 = minimal, higher values = more verbose
   */
  var verbosity: Int

  /** Reset the verbosity level to zero (minimal output). */
  def resetVerbosity(): Unit

  /**
   * Create a JLineTabCompletion instance for performing completions.
   *
   * @return a configured JLineTabCompletion instance
   */
  def completer(): JLineTabCompletion
}
```

**Usage Examples:**

```scala
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}

// Create the interpreter and completion system
val interpreter = new SparkIMain()
interpreter.initializeSynchronous()

val completion = new SparkJLineCompletion(interpreter)

// Configure verbosity
completion.verbosity = 1    // more detailed completion output
completion.resetVerbosity() // reset to minimal output

// Get a completer instance
val completer = completion.completer()
```

### Completion Integration

The completion system integrates with JLine to provide interactive completion during REPL input.

```scala { .api }
/**
 * JLineTabCompletion provides the actual completion logic.
 * Extends ScalaCompleter from scala.tools.nsc.interpreter.
 */
class JLineTabCompletion extends ScalaCompleter {
  /**
   * Perform completion on the input buffer.
   *
   * @param buffer current input buffer
   * @param cursor cursor position in the buffer
   * @return Candidates object with completions and cursor position
   */
  def complete(buffer: String, cursor: Int): Candidates
}

/**
 * Result of a completion operation, from scala.tools.nsc.interpreter
 */
case class Candidates(
  cursor: Int,             // position where completion applies
  candidates: List[String] // possible completions
)
```

**Usage Examples:**

```scala
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}

val interpreter = new SparkIMain()
interpreter.initializeSynchronous()

// Define some variables to populate the completion context
interpreter.interpret("val myList = List(1, 2, 3)")
interpreter.interpret("val myString = \"hello world\"")
interpreter.interpret("import scala.util.Random")

// Create the completion system
val completion = new SparkJLineCompletion(interpreter)
val completer = completion.completer()

// Perform completion (simulated; in a real REPL this happens automatically)
val buffer = "myLi"
val cursor = buffer.length
val result = completer.complete(buffer, cursor)

println(s"Completions for '$buffer': ${result.candidates.mkString(", ")}")
```

### Helper Utilities

Utility functions for accessing completion-related information.

```scala { .api }
/**
 * Helper object for accessing Scala compiler settings
 */
@DeveloperApi
object SparkHelper {
  /**
   * Get the explicit parent class loader from the compiler settings.
   *
   * @param settings Scala compiler settings
   * @return the parent ClassLoader, if one is set
   */
  def explicitParentLoader(settings: Settings): Option[ClassLoader]
}
```

**Usage Example:**

```scala
import org.apache.spark.repl.SparkHelper
import scala.tools.nsc.Settings

val settings = new Settings()
val parentLoader = SparkHelper.explicitParentLoader(settings)
parentLoader.foreach(loader =>
  println(s"Parent classloader: ${loader.getClass.getName}")
)
```

## Completion Features

### Symbol Completion

The completion system provides suggestions for:

- **Variable names**: Completion of defined variables and values
- **Method names**: Completion of available methods on objects
- **Type names**: Completion of class and trait names
- **Package names**: Completion of imported and available packages
- **Keywords**: Scala language keywords and constructs
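Conceptually, each category of suggestion comes from matching the typed prefix against names the interpreter knows about. The sketch below is a simplified, hypothetical illustration of that prefix matching; the `PrefixCompletionSketch` object and its static symbol table are illustrative only and not part of the REPL API:

```scala
// Hypothetical sketch: prefix matching over a toy symbol table.
// The real completer draws candidates from the interpreter's compiler
// symbol table rather than a static list.
object PrefixCompletionSketch {
  // Toy symbol table standing in for the interpreter's defined names
  val symbols = List("myList", "myString", "map", "filter", "match", "main")

  // Return the symbols starting with the typed prefix, sorted for display
  def complete(prefix: String): List[String] =
    symbols.filter(_.startsWith(prefix)).sorted

  def main(args: Array[String]): Unit = {
    println(complete("my")) // List(myList, myString)
    println(complete("ma")) // List(main, map, match)
  }
}
```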

### Context-Aware Suggestions

The completer understands the current context and provides relevant suggestions:

```scala
// After typing "myList."
// the completer suggests: head, tail, map, filter, foreach, etc.

// After typing "import scala."
// the completer suggests: util, collection, concurrent, math, etc.

// After typing "val x: "
// the completer suggests available types: Int, String, List, etc.
```

### Import Completion

The system provides completion for import statements, covering both standard-library and user-defined packages.
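As a rough illustration of how segment-by-segment import completion could work, the hypothetical sketch below suggests the next path segment from a toy package index; neither `ImportCompletionSketch` nor the index reflects the actual implementation:

```scala
// Hypothetical sketch: given a dotted import prefix, suggest the next
// path segment from a known package index.
object ImportCompletionSketch {
  // Toy package index standing in for what the compiler knows
  val packages = List(
    "scala.util.Random",
    "scala.util.Try",
    "scala.collection.mutable",
    "scala.collection.immutable"
  )

  // Suggest the next segment for an "import " line ending at the cursor
  def complete(line: String): List[String] = {
    val prefix = line.stripPrefix("import ").trim
    packages
      .filter(_.startsWith(prefix))
      .map(_.drop(prefix.length).takeWhile(_ != '.'))
      .distinct
  }

  def main(args: Array[String]): Unit = {
    println(complete("import scala.util."))       // List(Random, Try)
    println(complete("import scala.collection.")) // List(mutable, immutable)
  }
}
```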

## Integration Patterns

### Setting Up Completion in a Custom REPL

```scala
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
import scala.tools.nsc.Settings

// Create an interpreter with custom settings
val settings = new Settings()
val interpreter = new SparkIMain(settings)
interpreter.initializeSynchronous()

// Set up the completion system
val completion = new SparkJLineCompletion(interpreter)
completion.verbosity = 2 // verbose completion output

// Get a completer for integration with the input system
val completer = completion.completer()

// In a real REPL this would be wired into a JLine ConsoleReader.
// Here is a simplified example of how completion might be used:
def performCompletion(input: String, position: Int): List[String] = {
  val result = completer.complete(input, position)
  result.candidates
}

// Example usage
val input = "List(1,2,3).ma"
val position = input.length
val suggestions = performCompletion(input, position)
println(s"Suggestions: ${suggestions.mkString(", ")}")
```

### Completion with Custom Context

```scala
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}

val interpreter = new SparkIMain()
interpreter.initializeSynchronous()

// Add custom context for completion
interpreter.interpret("""
  case class Person(name: String, age: Int, email: String)
  val people = List(
    Person("Alice", 30, "alice@example.com"),
    Person("Bob", 25, "bob@example.com")
  )
  import java.time.LocalDateTime
  import scala.util.{Random, Try, Success, Failure}
""")

// Create the completion system with the enriched context
val completion = new SparkJLineCompletion(interpreter)
val completer = completion.completer()

// Completion now includes:
// - the Person class and its methods
// - the people variable and List methods
// - LocalDateTime methods
// - Random, Try, Success, Failure from the imports

// Test completion on the defined context
val testCases = List(
  ("people.hea", "head"),
  ("Person(", "constructor parameters"),
  ("LocalDateTime.", "LocalDateTime methods"),
  ("Random.", "Random methods")
)

testCases.foreach { case (input, description) =>
  val result = completer.complete(input, input.length)
  println(s"$description: ${result.candidates.take(5).mkString(", ")}")
}
```

### Advanced Completion Configuration

```scala
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
import scala.tools.nsc.Settings

// Create an interpreter with advanced settings
val settings = new Settings()
settings.usejavacp.value = true       // use the Java classpath
settings.embeddedDefaults[SparkIMain] // apply REPL-specific defaults

val interpreter = new SparkIMain(settings)
interpreter.initializeSynchronous()

// Add external JARs to the completion context
import java.net.URL
val sparkSqlJar = new URL("file:///path/to/spark-sql.jar")
interpreter.addUrlsToClassPath(sparkSqlJar)

// Import external libraries
interpreter.interpret("import org.apache.spark.sql.{SparkSession, DataFrame}")
interpreter.interpret("import org.apache.spark.sql.functions._")

// Set up completion with the enhanced context
val completion = new SparkJLineCompletion(interpreter)
completion.verbosity = 1

val completer = completion.completer()

// Completion now includes Spark SQL classes and functions
val sparkCompletions = completer.complete("SparkSession.", 13)
println(s"SparkSession methods: ${sparkCompletions.candidates.take(10).mkString(", ")}")

val functionsCompletions = completer.complete("col(", 4)
println(s"Column functions: ${functionsCompletions.candidates.take(10).mkString(", ")}")
```