The Auto-Completion system provides interactive code completion functionality using JLine integration. It offers context-aware suggestions and symbol completion for enhanced REPL user experience.
The main class, SparkJLineCompletion, provides auto-completion backed by a SparkIMain interpreter instance.
/**
* Auto-completion functionality for the REPL using JLine
* @param intp SparkIMain interpreter instance to provide completion context
*/
@DeveloperApi
class SparkJLineCompletion(val intp: SparkIMain) {
/**
* Completion verbosity level controlling detail of completion output
* 0 = minimal, higher values = more verbose
*/
var verbosity: Int
/**
* Reset verbosity level to zero (minimal output)
*/
def resetVerbosity(): Unit
/**
* Create a JLineTabCompletion instance for performing completions
* @return Configured JLineTabCompletion instance
*/
def completer(): JLineTabCompletion
}

Usage Examples:
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
// Create interpreter and completion system
val interpreter = new SparkIMain()
interpreter.initializeSynchronous()
val completion = new SparkJLineCompletion(interpreter)
// Configure verbosity
completion.verbosity = 1 // More detailed completion output
completion.resetVerbosity() // Reset to minimal output
// Get completer instance
val completer = completion.completer()

The completion system integrates with JLine to provide interactive completion during REPL input.
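The shape of that integration can be illustrated without a live interpreter. The sketch below is a simplified stand-in: where the real completer asks SparkIMain for the symbols in scope, this one matches an identifier prefix against a fixed symbol table, but it honors the same complete(buffer, cursor) contract and Candidates result shape.

```scala
// Simplified stand-in for the interpreter-backed completer: the real
// JLineTabCompletion queries SparkIMain for symbols in scope, whereas
// this sketch matches against a hand-written symbol table.
case class Candidates(cursor: Int, candidates: List[String])

class PrefixCompleter(symbols: List[String]) {
  // Complete the identifier ending at `cursor`, mirroring the
  // complete(buffer, cursor) contract of JLineTabCompletion.
  def complete(buffer: String, cursor: Int): Candidates = {
    // Walk back from the cursor to find where the identifier starts.
    val prefixStart =
      buffer.lastIndexWhere(c => !c.isLetterOrDigit && c != '_', cursor - 1) + 1
    val prefix = buffer.substring(prefixStart, cursor)
    // Report the position the completion applies at, plus the matches.
    Candidates(prefixStart, symbols.filter(_.startsWith(prefix)).sorted)
  }
}

val completer = new PrefixCompleter(List("myList", "myString", "myMap"))
val result = completer.complete("println(myL", 11)
println(result.candidates.mkString(", ")) // myList
```

The returned cursor (8 here, the start of "myL") tells the front end where the chosen candidate should be spliced in, which is exactly how JLine consumes the result.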
/**
* JLineTabCompletion provides the actual completion logic
* Extends ScalaCompleter from scala.tools.nsc.interpreter
*/
class JLineTabCompletion extends ScalaCompleter {
/**
* Perform completion on input buffer
* @param buffer Current input buffer
* @param cursor Cursor position in buffer
* @return Candidates object with completions and cursor position
*/
def complete(buffer: String, cursor: Int): Candidates
}
/**
* Result of completion operation from scala.tools.nsc.interpreter
*/
case class Candidates(
cursor: Int, // Position where completion applies
candidates: List[String] // Possible completions
)

Usage Examples:
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
val interpreter = new SparkIMain()
interpreter.initializeSynchronous()
// Define some variables for completion context
interpreter.interpret("val myList = List(1, 2, 3)")
interpreter.interpret("val myString = \"hello world\"")
interpreter.interpret("import scala.util.Random")
// Create completion system
val completion = new SparkJLineCompletion(interpreter)
val completer = completion.completer()
// Perform completion (simulated - in real REPL this is automatic)
val buffer = "myLi"
val cursor = buffer.length
val result = completer.complete(buffer, cursor)
println(s"Completions for '$buffer': ${result.candidates.mkString(", ")}")

Utility functions are also provided for accessing completion-related information.
/**
* Helper object for accessing Scala compiler settings
*/
@DeveloperApi
object SparkHelper {
/**
* Get explicit parent class loader from compiler settings
* @param settings Scala compiler settings
* @return Optional parent ClassLoader
*/
def explicitParentLoader(settings: Settings): Option[ClassLoader]
}

Usage Example:
import org.apache.spark.repl.SparkHelper
import scala.tools.nsc.Settings
val settings = new Settings()
val parentLoader = SparkHelper.explicitParentLoader(settings)
parentLoader.foreach(loader =>
println(s"Parent classloader: ${loader.getClass.getName}")
)

The completer understands the current context and provides relevant, context-aware suggestions:
// After typing "myList."
// Completer suggests: head, tail, map, filter, foreach, etc.
// After typing "import scala."
// Completer suggests: util, collection, concurrent, math, etc.
// After typing "val x: "
// Completer suggests available types: Int, String, List, etc.

The system provides completion for import statements, understanding both standard library and user-defined packages.
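The mechanics of import completion can be sketched standalone: split the import clause at the last dot, then prefix-match the final segment against the members of the resolved package. The packageMembers map below is a hypothetical stand-in for the compiler's real package lookup.

```scala
// Hypothetical stand-in for the compiler's package index; the real
// completer resolves package members through the interpreter's compiler.
val packageMembers: Map[String, List[String]] = Map(
  "scala"      -> List("collection", "concurrent", "math", "util"),
  "scala.util" -> List("Failure", "Random", "Success", "Try")
)

// Complete the segment after the last '.' in an import clause.
def completeImport(clause: String): List[String] = {
  val dot = clause.lastIndexOf('.')
  val (path, prefix) =
    if (dot < 0) ("", clause) else (clause.take(dot), clause.drop(dot + 1))
  packageMembers.getOrElse(path, Nil).filter(_.startsWith(prefix)).sorted
}

println(completeImport("scala.ut").mkString(", "))     // util
println(completeImport("scala.util.R").mkString(", ")) // Random
```

User-defined packages work the same way once the interpreter has compiled them: they become entries in the compiler's symbol tables, so the same path/prefix split applies.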
import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
import scala.tools.nsc.Settings
// Create interpreter with custom settings
val settings = new Settings()
val interpreter = new SparkIMain(settings)
interpreter.initializeSynchronous()
// Set up completion system
val completion = new SparkJLineCompletion(interpreter)
completion.verbosity = 2 // Verbose completion output
// Get completer for integration with input system
val completer = completion.completer()
// In a real REPL, this would be integrated with JLine ConsoleReader
// Here's a simplified example of how completion might be used:
def performCompletion(input: String, position: Int): List[String] = {
val result = completer.complete(input, position)
result.candidates
}
// Example usage
val input = "List(1,2,3).ma"
val position = input.length
val suggestions = performCompletion(input, position)
println(s"Suggestions: ${suggestions.mkString(", ")}")

import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
val interpreter = new SparkIMain()
interpreter.initializeSynchronous()
// Add custom context for completion
interpreter.interpret("""
case class Person(name: String, age: Int, email: String)
val people = List(
Person("Alice", 30, "alice@example.com"),
Person("Bob", 25, "bob@example.com")
)
import java.time.LocalDateTime
import scala.util.{Random, Try, Success, Failure}
""")
// Create completion system with enriched context
val completion = new SparkJLineCompletion(interpreter)
val completer = completion.completer()
// Now completion will include:
// - Person class and its methods
// - people variable and List methods
// - LocalDateTime methods
// - Random, Try, Success, Failure from imports
// Test completion on defined context
val testCases = List(
("people.hea", "head"),
("Person(", "constructor parameters"),
("LocalDateTime.", "LocalDateTime methods"),
("Random.", "Random methods")
)
testCases.foreach { case (input, description) =>
val result = completer.complete(input, input.length)
println(s"$description: ${result.candidates.take(5).mkString(", ")}")
}

import org.apache.spark.repl.{SparkIMain, SparkJLineCompletion}
import scala.tools.nsc.Settings
// Create interpreter with advanced settings
val settings = new Settings()
settings.usejavacp.value = true // Use Java classpath
settings.embeddedDefaults[SparkIMain] // Apply REPL-specific defaults
val interpreter = new SparkIMain(settings)
interpreter.initializeSynchronous()
// Add external JARs for completion context
import java.net.URL
val sparkSqlJar = new URL("file:///path/to/spark-sql.jar")
interpreter.addUrlsToClassPath(sparkSqlJar)
// Import external libraries
interpreter.interpret("import org.apache.spark.sql.{SparkSession, DataFrame}")
interpreter.interpret("import org.apache.spark.sql.functions._")
// Set up completion with enhanced context
val completion = new SparkJLineCompletion(interpreter)
completion.verbosity = 1
val completer = completion.completer()
// Now completion includes Spark SQL classes and functions
val sparkCompletions = completer.complete("SparkSession.", 13)
println(s"SparkSession methods: ${sparkCompletions.candidates.take(10).mkString(", ")}")
val functionsCompletions = completer.complete("col(", 4)
println(s"Column functions: ${functionsCompletions.candidates.take(10).mkString(", ")}")
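The examples above only obtain candidate lists; splicing a chosen candidate back into the input line is the front end's job. Below is a minimal sketch of that arithmetic, using a hypothetical applyCompletion helper (not part of the Spark API) together with the Candidates shape described earlier.

```scala
// Candidates as described earlier: the position the completion applies
// at, plus the possible completions.
case class Candidates(cursor: Int, candidates: List[String])

// Hypothetical front-end helper: keep the buffer up to the reported
// position, then splice in the chosen candidate. This assumes the input
// cursor sits at the end of the buffer, as it does during tab completion.
def applyCompletion(buffer: String, result: Candidates, choice: String): String =
  buffer.take(result.cursor) + choice

// "myLi" at the end of "println(myLi" completed to "myList":
val updated =
  applyCompletion("println(myLi", Candidates(8, List("myList")), "myList")
println(updated) // println(myList
```

In the real REPL this splice is performed by JLine's console reader, which is why complete returns the application position rather than a ready-made replacement string.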