Interactive Scala shell for Apache Spark with distributed computing capabilities
—
Specialized interpreter and expression typing components for Scala 2.11 compatibility fixes and enhanced import handling.
Scala 2.11 specific interpreter with import handling fixes for REPL functionality.
/**
* Scala 2.11 specific interpreter that extends IMain with Spark-specific fixes
* Addresses import resolution bugs and flattened symbol issues in Scala 2.11
* @param settings Scala compiler settings
* @param out Output writer for interpreter results
*/
class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
/** Custom member handlers with Spark-specific import handling */
lazy val memberHandlers: MemberHandlers
/** Expression typer instance for symbol and type resolution */
object expressionTyper extends SparkExprTyper
/**
* Get symbol for a line of code
* Overrides IMain implementation with enhanced symbol resolution
* @param code Source code line
* @return Symbol representing the code line
*/
override def symbolOfLine(code: String): global.Symbol
/**
* Get type of an expression with optional silent mode
* @param expr Expression string to type
* @param silent Whether to suppress error messages
* @return Type of the expression
*/
override def typeOfExpression(expr: String, silent: Boolean): global.Type
/**
* Generate import code for request wrapper
* Custom implementation to handle Scala 2.11 import bugs
* @param wanted Set of names that need to be imported
* @param wrapper Request wrapper for import context
* @param definesClass Whether the request defines a class
* @param generousImports Whether to include generous imports
* @return Computed imports with header, code, and access path
*/
override def importsCode(
wanted: Set[Name],
wrapper: Request#Wrapper,
definesClass: Boolean,
generousImports: Boolean
): ComputedImports
}

Usage Examples:
import org.apache.spark.repl.SparkILoopInterpreter
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.JPrintWriter
import java.io.StringWriter
// Create Scala 2.11 compatible interpreter
val settings = new Settings
val output = new StringWriter
val interpreter = new SparkILoopInterpreter(settings, new JPrintWriter(output))
// Type expressions
val exprType = interpreter.typeOfExpression("List(1, 2, 3)", silent = false)
// Get symbols for code lines
val symbol = interpreter.symbolOfLine("val x = 42")

Expression typing trait that provides enhanced symbol and type resolution for REPL expressions.
/**
* Expression typing support for Spark REPL with proper phase management
* Extends ExprTyper with REPL-specific functionality
*/
trait SparkExprTyper extends ExprTyper {
/** Reference to the REPL interpreter */
val repl: SparkILoopInterpreter
/**
* Interpret code while preserving compiler phase
* Ensures phase consistency during expression evaluation
* @param code Scala code to interpret
* @return Interpretation result
*/
def doInterpret(code: String): IR.Result
/**
* Get symbol for a line of code with enhanced resolution
* Tries multiple strategies: expression, definition, error handling
* @param code Source code line
* @return Symbol representing the code, NoSymbol if not found
*/
override def symbolOfLine(code: String): Symbol
}

Usage Examples:
// SparkExprTyper is typically used through SparkILoopInterpreter
val interpreter = new SparkILoopInterpreter(settings, output)
val typer = interpreter.expressionTyper
// Interpret code with phase preservation
val result = typer.doInterpret("val data = sc.parallelize(1 to 10)")
// Get symbol with multiple resolution strategies
val symbol = typer.symbolOfLine("def process(x: Int) = x * 2")

Custom import handler that addresses Scala 2.11 import resolution issues.
/**
* Spark-specific import handler for Scala 2.11
* Fixes issues with flattened symbols and wildcard imports
*/
class SparkImportHandler(imp: Import) extends ImportHandler(imp) {
/** Override target type resolution for better symbol handling */
override def targetType: Type
/** Check if selector is individual import (not wildcard) */
def isIndividualImport(s: ImportSelector): Boolean
/** Check if selector is wildcard import */
def isWildcardImport(s: ImportSelector): Boolean
/** Get importable symbols with their rename mappings */
lazy val importableSymbolsWithRenames: List[(Symbol, Name)]
/** Get individual (non-wildcard) symbols */
override lazy val individualSymbols: List[Symbol]
/** Get wildcard-imported symbols */
override lazy val wildcardSymbols: List[Symbol]
}

The import handling system uses multiple strategies to resolve symbols:
- Target Type Resolution: targetType is overridden so the type of the imported expression is resolved reliably before symbols are extracted from it.
- Symbol Filtering: import selectors are classified as individual or wildcard imports, and importable symbols are collected together with their rename mappings.
- Import Code Generation: importsCode assembles the import header, code, and access path for each request wrapper, working around the Scala 2.11 import bugs.
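The selector classification step can be sketched standalone. Note this is a simplified illustration: ImportSelector below is a hypothetical stand-in for the compiler's scala.reflect.internal type, which carries Names and positions rather than strings.

```scala
// Simplified stand-in for the compiler's ImportSelector (illustrative only).
case class ImportSelector(name: String, rename: String)

val Wildcard = "_"

// Mirrors the intent of SparkImportHandler.isWildcardImport / isIndividualImport.
def isWildcardImport(s: ImportSelector): Boolean = s.name == Wildcard
def isIndividualImport(s: ImportSelector): Boolean = s.name != Wildcard

// Models `import scala.collection.{mutable => m, _}`
val selectors = List(ImportSelector("mutable", "m"), ImportSelector("_", "_"))
assert(selectors.count(isWildcardImport) == 1)
assert(selectors.filter(isIndividualImport).map(_.rename) == List("m"))
```

Individual selectors keep their rename mapping, which is what importableSymbolsWithRenames pairs up with the resolved symbols.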
Critical for maintaining consistent compilation state during interpretation:
def doInterpret(code: String): IR.Result = {
// Preserve current compiler phase
val savedPhase = phase
try interpretSynthetic(code)
finally phase = savedPhase
}

Multi-phase symbol resolution for robust code interpretation:
def symbolOfLine(code: String): Symbol = {
def asExpr(): Symbol = {
val name = freshInternalVarName()
val line = "def " + name + " = " + code
doInterpret(line) match {
case IR.Success =>
val sym0 = symbolOfTerm(name)
sym0.cloneSymbol setInfo exitingTyper(sym0.tpe_*.finalResultType)
case _ => NoSymbol
}
}
def asDefn(): Symbol = { /* definition resolution */ }
def asError(): Symbol = { /* error handling */ }
// Try strategies in order
beSilentDuring(asExpr()) orElse beSilentDuring(asDefn()) orElse asError()
}

Scala 2.11 has issues with flattened symbols in imports:
private def isFlattenedSymbol(sym: Symbol): Boolean =
sym.owner.isPackageClass &&
sym.name.containsName(nme.NAME_JOIN_STRING) &&
sym.owner.info.member(sym.name.take(
safeIndexOf(sym.name, nme.NAME_JOIN_STRING))) != NoSymbol

Enhanced string operations to handle edge cases:
private def safeIndexOf(name: Name, s: String): Int = fixIndexOf(name, pos(name, s))
private def fixIndexOf(name: Name, idx: Int): Int = if (idx == name.length) -1 else idx

Proper import context handling for nested scopes:
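The nested scopes arise from how the REPL chains lines together: each line's definitions are made visible to later lines by nesting wrapper objects and importing the previous wrapper's members. This is a hedged sketch of that wrapping scheme (the object names here are illustrative, not the actual generated code):

```scala
// Each REPL line lives in its own wrapper; later wrappers import earlier ones,
// producing the nested import scopes that importsCode must manage correctly.
object line1 { val x = 42 }
object line2 { import line1._; val y = x + 1 }
object line3 { import line2._; val z = y * 2 }
assert(line3.z == 86)
```

A buggy import computation in this chain makes earlier definitions silently unresolvable, which is the class of Scala 2.11 failure these overrides address.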
SparkILoopInterpreter is used conditionally based on Scala version:
override def createInterpreter(): Unit = {
if (isScala2_11) {
// Use SparkILoopInterpreter for Scala 2.11
val interpreterClass = Class.forName("org.apache.spark.repl.SparkILoopInterpreter")
intp = interpreterClass
.getDeclaredConstructor(Seq(classOf[Settings], classOf[JPrintWriter]): _*)
.newInstance(Seq(settings, out): _*)
.asInstanceOf[IMain]
} else {
// Use standard interpreter for newer Scala versions
super.createInterpreter()
}
}

private val isScala2_11 = versionNumberString.startsWith("2.11")

Robust error handling for interpretation failures:
case IR.Success => /* handle success */
case _ => NoSymbol // Handle interpretation failures

Graceful fallback for symbol resolution issues:
beSilentDuring(asExpr()) orElse beSilentDuring(asDefn()) orElse asError()

Safe handling of import resolution problems:
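One such problem is the flattened-symbol issue described above: nested-class symbols whose names contain the '$' join string must be kept out of generated imports. A minimal sketch of that filtering idea, using plain strings instead of compiler Symbols (the real isFlattenedSymbol also checks that the owner actually defines the name prefix):

```scala
// Illustrative only: flattened nested-class symbols carry the '$' join string
// in their names (e.g. Outer$Inner) and should not be imported directly.
def isFlattenedName(name: String): Boolean = name.indexOf("$") >= 0

val importable = List("Outer", "Outer$Inner", "helper")
assert(importable.filterNot(isFlattenedName) == List("Outer", "helper"))
```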
Install with Tessl CLI
npx tessl i tessl/maven-org-apache-spark--spark-repl-2-11