tessl/maven-org-apache-spark--yarn-parent-2-10

YARN integration support for Apache Spark cluster computing, enabling Spark applications to run on Hadoop YARN clusters


docs/yarn-client.md

YARN Client Management

Core client functionality for submitting and managing Spark applications on YARN clusters: application lifecycle management, resource negotiation, and monitoring.

Capabilities

Client Class

Main YARN client implementation for application submission and management. Two variants exist: one targeting the stable YARN API (Hadoop 2.2+) and a deprecated one for the earlier alpha API.

/**
 * YARN client implementation for stable API (Hadoop 2.2+)
 * Handles application submission, monitoring, and resource management
 */
class Client(
  args: ClientArguments, 
  hadoopConf: Configuration, 
  sparkConf: SparkConf
) extends ClientBase {
  /**
   * Stop the YARN client and clean up resources
   */
  def stop(): Unit
}

/**
 * Alternative constructors for simplified client creation
 */
object Client {
  def apply(clientArgs: ClientArguments, spConf: SparkConf): Client
  def apply(clientArgs: ClientArguments): Client
}

Usage Examples:

import org.apache.spark.deploy.yarn.{Client, ClientArguments}
import org.apache.spark.SparkConf
import org.apache.hadoop.conf.Configuration

// Full constructor with explicit Hadoop configuration
val sparkConf = new SparkConf().setAppName("MyApp")
val hadoopConf = new Configuration()
val args = Array("--jar", "myapp.jar", "--class", "MyMainClass")
val clientArgs = new ClientArguments(args, sparkConf)
val client = new Client(clientArgs, hadoopConf, sparkConf)

// Simplified construction via the companion object's apply methods
// (the Hadoop Configuration is created internally)
val client2 = Client(clientArgs, sparkConf)

// Stop client when done
client.stop()

ClientBase Trait

Base trait providing core YARN client functionality shared across API versions.

/**
 * Base trait for YARN client functionality
 * Provides core application submission logic and resource management
 */
private[spark] trait ClientBase {
  // Application submission and monitoring capabilities
  // Resource allocation and management  
  // YARN application lifecycle management
}

/**
 * Companion object with shared client utilities
 */
private[spark] object ClientBase {
  // Shared client utility methods and constants
}

ClientArguments

Configuration and argument parsing for YARN client operations. Handles all command-line arguments and configuration options for application submission.

/**
 * Client configuration and argument parsing for YARN operations
 * Parses command-line arguments and manages application submission parameters
 */
private[spark] class ClientArguments(args: Array[String], sparkConf: SparkConf) {
  /** Additional JARs to distribute with the application */
  var addJars: String = null
  
  /** Files to distribute to executor working directories */
  var files: String = null
  
  /** Archives to distribute and extract on executors */  
  var archives: String = null
  
  /** User application JAR file */
  var userJar: String = null
  
  /** User application main class */
  var userClass: String = null
  
  /** Arguments to pass to user application */
  var userArgs: Seq[String] = Seq[String]()
  
  /** Executor memory in MB (default: 1024) */
  var executorMemory: Int = 1024
  
  /** Number of cores per executor (default: 1) */
  var executorCores: Int = 1
  
  /** Total number of executors to request (default: DEFAULT_NUMBER_EXECUTORS, 2) */
  var numExecutors: Int = DEFAULT_NUMBER_EXECUTORS
  
  /** YARN queue name (default: "default") */
  var amQueue: String = sparkConf.get("spark.yarn.queue", "default")
  
  /** ApplicationMaster memory in MB (default: 512) */
  var amMemory: Int = 512
  
  /** Application name (default: "Spark") */
  var appName: String = "Spark"
  
  /** Application priority (default: 0) */
  var priority: Int = 0
  
  /** Additional memory overhead for ApplicationMaster container */
  val amMemoryOverhead: Int = sparkConf.getInt("spark.yarn.driver.memoryOverhead", 
    math.max((MEMORY_OVERHEAD_FACTOR * amMemory).toInt, MEMORY_OVERHEAD_MIN))
  
  /** Additional memory overhead for executor containers */
  val executorMemoryOverhead: Int = sparkConf.getInt("spark.yarn.executor.memoryOverhead",
    math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN))
}

Usage Examples:

import org.apache.spark.deploy.yarn.ClientArguments
import org.apache.spark.SparkConf

// Basic argument configuration
val sparkConf = new SparkConf()
val args = Array(
  "--jar", "/path/to/myapp.jar",
  "--class", "com.example.MyMainClass",
  "--arg", "appArg1",
  "--arg", "appArg2",
  "--executor-memory", "2g",
  "--executor-cores", "2",
  "--num-executors", "4"
)

val clientArgs = new ClientArguments(args, sparkConf)

// Access parsed arguments
println(s"User JAR: ${clientArgs.userJar}")
println(s"Main class: ${clientArgs.userClass}")
println(s"Executor memory: ${clientArgs.executorMemory} MB")
println(s"Number of executors: ${clientArgs.numExecutors}")
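The two memoryOverhead fields above follow a max-of-fraction-and-floor rule. A minimal numeric sketch, assuming the Spark 1.x constants MEMORY_OVERHEAD_FACTOR = 0.07 and MEMORY_OVERHEAD_MIN = 384 (those values are not defined in this spec, so treat them as assumptions):

```scala
// Assumed Spark 1.x constants; not defined in this document.
val MEMORY_OVERHEAD_FACTOR = 0.07
val MEMORY_OVERHEAD_MIN = 384

// Overhead = max(factor * container memory, floor), as in the fields above.
def overhead(containerMemoryMb: Int): Int =
  math.max((MEMORY_OVERHEAD_FACTOR * containerMemoryMb).toInt, MEMORY_OVERHEAD_MIN)

println(overhead(512))   // 384: 0.07 * 512 = 35, so the floor wins
println(overhead(8192))  // 573: 0.07 * 8192 = 573.44, truncated
```

Under these assumed constants, small containers get a flat 384 MB of headroom, and only containers above roughly 5.4 GB scale at 7% of their size.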

Main Entry Points

Command-line entry points for YARN client operations.

/**
 * Main entry point for YARN client operations
 * Typically invoked by spark-submit in YARN mode
 */
object Client {
  def main(args: Array[String]): Unit
}
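For illustration, the argument array that spark-submit would hand to this entry point can also be built directly. The JAR path and class name below are hypothetical placeholders, and the actual submission call is left commented out because it requires a live YARN cluster:

```scala
// Arguments in the shape Client.main expects; paths and classes are placeholders.
val submitArgs = Array(
  "--jar", "/path/to/myapp.jar",
  "--class", "com.example.MyMainClass",
  "--num-executors", "4",
  "--queue", "default"
)

// Client.main(submitArgs)  // would submit the application to YARN
println(submitArgs.length)  // 8
```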

Configuration Options

Required Arguments

  • --jar: Path to user application JAR file
  • --class: Main class of user application

Optional Arguments

  • --arg <value>: Arguments to pass to user application (can be repeated)
  • --executor-memory <memory>: Memory per executor (e.g., "1g", "512m")
  • --executor-cores <cores>: CPU cores per executor
  • --num-executors <count>: Total number of executors
  • --queue <queue>: YARN queue name
  • --name <name>: Application name
  • --files <files>: Comma-separated list of files to distribute
  • --archives <archives>: Comma-separated list of archives to distribute
  • --addJars <jars>: Comma-separated list of additional JARs
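The `<memory>` values accept suffixed strings such as "512m" or "2g". A rough sketch of how such a string maps to megabytes (an illustration only; Spark's real parser lives in org.apache.spark.util.Utils and handles more cases):

```scala
// Illustrative memory-string parser; not Spark's actual implementation.
def memoryStringToMb(s: String): Int = {
  val lower = s.trim.toLowerCase
  if (lower.endsWith("g")) lower.dropRight(1).toInt * 1024
  else if (lower.endsWith("m")) lower.dropRight(1).toInt
  else lower.toInt // bare numbers treated as MB
}

println(memoryStringToMb("2g"))   // 2048
println(memoryStringToMb("512m")) // 512
```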

Environment Integration

The client reads Spark settings from a SparkConf and Hadoop/YARN settings from a Hadoop Configuration object, so existing Spark and Hadoop configuration files and conventions apply unchanged.

// Configuration integration example
val sparkConf = new SparkConf()
  .setAppName("MySparkApp")
  .set("spark.executor.memory", "2g")
  .set("spark.yarn.queue", "production")

val hadoopConf = new Configuration()
hadoopConf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle")

val client = new Client(clientArgs, hadoopConf, sparkConf)

Install with Tessl CLI

npx tessl i tessl/maven-org-apache-spark--yarn-parent-2-10
