
# SQL Execution

SQL statement execution engine with result processing, schema management, and query lifecycle handling.

## Capabilities

### SparkSQLDriver

SQL execution driver that processes SQL commands through Spark SQL and returns results in Hive-compatible format.

```scala { .api }
/**
 * Driver for executing Spark SQL statements with Hive compatibility
 * @param context SQL context for execution (defaults to SparkSQLEnv.sqlContext)
 */
private[hive] class SparkSQLDriver(val context: SQLContext = SparkSQLEnv.sqlContext)
  extends Driver with Logging {

  /**
   * Initialize the driver (no-op in the current implementation)
   */
  override def init(): Unit

  /**
   * Execute a SQL command and return the response
   * @param command SQL command string
   * @return CommandProcessorResponse with result code and metadata
   */
  override def run(command: String): CommandProcessorResponse

  /**
   * Close the driver and clean up resources
   * @return Status code (0 for success)
   */
  override def close(): Int

  /**
   * Get query results as strings
   * @param res List to populate with result strings
   * @return true if results were available, false otherwise
   */
  override def getResults(res: JList[_]): Boolean

  /**
   * Get the schema of the last executed query
   * @return Schema object describing the result structure
   */
  override def getSchema: Schema

  /**
   * Destroy the driver and clean up all resources
   */
  override def destroy(): Unit
}
```

**Usage Example:**

```scala
import org.apache.spark.sql.hive.thriftserver.{SparkSQLDriver, SparkSQLEnv}
import java.util.ArrayList

// Initialize environment
SparkSQLEnv.init()

// Create and initialize driver
val driver = new SparkSQLDriver()
driver.init()

// Execute a query
val response = driver.run("SELECT name, age FROM users WHERE age > 21")

if (response.getResponseCode == 0) {
  // Get results
  val results = new ArrayList[String]()
  if (driver.getResults(results)) {
    results.forEach(println)
  }

  // Get schema information
  val schema = driver.getSchema
  println(s"Schema: ${schema.getFieldSchemas}")
} else {
  println(s"Query failed: ${response.getErrorMessage}")
}

// Cleanup
driver.close()
driver.destroy()
```
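Because `close()` and `destroy()` must run even when `run` or result retrieval throws, callers may prefer a loan-style wrapper around this lifecycle. The sketch below is generic and hypothetical (not part of the API): with a Spark classpath, `open` would construct and `init()` a `SparkSQLDriver`, and `release` would `close()` and `destroy()` it. A `StringBuilder` stands in for the driver so the snippet runs standalone:

```scala
// Generic loan pattern: acquire a resource, use it, always release it
def withResource[R, A](open: () => R)(release: R => Unit)(body: R => A): A = {
  val resource = open()
  try body(resource)
  finally release(resource) // runs even if body throws
}

// Demonstration with a StringBuilder standing in for the driver
var released = false
val out = withResource(() => new StringBuilder)(_ => released = true) { sb =>
  sb.append("ok")
  sb.toString
}
println(s"result=$out released=$released")
```

The same shape keeps driver cleanup in one place instead of repeating `close()`/`destroy()` at every call site.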

## Types

### Required Imports

```scala { .api }
import java.util.{ArrayList => JArrayList, Arrays, List => JList}
import org.apache.hadoop.hive.metastore.api.{FieldSchema, Schema}
import org.apache.hadoop.hive.ql.Driver
import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse
import org.apache.spark.sql.{AnalysisException, SQLContext}
```
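The `JList` alias above is the buffer type that `getResults` fills. The snippet below illustrates the same populate-then-read pattern using only `java.util` types; `fillResults` is a hypothetical stand-in for the driver call, and the tab-separated row format is an assumption modelled on Hive-style output, not part of this API listing:

```scala
import java.util.{ArrayList => JArrayList, List => JList}

// Hypothetical stand-in for driver.getResults(res): append rows, report availability
def fillResults(res: JList[String]): Boolean = {
  res.add("alice\t30") // illustrative tab-separated rows
  res.add("bob\t25")
  !res.isEmpty
}

val buffer = new JArrayList[String]()
if (fillResults(buffer)) buffer.forEach(row => println(row))
```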

### CommandProcessorResponse

Standard Hive command processor response structure:

```scala { .api }
// Response codes:
// 0 = Success
// 1 = Error (with error message and exception details)
class CommandProcessorResponse(
  responseCode: Int,
  errorMessage: String = null,
  sqlState: String = null,
  exception: Throwable = null
)
```
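The success/error convention above can be exercised without a Hive classpath by modelling the structure with a small stand-in; `StubResponse` and `describe` below are hypothetical, defined only to illustrate how callers branch on `CommandProcessorResponse`:

```scala
// Stand-in mirroring CommandProcessorResponse's fields (illustration only)
final case class StubResponse(
  responseCode: Int,
  errorMessage: String = null,
  sqlState: String = null,
  exception: Throwable = null
)

// Map a response to a one-line status, following the 0 = Success convention
def describe(resp: StubResponse): String =
  if (resp.responseCode == 0) "OK"
  else s"error ${resp.responseCode} (${resp.sqlState}): ${resp.errorMessage}"

println(describe(StubResponse(0)))
println(describe(StubResponse(1, "Table not found: users", "42S02")))
```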