# CLI Operations

Command-line interface functionality providing interactive SQL execution and Hive-compatible CLI operations.

## Capabilities

### SparkSQLCLIDriver Object

Main object providing the command-line interface for Spark SQL, similar to the Hive CLI.

```scala { .api }
/**
 * Main object for Spark SQL CLI operations.
 * Note: this code doesn't support remote connections in Hive 1.2+.
 */
private[hive] object SparkSQLCLIDriver extends Logging {
  /**
   * Main method for CLI execution.
   * @param args command-line arguments
   */
  def main(args: Array[String]): Unit

  /**
   * Install an interrupt callback to cancel all Spark jobs.
   * Registers a signal handler for Ctrl+C detection.
   */
  def installSignalHandler(): Unit

  /**
   * Check whether the CLI is running in remote mode.
   * @param state CLI session state
   * @return true if remote mode, false otherwise
   */
  def isRemoteMode(state: CliSessionState): Boolean
}
```

**Usage Example:**

```bash
# Start the Spark SQL CLI
spark-submit --class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver \
  --master local[2] \
  spark-hive-thriftserver_2.12-2.4.8.jar

# The CLI starts with an interactive prompt
spark-sql> SELECT * FROM my_table LIMIT 10;
spark-sql> SHOW DATABASES;
spark-sql> USE my_database;
```

### SparkSQLCLIDriver Class

CLI driver class that extends Hive's CliDriver with Spark SQL functionality.

```scala { .api }
/**
 * Spark SQL CLI driver extending Hive's CliDriver.
 */
private[hive] class SparkSQLCLIDriver extends CliDriver with Logging {
  /**
   * Set Hive variables in the SQL context.
   * @param hiveVariables map of variable names to values
   */
  def setHiveVariables(hiveVariables: java.util.Map[String, String]): Unit

  /**
   * Print Spark master and application ID information.
   */
  def printMasterAndAppId(): Unit

  /**
   * Process a command and return the result code.
   * @param cmd command string to process
   * @return result code (0 for success, non-zero for error)
   */
  def processCmd(cmd: String): Int
}
```

### Command Processing

The CLI supports various types of commands:

**SQL Commands:**

```sql
-- Standard SQL queries
SELECT column1, column2 FROM table_name WHERE condition;

-- DDL operations
CREATE TABLE test (id INT, name STRING);
DROP TABLE IF EXISTS test;

-- DML operations
INSERT INTO table_name VALUES (1, 'example');
UPDATE table_name SET column1 = value WHERE condition;
```

**CLI-Specific Commands:**

```sql
-- Exit commands
quit;
exit;

-- File execution
source /path/to/script.sql;

-- System commands (prefixed with !)
!ls -la;
!pwd;
```
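
The routing among these command kinds can be pictured with a short stand-alone sketch. This is a hypothetical illustration only — `CliCommand`, `CommandDispatcher`, and `classify` are names invented here, not part of Spark; the real routing happens inside Hive's `CliDriver` and `SparkSQLCLIDriver.processCmd`:

```scala
// Hypothetical sketch of CLI command routing; not part of Spark's API.
sealed trait CliCommand
case object Quit extends CliCommand
final case class SourceFile(path: String) extends CliCommand
final case class ShellCommand(cmd: String) extends CliCommand
final case class SqlStatement(sql: String) extends CliCommand

object CommandDispatcher {
  /** Classify one trimmed command, stripping the trailing semicolon. */
  def classify(raw: String): CliCommand = {
    val cmd = raw.trim.stripSuffix(";").trim
    cmd.toLowerCase match {
      case "quit" | "exit"              => Quit
      case c if c.startsWith("source ") => SourceFile(cmd.drop("source ".length).trim)
      case c if c.startsWith("!")       => ShellCommand(cmd.drop(1).trim)
      case _                            => SqlStatement(cmd)
    }
  }
}
```

In the real CLI, `!` commands are handed to the operating-system shell and `source` replays a file through the same command processor; everything else falls through to Spark SQL.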


**Configuration Commands:**

```sql
-- Set configuration properties
SET spark.sql.adaptive.enabled=true;
SET hive.exec.dynamic.partition=true;

-- Reset all properties set during the session
RESET;

-- Show configuration
SET;
SET spark.sql.adaptive.enabled;
```
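
Conceptually, `SET` writes a session-scoped override on top of the defaults and `RESET` discards those overrides. A toy model of that behavior (the `SessionConf` class is invented here for illustration; the real logic lives in Spark's `SQLConf` and Hive's `HiveConf`):

```scala
// Toy model of SET / RESET semantics; not Spark's actual implementation.
final class SessionConf(defaults: Map[String, String]) {
  private var overrides = Map.empty[String, String]

  /** SET key=value: record a session-scoped override. */
  def set(key: String, value: String): Unit = overrides += key -> value

  /** RESET: drop all session overrides, falling back to the defaults. */
  def reset(): Unit = overrides = Map.empty

  /** SET key (lookup): a session override wins over the default. */
  def get(key: String): Option[String] =
    overrides.get(key).orElse(defaults.get(key))
}
```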


### Command Line Arguments

The CLI supports the standard Hive CLI arguments:

**Database Selection:**

```bash
spark-submit --class SparkSQLCLIDriver ... --database my_database
```

**File Execution:**

```bash
spark-submit --class SparkSQLCLIDriver ... -f script.sql
```

**Inline Execution:**

```bash
spark-submit --class SparkSQLCLIDriver ... -e "SELECT COUNT(*) FROM my_table"
```

**Configuration Options:**

```bash
spark-submit --class SparkSQLCLIDriver ... \
  --hiveconf hive.exec.dynamic.partition=true \
  --hivevar myvar=myvalue
```

**Silent Mode:**

```bash
spark-submit --class SparkSQLCLIDriver ... -S
```
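
How these flags combine can be modeled with a small recursive parser. This is purely illustrative — the real CLI delegates argument parsing to Hive's `OptionsProcessor`, and the `CliArgs` type below is invented here:

```scala
// Illustrative model of the CLI flags above; the real parsing is done by
// Hive's OptionsProcessor, so treat this only as a conceptual sketch.
final case class CliArgs(
    database: Option[String] = None,
    file: Option[String] = None,
    query: Option[String] = None,
    silent: Boolean = false)

object CliArgs {
  def parse(args: List[String], acc: CliArgs = CliArgs()): CliArgs = args match {
    case "--database" :: db :: rest => parse(rest, acc.copy(database = Some(db)))
    case "-f" :: path :: rest       => parse(rest, acc.copy(file = Some(path)))
    case "-e" :: sql :: rest        => parse(rest, acc.copy(query = Some(sql)))
    case "-S" :: rest               => parse(rest, acc.copy(silent = true))
    case _ :: rest                  => parse(rest, acc) // skip unrecognized args
    case Nil                        => acc
  }
}
```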


### Interactive Features

**Command History:**

- Maintains command history in `~/.hivehistory`
- Supports command-line editing with JLine
- History is persisted across sessions

**Auto-completion:**

- Tab completion for SQL keywords
- Table and column name completion
- Function name completion

**Multi-line Commands:**

- Commands can span multiple lines
- Each command is terminated with a semicolon
- Supports line continuation with a backslash
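
The semicolon-terminated model above can be illustrated with a small buffering function. `CommandBuffer` is a name invented for this sketch; the real CLI does this inside its read loop and additionally handles quoting and comments, which this sketch ignores:

```scala
// Hypothetical sketch: buffer input lines until one ends with ';', then
// emit the accumulated text as a single command. Quoting/comments ignored.
object CommandBuffer {
  def collect(lines: Seq[String]): Seq[String] = {
    val (done, pending) = lines.foldLeft((Vector.empty[String], "")) {
      case ((cmds, buf), line) =>
        val joined = if (buf.isEmpty) line.trim else buf + " " + line.trim
        if (joined.endsWith(";")) (cmds :+ joined.stripSuffix(";").trim, "")
        else (cmds, joined)
    }
    if (pending.trim.nonEmpty) done :+ pending.trim else done
  }
}
```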


**Prompt Customization:**

```
spark-sql> USE my_database;
spark-sql (my_database)> SELECT * FROM my_table;
```
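
The prompt change shown above can be sketched as a tiny pure function. `Prompt.render` is invented for illustration; the real prompt is assembled by the Hive CLI layer:

```scala
// Hypothetical sketch of the prompt shown above: append the current
// database in parentheses once one has been selected.
object Prompt {
  def render(currentDb: Option[String]): String = currentDb match {
    case Some(db) => s"spark-sql ($db)> "
    case None     => "spark-sql> "
  }
}
```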


### Integration with Spark

**Spark Context Integration:**

```scala
// The CLI automatically initializes SparkSQLEnv
SparkSQLEnv.init()

// Master and application info are displayed on startup
val master = SparkSQLEnv.sparkContext.master
val appId = SparkSQLEnv.sparkContext.applicationId
console.printInfo(s"Spark master: $master, Application Id: $appId")
```

**Job Management:**

- Sets job descriptions for SQL queries
- Supports job cancellation with Ctrl+C
- Integrates with Spark's job execution tracking

**Configuration Management:**

- Respects Spark configuration properties
- Supports Hive configuration compatibility
- Allows dynamic property updates during a session

### Error Handling and Output

**Query Results:**

- Formatted tabular output for SELECT queries
- Configurable result display options
- Support for large result sets

**Error Reporting:**

```
Error in query: Table or view not found: non_existent_table
Analysis exception: cannot resolve 'invalid_column'
```

**Execution Timing:**

```
Time taken: 2.45 seconds, Fetched 1000 row(s)
```
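
The timing line follows a fixed format, which a one-line helper can reproduce (the `Timing` object is hypothetical and shown only to document the format):

```scala
import java.util.Locale

// Hypothetical helper reproducing the timing line's format.
object Timing {
  def format(seconds: Double, rows: Long): String =
    String.format(Locale.ROOT, "Time taken: %.2f seconds, Fetched %d row(s)",
      Double.box(seconds), Long.box(rows))
}
```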


**Verbose Mode:**

- Shows query execution plans
- Displays detailed timing information
- Provides debug output for troubleshooting

### Session State Management

The CLI maintains session state, including:

- Current database context
- Configuration properties set during the session
- Hive variables and substitutions
- Connection settings and credentials
- Temporary views and functions

**Session Configuration:**

```scala
// Session state is maintained per CLI instance
val sessionState = SessionState.get().asInstanceOf[CliSessionState]

// Database context
if (sessionState.database != null) {
  SparkSQLEnv.sqlContext.sessionState.catalog.setCurrentDatabase(sessionState.database)
}

// Variable substitution
cli.setHiveVariables(oproc.getHiveVariables)
```