# Raw Code Metrics

Analysis of basic code metrics including lines of code (LOC), logical lines of code (LLOC), source lines of code (SLOC), comments, blank lines, and multi-line strings. Provides comprehensive code statistics for understanding codebase structure and composition.

## Capabilities

### Main Analysis Function

Primary function for analyzing raw code metrics from source code.

```python { .api }
def analyze(source):
    """
    Analyze raw metrics from Python source code.

    Computes comprehensive code statistics including:
    - Total lines of code (LOC)
    - Logical lines of code (LLOC): executable statements
    - Source lines of code (SLOC): non-blank, non-comment lines
    - Comment lines
    - Multi-line strings (typically docstrings)
    - Blank lines
    - Single-line comments

    Parameters:
    - source (str): Python source code to analyze

    Returns:
    Module: Named tuple with fields (loc, lloc, sloc, comments, multi, blank, single_comments)
    """
```

### Token Processing Utilities

Low-level utilities for processing Python tokens during analysis.

```python { .api }
def _generate(code):
    """
    Pass code into tokenize.generate_tokens and convert the result to a list.

    Parameters:
    - code (str): Python source code

    Returns:
    list: List of token tuples from tokenize.generate_tokens
    """

def _fewer_tokens(tokens, remove):
    """
    Process tokenize output, removing the specified token types.

    Parameters:
    - tokens (list): List of token tuples
    - remove (set): Set of token types to remove

    Yields:
    tuple: Token tuples whose type is not in the remove set
    """

def _find(tokens, token, value):
    """
    Find the position of the last token with the specified (token, value) pair.

    Parameters:
    - tokens (list): List of token tuples
    - token (int): Token type to find
    - value (str): Token value to find

    Returns:
    int: Position of the rightmost matching token

    Raises:
    ValueError: If the (token, value) pair is not found
    """

def _logical(tokens):
    """
    Find the number of logical lines from a token list.

    Parameters:
    - tokens (list): List of token tuples

    Returns:
    int: Number of logical lines (executable statements)
    """
```

### Token Type Constants

Constants for token types used in analysis, re-exported from Python's tokenize module.

```python { .api }
# Token type constants
OP = tokenize.OP             # Operators
COMMENT = tokenize.COMMENT   # Comment tokens
NL = tokenize.NL             # Newline tokens
NEWLINE = tokenize.NEWLINE   # Statement-ending newlines
EM = tokenize.ENDMARKER      # End-of-file marker

# Helper for extracting token numbers
TOKEN_NUMBER = operator.itemgetter(0)  # Extract the token type from a token tuple
```
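
Since these constants come straight from the standard library's `tokenize` module, their behavior can be sketched without radon installed. The snippet below is a stdlib-only illustration (the sample code string is made up), not part of radon's API:

```python
import io
import operator
import tokenize
from collections import Counter

# Same shape as radon.raw's TOKEN_NUMBER helper: pull the token type
# out of a (type, string, start, end, line) tuple.
TOKEN_NUMBER = operator.itemgetter(0)

code = "x = 1  # set x\n"
tokens = list(tokenize.generate_tokens(io.StringIO(code).readline))

# Count tokens by type using the constants radon re-exports
counts = Counter(TOKEN_NUMBER(tok) for tok in tokens)
print(f"comments: {counts[tokenize.COMMENT]}")          # one comment token
print(f"logical newlines: {counts[tokenize.NEWLINE]}")  # one statement-ending newline
```

Note that `NL` and `NEWLINE` are distinct: `NEWLINE` ends a logical statement, while `NL` marks non-logical line breaks (for example, blank lines or continuations inside brackets).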

### Result Data Type

Named tuple containing all raw metrics results.

```python { .api }
Module = namedtuple('Module', [
    'loc',              # Lines of Code: total lines in the file
    'lloc',             # Logical Lines of Code: executable statements
    'sloc',             # Source Lines of Code: non-blank, non-comment lines
    'comments',         # Comment lines (single and multi-line comments)
    'multi',            # Multi-line strings (typically docstrings)
    'blank',            # Blank lines (whitespace-only or empty lines)
    'single_comments',  # Single-line comments or docstrings
])
```
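
Because the result is a plain `namedtuple`, it supports attribute access, unpacking, and conversion to a dict. The stand-in below mirrors the field layout with made-up metric values, purely for illustration:

```python
from collections import namedtuple

# Stand-in with the same field layout as radon's Module result type;
# the metric values here are invented for illustration only.
Module = namedtuple(
    'Module',
    ['loc', 'lloc', 'sloc', 'comments', 'multi', 'blank', 'single_comments']
)
m = Module(loc=15, lloc=4, sloc=8, comments=2, multi=1, blank=3, single_comments=1)

print(m.sloc)       # attribute access -> 8
print(m._asdict())  # dict form, handy for JSON serialization
```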

## Usage Examples

### Basic Raw Metrics Analysis

```python
from radon.raw import analyze

code = '''
"""
This is a module docstring.
It spans multiple lines.
"""

def calculate_sum(a, b):
    """Calculate the sum of two numbers."""  # Single-line docstring
    # This is a comment
    result = a + b  # Another comment
    return result

# Standalone comment
def empty_function():
    pass

'''

# Analyze the code
metrics = analyze(code)

print(f"Total lines (LOC): {metrics.loc}")
print(f"Logical lines (LLOC): {metrics.lloc}")
print(f"Source lines (SLOC): {metrics.sloc}")
print(f"Comment lines: {metrics.comments}")
print(f"Multi-line strings: {metrics.multi}")
print(f"Blank lines: {metrics.blank}")
print(f"Single comments: {metrics.single_comments}")

# Output:
# Total lines (LOC): 15
# Logical lines (LLOC): 4
# Source lines (SLOC): 8
# Comment lines: 2
# Multi-line strings: 1
# Blank lines: 3
# Single comments: 1
```

### Comparing Code Quality Metrics

```python
from radon.raw import analyze

# Analyze well-documented code
documented_code = '''
"""
Module for mathematical operations.
Provides basic arithmetic functions.
"""

def add(a, b):
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        Sum of a and b
    """
    return a + b
'''

# Analyze poorly documented code
undocumented_code = '''
def add(a,b):
    return a+b
def sub(a,b):
    return a-b
def mul(a,b):
    return a*b
'''

doc_metrics = analyze(documented_code)
undoc_metrics = analyze(undocumented_code)

print("Well-documented code:")
print(f"  SLOC: {doc_metrics.sloc}, Comments: {doc_metrics.comments}")
print(f"  Comment ratio: {doc_metrics.comments / doc_metrics.sloc:.2%}")

print("Poorly documented code:")
print(f"  SLOC: {undoc_metrics.sloc}, Comments: {undoc_metrics.comments}")
print(f"  Comment ratio: {undoc_metrics.comments / undoc_metrics.sloc:.2%}")
```

### Processing Multiple Files

```python
from radon.raw import analyze
import os

def analyze_directory(path):
    """Analyze raw metrics for all Python files in a directory."""
    total_metrics = {
        'loc': 0, 'lloc': 0, 'sloc': 0,
        'comments': 0, 'multi': 0, 'blank': 0, 'single_comments': 0
    }

    for filename in os.listdir(path):
        if filename.endswith('.py'):
            with open(os.path.join(path, filename)) as f:
                code = f.read()
            metrics = analyze(code)

            # Accumulate metrics
            for field in total_metrics:
                total_metrics[field] += getattr(metrics, field)

            print(f"{filename}: {metrics.sloc} SLOC, {metrics.comments} comments")

    print(f"Total: {total_metrics['sloc']} SLOC, {total_metrics['comments']} comments")
    return total_metrics

# Usage
# total = analyze_directory('./src')
```

### Token-Level Analysis

```python
from radon.raw import _generate, _fewer_tokens, COMMENT, NL

code = '''
def example():  # Comment
    x = 1
    # Another comment
    return x
'''

# Get all tokens
tokens = _generate(code)
print(f"Total tokens: {len(tokens)}")

# Filter out comments and newlines
filtered = list(_fewer_tokens(tokens, {COMMENT, NL}))
print(f"Tokens without comments/newlines: {len(filtered)}")

# Examine token types
for i, token in enumerate(tokens[:10]):  # First 10 tokens
    token_type, value, start, end, line = token
    print(f"Token {i}: type={token_type}, value='{value}', line={start[0]}")
```

## Integration with Other Modules

Raw metrics integrate with other radon analysis capabilities:

### Maintainability Index Calculation

```python
from radon.raw import analyze
from radon.metrics import mi_compute
from radon.complexity import cc_visit, average_complexity

code = '''
def complex_function(a, b, c):
    if a > 0:
        if b > 0:
            return a + b + c
        else:
            return a + c
    else:
        return c
'''

# Get raw metrics
raw = analyze(code)

# Get complexity
blocks = cc_visit(code)
avg_complexity = average_complexity(blocks)

# Calculate maintainability index (simplified)
# Note: this is a simplified example; the actual MI calculation (mi_compute)
# also requires the Halstead volume
print(f"SLOC: {raw.sloc}")
print(f"Comments: {raw.comments}")
print(f"Average Complexity: {avg_complexity}")
```

## Error Handling

The raw metrics module handles various edge cases:

- **Empty code**: Returns a `Module` with all zero values
- **Invalid Python syntax**: Token parsing may fail for malformed code
- **Encoding issues**: Assumes UTF-8 encoding for source analysis
- **Token search failures**: `_find()` raises `ValueError` for missing tokens
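
The malformed-syntax case can be demonstrated with the standard-library tokenizer that `analyze()` builds on. This is a stdlib-only sketch of the failure mode (the `bad_code` string is made up), not radon's own error handling:

```python
import io
import tokenize

# An unclosed parenthesis leaves the tokenizer expecting more input,
# so tokenization fails before any metrics could be computed.
bad_code = "result = (1 +\n"

try:
    list(tokenize.generate_tokens(io.StringIO(bad_code).readline))
    failed = False
except (tokenize.TokenError, SyntaxError):  # SyntaxError covers newer Pythons
    failed = True

print(f"tokenization failed: {failed}")
```

Callers analyzing untrusted or work-in-progress code may want a guard like this around their own use of `analyze()`.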

## Integration with CLI

The raw metrics module integrates with radon's command-line interface:

```bash
# Command-line equivalent of analyze()
radon raw path/to/code.py

# Summary output
radon raw --summary path/to/code.py

# JSON output for programmatic processing
radon raw --json path/to/code.py
```