# Global Writer

Thread-safe writer with automatic step incrementing for concurrent logging across processes and threads. Simplifies multi-threaded experiment tracking by eliminating manual step management and providing process-safe singleton access.

## Capabilities

### Initialization

Creates a GlobalSummaryWriter instance with thread-safe configuration and automatic step management.

```python { .api }
class GlobalSummaryWriter:
    def __init__(
        self,
        logdir: Optional[str] = None,
        comment: str = '',
        purge_step: Optional[int] = None,
        max_queue: int = 10,
        flush_secs: int = 120,
        filename_suffix: str = '',
        write_to_disk: bool = True,
        log_dir: Optional[str] = None,
        coalesce_process: bool = True
    ):
        """
        Creates a GlobalSummaryWriter for thread-safe logging.

        Parameters:
        - logdir: Save directory location (default creates timestamped directory)
        - comment: Comment suffix for logdir
        - purge_step: Step to purge crashed events from
        - max_queue: Queue size for pending events (default: 10)
        - flush_secs: Seconds between flushes (default: 120)
        - filename_suffix: Suffix for event filenames
        - write_to_disk: Whether to write files to disk
        - log_dir: Deprecated alias for logdir
        - coalesce_process: Whether to coalesce events from same process
        """
```

### Auto-Incrementing Logging

Log data with automatic step incrementing, eliminating the need for manual step management in concurrent environments.

```python { .api }
def add_scalar(
    self,
    tag: str,
    scalar_value,
    walltime: Optional[float] = None
):
    """
    Add scalar data with automatic step incrementing.

    Parameters:
    - tag: Data identifier (e.g., 'Loss/Train')
    - scalar_value: Value to record (float, int, or 0-d tensor)
    - walltime: Timestamp (uses current time if None)
    """

def add_image(
    self,
    tag: str,
    img_tensor,
    walltime: Optional[float] = None,
    dataformats: str = 'CHW'
):
    """
    Add image data with automatic step incrementing.

    Parameters:
    - tag: Data identifier
    - img_tensor: Image tensor (torch.Tensor, numpy.ndarray, or PIL Image)
    - walltime: Timestamp (uses current time if None)
    - dataformats: Tensor format ('CHW', 'HWC', 'HW')
    """

def add_text(
    self,
    tag: str,
    text_string: str,
    walltime: Optional[float] = None
):
    """
    Add text data with automatic step incrementing.

    Parameters:
    - tag: Data identifier
    - text_string: Text content (supports markdown)
    - walltime: Timestamp (uses current time if None)
    """
```
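Conceptually, the auto-incrementing behavior can be pictured as a lock-guarded per-tag counter: each tag hands out its own monotonically increasing step, safely from many threads. The sketch below is a simplified stand-in using only the standard library, not the library's actual implementation; the `AutoStepCounter` name and `next_step` method are illustrative.

```python
import threading

class AutoStepCounter:
    """Illustrative per-tag step counter, safe to call from many threads."""

    def __init__(self):
        self._lock = threading.Lock()
        self._steps = {}  # tag -> next step to hand out

    def next_step(self, tag: str) -> int:
        # The lock makes read-increment-write atomic across threads
        with self._lock:
            step = self._steps.get(tag, 0)
            self._steps[tag] = step + 1
            return step

counter = AutoStepCounter()
print(counter.next_step('Loss/Train'))  # 0
print(counter.next_step('Loss/Train'))  # 1
print(counter.next_step('Accuracy'))    # 0 (each tag has its own sequence)
```

This is why callers never pass a `global_step` argument: the writer assigns it at logging time.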

### Writer Management

Control writer lifecycle and access the singleton instance across processes.

```python { .api }
def close(self):
    """
    Close the writer and flush all data to disk.
    """

@staticmethod
def getSummaryWriter() -> 'GlobalSummaryWriter':
    """
    Get the global writer singleton instance.
    Creates a new instance if none exists.

    Returns:
    GlobalSummaryWriter: The global writer instance
    """
```

## Usage Examples

### Multi-Threaded Logging

```python
import threading
from tensorboardX import GlobalSummaryWriter
import time
import random

def worker_function(worker_id):
    """Worker function that logs data from multiple threads."""
    writer = GlobalSummaryWriter.getSummaryWriter()

    for i in range(10):
        # Each worker logs independently with auto-incrementing steps
        loss = random.random()
        accuracy = random.random()

        writer.add_scalar(f'Worker_{worker_id}/Loss', loss)
        writer.add_scalar(f'Worker_{worker_id}/Accuracy', accuracy)

        time.sleep(0.1)

# Create multiple threads
threads = []
for worker_id in range(5):
    thread = threading.Thread(target=worker_function, args=(worker_id,))
    threads.append(thread)
    thread.start()

# Wait for all threads to complete
for thread in threads:
    thread.join()

# Close the global writer
GlobalSummaryWriter.getSummaryWriter().close()
```

### Multi-Process Logging

```python
import multiprocessing
from tensorboardX import GlobalSummaryWriter
import time
import random

def process_function(process_id):
    """Process function that logs data from multiple processes."""
    # Each process gets its own writer instance
    writer = GlobalSummaryWriter(
        logdir='logs/multiprocess',
        comment=f'_process_{process_id}',
        coalesce_process=True
    )

    for i in range(20):
        metrics = {
            'loss': random.random(),
            'accuracy': random.random(),
            'learning_rate': 0.01 * (0.9 ** i)
        }

        for metric_name, value in metrics.items():
            writer.add_scalar(f'Process_{process_id}/{metric_name}', value)

        time.sleep(0.05)

    writer.close()

if __name__ == '__main__':
    # Create multiple processes
    processes = []
    for process_id in range(3):
        process = multiprocessing.Process(target=process_function, args=(process_id,))
        processes.append(process)
        process.start()

    # Wait for all processes to complete
    for process in processes:
        process.join()
```

### Singleton Pattern Usage

```python
from tensorboardX import GlobalSummaryWriter

# Initialize global writer once
def initialize_logging():
    writer = GlobalSummaryWriter(
        logdir='logs/singleton_experiment',
        comment='_global_logging'
    )
    return writer

# Use anywhere in the codebase
def train_model():
    writer = GlobalSummaryWriter.getSummaryWriter()

    for epoch in range(100):
        loss = train_one_epoch()  # training step defined elsewhere
        writer.add_scalar('Training/Loss', loss)

def validate_model():
    writer = GlobalSummaryWriter.getSummaryWriter()

    accuracy = run_validation()  # validation step defined elsewhere
    writer.add_scalar('Validation/Accuracy', accuracy)

# Initialize once at the start
initialize_logging()

# Use throughout the application
train_model()
validate_model()

# Close when done
GlobalSummaryWriter.getSummaryWriter().close()
```

### Automatic Step Management

```python
from tensorboardX import GlobalSummaryWriter
import time

# Create writer with automatic step management
writer = GlobalSummaryWriter('logs/auto_steps')

# Log data without specifying steps - they auto-increment
for i in range(50):
    # Steps automatically increment: 0, 1, 2, 3, ...
    writer.add_scalar('Metric_A', i * 0.1)
    writer.add_scalar('Metric_B', i * 0.2)

    # Tags logged at different frequencies keep their own step sequences
    if i % 5 == 0:
        writer.add_scalar('Periodic_Metric', i)

    time.sleep(0.1)

writer.close()
```

## Thread Safety Features

- **Automatic Step Management**: Steps increment atomically across threads
- **Process Coalescing**: Events from the same process can be coalesced for efficiency
- **Singleton Access**: `getSummaryWriter()` provides thread-safe singleton access
- **Queue Management**: Thread-safe event queuing and flushing

## Configuration Options

### Process Coalescing

Control how events from multiple processes are handled:

271

272

```python

273

# Coalesce events from same process (default: True)

274

writer = GlobalSummaryWriter(coalesce_process=True)

275

276

# Keep separate event streams per process

277

writer = GlobalSummaryWriter(coalesce_process=False)

278

```

### Directory Organization

Organize logs across multiple processes and experiments:

```python
import os
from tensorboardX import GlobalSummaryWriter

# Base directory with process-specific comments
writer = GlobalSummaryWriter(
    logdir='logs/multi_process_experiment',
    comment=f'_process_{os.getpid()}'
)

# Automatic timestamped directories
writer = GlobalSummaryWriter()  # Creates runs/DATETIME_HOSTNAME
```
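For reference, a default run directory name of the `runs/DATETIME_HOSTNAME` shape can be approximated with the standard library. The exact timestamp format tensorboardX uses is an assumption here; this sketch only shows how such a name is composed.

```python
import os
import socket
from datetime import datetime

# Approximate a runs/DATETIME_HOSTNAME directory name.
# The timestamp format below is an assumption, not taken from the library.
timestamp = datetime.now().strftime('%b%d_%H-%M-%S')
logdir = os.path.join('runs', f'{timestamp}_{socket.gethostname()}')
print(logdir)  # e.g. runs/Jan01_12-00-00_myhost
```

Knowing the naming scheme helps when pointing TensorBoard at a directory tree that contains many automatically created runs.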