# Pagination

The Langfuse Java client provides consistent pagination across all list endpoints using the `MetaResponse` type. This enables efficient retrieval of large result sets.

## Pagination Metadata

### MetaResponse

Standard pagination metadata included in all paginated responses.

**Import:** `import com.langfuse.client.resources.utils.pagination.types.MetaResponse;`

```java { .api }
/**
 * Pagination metadata for list responses
 */
public final class MetaResponse {
    /**
     * Current page number (1-based)
     */
    int getPage();

    /**
     * Number of items per page
     */
    int getLimit();

    /**
     * Total number of items given current filters
     */
    int getTotalItems();

    /**
     * Total number of pages given current limit
     */
    int getTotalPages();

    static Builder builder();
}
```

## Paginated Response Types

All list endpoints return responses with `data` and `meta` fields:

```java
PaginatedXyz {
    List<Xyz> getData();    // Current page of items
    MetaResponse getMeta(); // Pagination metadata
}
```

Examples:

- `PaginatedDatasets`
- `PaginatedSessions`
- `PaginatedModels`
- `PaginatedAnnotationQueues`
- `Traces` (with meta)
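
Each of these exposes the same `getData()`/`getMeta()` pair. As a quick sketch (using the `GetDatasetsRequest` and `client.datasets().list(...)` call shown in the examples below):

```java
PaginatedDatasets datasets = client.datasets().list(
    GetDatasetsRequest.builder().page(1).limit(50).build());

List<Dataset> items = datasets.getData(); // current page of datasets
MetaResponse meta = datasets.getMeta();   // page, limit, totalItems, totalPages
```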

## Pagination Parameters

List requests typically support these parameters:

```java
XyzRequest.builder()
    .page(1)   // Page number (default: 1, 1-based)
    .limit(50) // Items per page (default: varies by endpoint, usually 50)
    .build();
```

## Basic Pagination Example

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.datasets.types.*;

LangfuseClient client = LangfuseClient.builder()
    .url("https://cloud.langfuse.com")
    .credentials("pk-lf-...", "sk-lf-...")
    .build();

// Get first page
GetDatasetsRequest request = GetDatasetsRequest.builder()
    .page(1)
    .limit(10)
    .build();

PaginatedDatasets firstPage = client.datasets().list(request);

System.out.println("Total items: " + firstPage.getMeta().getTotalItems());
System.out.println("Total pages: " + firstPage.getMeta().getTotalPages());
System.out.println("Current page: " + firstPage.getMeta().getPage());
System.out.println("Items on this page: " + firstPage.getData().size());
```

## Iterating Through Pages

### Sequential Pagination

```java
int currentPage = 1;
int pageSize = 50;

while (true) {
    GetTracesRequest request = GetTracesRequest.builder()
        .page(currentPage)
        .limit(pageSize)
        .build();

    Traces traces = client.trace().list(request);

    // Process current page
    for (Trace trace : traces.getData()) {
        System.out.println("Trace: " + trace.getId());
    }

    // Check if more pages exist
    if (currentPage >= traces.getMeta().getTotalPages()) {
        break;
    }

    currentPage++;
}
```

### Pagination Helper

```java
import com.langfuse.client.resources.utils.pagination.types.MetaResponse;

import java.util.List;
import java.util.ArrayList;
import java.util.function.Function;

public class PaginationHelper {

    /**
     * Fetch all items across all pages
     */
    public static <T, R> List<T> fetchAll(
            Function<Integer, R> fetcher,
            Function<R, List<T>> dataExtractor,
            Function<R, MetaResponse> metaExtractor
    ) {
        List<T> allItems = new ArrayList<>();
        int currentPage = 1;

        while (true) {
            R response = fetcher.apply(currentPage);
            List<T> pageData = dataExtractor.apply(response);
            MetaResponse meta = metaExtractor.apply(response);

            allItems.addAll(pageData);

            if (currentPage >= meta.getTotalPages()) {
                break;
            }

            currentPage++;
        }

        return allItems;
    }
}

// Usage
List<Dataset> allDatasets = PaginationHelper.fetchAll(
    page -> {
        GetDatasetsRequest req = GetDatasetsRequest.builder()
            .page(page)
            .limit(100)
            .build();
        return client.datasets().list(req);
    },
    PaginatedDatasets::getData,
    PaginatedDatasets::getMeta
);
```

### Stream-Based Pagination

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.trace.types.*;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class PaginatedIterator<T> implements Iterator<T> {
    private final LangfuseClient client;
    private final int pageSize;
    private int currentPage = 1;
    private int totalPages = Integer.MAX_VALUE;
    private List<T> currentData = new ArrayList<>();
    private int currentIndex = 0;

    public PaginatedIterator(LangfuseClient client, int pageSize) {
        this.client = client;
        this.pageSize = pageSize;
        fetchNextPage();
    }

    @Override
    public boolean hasNext() {
        if (currentIndex < currentData.size()) {
            return true;
        }

        if (currentPage >= totalPages) {
            return false;
        }

        currentPage++;
        fetchNextPage();
        return currentIndex < currentData.size();
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return currentData.get(currentIndex++);
    }

    @SuppressWarnings("unchecked")
    private void fetchNextPage() {
        GetTracesRequest request = GetTracesRequest.builder()
            .page(currentPage)
            .limit(pageSize)
            .build();

        Traces response = client.trace().list(request);
        currentData = (List<T>) response.getData();
        currentIndex = 0;
        totalPages = response.getMeta().getTotalPages();
    }

    public Stream<T> stream() {
        return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(this, Spliterator.ORDERED),
            false
        );
    }
}

// Usage
PaginatedIterator<Trace> iterator = new PaginatedIterator<>(client, 100);
iterator.stream()
    .filter(trace -> trace.getName().isPresent())
    .forEach(trace -> System.out.println(trace.getName().get()));
```

## Page Size Recommendations

### Default Page Sizes

Most endpoints default to 50 items per page. Common limits:

```java
// Small page for quick response
.limit(10)

// Default page size
.limit(50)

// Large page for bulk processing
.limit(100)
```

### Choosing Page Size

- **Small pages (10-25)**: UI pagination, real-time updates
- **Medium pages (50-100)**: General purpose, balanced performance
- **Large pages (100+)**: Bulk exports, batch processing

## Complete Pagination Examples

### Fetching All Traces

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.trace.types.*;
import com.langfuse.client.resources.utils.pagination.types.MetaResponse;
import java.util.List;
import java.util.ArrayList;

public class TraceExporter {
    private final LangfuseClient client;

    public TraceExporter(LangfuseClient client) {
        this.client = client;
    }

    public List<Trace> exportAllTraces(String userId) {
        List<Trace> allTraces = new ArrayList<>();
        int currentPage = 1;
        int pageSize = 100;

        System.out.println("Exporting traces for user: " + userId);

        while (true) {
            GetTracesRequest request = GetTracesRequest.builder()
                .userId(userId)
                .page(currentPage)
                .limit(pageSize)
                .build();

            Traces page = client.trace().list(request);
            MetaResponse meta = page.getMeta();

            allTraces.addAll(page.getData());

            System.out.println(String.format(
                "Fetched page %d/%d (%d traces)",
                currentPage,
                meta.getTotalPages(),
                page.getData().size()
            ));

            if (currentPage >= meta.getTotalPages()) {
                break;
            }

            currentPage++;
        }

        System.out.println("Total traces exported: " + allTraces.size());
        return allTraces;
    }
}
```

### Parallel Page Processing

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.trace.types.*;
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelPagination {

    public List<Trace> fetchAllParallel(LangfuseClient client) throws Exception {
        // First, get total pages
        GetTracesRequest initialRequest = GetTracesRequest.builder()
            .page(1)
            .limit(100)
            .build();

        Traces firstPage = client.trace().list(initialRequest);
        int totalPages = firstPage.getMeta().getTotalPages();

        // Fetch all pages in parallel
        ExecutorService executor = Executors.newFixedThreadPool(10);

        List<CompletableFuture<Traces>> futures = IntStream.rangeClosed(1, totalPages)
            .mapToObj(page -> CompletableFuture.supplyAsync(() -> {
                GetTracesRequest request = GetTracesRequest.builder()
                    .page(page)
                    .limit(100)
                    .build();
                return client.trace().list(request);
            }, executor))
            .collect(Collectors.toList());

        // Wait for all pages and collect results
        List<Trace> allTraces = futures.stream()
            .map(CompletableFuture::join)
            .flatMap(traces -> traces.getData().stream())
            .collect(Collectors.toList());

        executor.shutdown();
        return allTraces;
    }
}
```

370

371

### Paginated Search

372

373

```java

374

public List<Trace> searchTraces(String searchTerm) {

375

List<Trace> results = new ArrayList<>();

376

int page = 1;

377

int maxResults = 500; // Limit total results

378

379

while (results.size() < maxResults) {

380

GetTracesRequest request = GetTracesRequest.builder()

381

.name(searchTerm)

382

.page(page)

383

.limit(100)

384

.build();

385

386

Traces traces = client.trace().list(request);

387

388

if (traces.getData().isEmpty()) {

389

break;

390

}

391

392

results.addAll(traces.getData());

393

394

if (page >= traces.getMeta().getTotalPages() ||

395

results.size() >= maxResults) {

396

break;

397

}

398

399

page++;

400

}

401

402

return results.subList(0, Math.min(results.size(), maxResults));

403

}

404

```

405

406

## Best Practices

407

408

1. **Start with Page 1**: Page numbers are 1-based, not 0-based

409

2. **Check Total Pages**: Always check `meta.getTotalPages()` to avoid unnecessary requests

410

3. **Handle Empty Pages**: Check `data.isEmpty()` for early termination

411

4. **Reasonable Page Size**: Use 50-100 items per page for most use cases

412

5. **Progress Tracking**: Show progress for long-running pagination

413

6. **Error Handling**: Handle errors for each page request

414

7. **Rate Limiting**: Add delays between requests if needed

415

8. **Parallel Caution**: Be careful with parallel pagination to avoid rate limits
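
Practices 6 and 7 can be combined in a small wrapper around each page request. A minimal sketch, using the trace list endpoint shown above; the retry budget, back-off, and delay values are illustrative choices, not SDK defaults:

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.trace.types.*;
import java.util.ArrayList;
import java.util.List;

public class ResilientPagination {

    /**
     * Fetch all trace pages, retrying each page a few times and
     * pausing between requests to stay under rate limits.
     */
    public static List<Trace> fetchAllWithRetry(LangfuseClient client) throws InterruptedException {
        List<Trace> allTraces = new ArrayList<>();
        int currentPage = 1;
        int maxRetries = 3;             // illustrative retry budget per page
        long delayBetweenPagesMs = 200; // illustrative pause between page requests

        while (true) {
            Traces page = null;
            for (int attempt = 1; attempt <= maxRetries; attempt++) {
                try {
                    GetTracesRequest request = GetTracesRequest.builder()
                        .page(currentPage)
                        .limit(100)
                        .build();
                    page = client.trace().list(request);
                    break; // success
                } catch (Exception e) {
                    if (attempt == maxRetries) {
                        throw new RuntimeException(
                            "Page " + currentPage + " failed after " + maxRetries + " attempts", e);
                    }
                    Thread.sleep(1000L * attempt); // simple linear back-off before retrying
                }
            }

            allTraces.addAll(page.getData());

            if (currentPage >= page.getMeta().getTotalPages()) {
                break;
            }

            currentPage++;
            Thread.sleep(delayBetweenPagesMs); // spacing between page requests
        }

        return allTraces;
    }
}
```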

## Pagination with Filters

Combine pagination with filters for efficient queries:

```java
GetObservationsRequest request = GetObservationsRequest.builder()
    .type(ObservationType.GENERATION)
    .fromStartTime("2025-10-01T00:00:00Z")
    .toStartTime("2025-10-31T23:59:59Z")
    .page(currentPage)
    .limit(50)
    .build();

ObservationsViews observations = client.observations().getMany(request);
```

## Performance Considerations

- **Network Latency**: Each page requires a network round-trip
- **Memory Usage**: Fetching all pages loads all data into memory
- **Processing Time**: Consider processing pages as they arrive (see the sketch below)
- **API Limits**: Respect rate limits when fetching many pages
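
To keep memory usage flat, each page can be handed to a callback as soon as it arrives instead of being accumulated in a list. A minimal sketch, using the trace list endpoint shown above; the `Consumer`-based callback is an illustrative pattern, not an SDK feature:

```java
import com.langfuse.client.LangfuseClient;
import com.langfuse.client.resources.trace.types.*;
import java.util.function.Consumer;

public class StreamingExport {

    /**
     * Process each page as soon as it arrives instead of collecting
     * every item in memory first.
     */
    public static void forEachTrace(LangfuseClient client, int pageSize, Consumer<Trace> handler) {
        int currentPage = 1;

        while (true) {
            GetTracesRequest request = GetTracesRequest.builder()
                .page(currentPage)
                .limit(pageSize)
                .build();

            Traces page = client.trace().list(request);

            // Hand each item off immediately; nothing is retained between pages
            page.getData().forEach(handler);

            if (currentPage >= page.getMeta().getTotalPages()) {
                break;
            }

            currentPage++;
        }
    }
}

// Usage: write traces out as they arrive rather than buffering them
StreamingExport.forEachTrace(client, 100,
    trace -> System.out.println(trace.getId()));
```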

## Related Documentation

- [Traces and Observations](./traces-observations.md) - Paginated trace queries
- [Datasets](./datasets.md) - Paginated dataset operations
- [Sessions](./sessions.md) - Paginated session queries
- [Common Types](./common-types.md) - Type definitions