# Cursor Operations

Scrollable cursors for efficient traversal of large result sets with configurable prefetch and memory management. Cursors provide an iterator-based interface for handling query results that are too large to fit in memory at once. Note that cursors can only be used inside a transaction block.

## Capabilities

### Cursor Creation

Create cursors for query results with configurable prefetch settings and memory management.

```python { .api }
def cursor(
    self,
    query: str,
    *args,
    prefetch: int = None,
    timeout: float = None,
    record_class: type = None
) -> CursorFactory
```

**Parameters:**

- `query`: SQL query string with optional parameter placeholders (`$1`, `$2`, ...)
- `*args`: Query parameters
- `prefetch`: Number of rows to prefetch per network round-trip (default: 50)
- `timeout`: Query execution timeout in seconds
- `record_class`: Custom record class for query results; must be a subclass of `asyncpg.Record`

**Usage Example:**

```python
import asyncpg

async def example():
    conn = await asyncpg.connect('postgresql://user:pass@localhost/db')

    # Cursors can only be used inside a transaction
    async with conn.transaction():
        # Create a cursor for a large result set
        cursor_factory = conn.cursor(
            'SELECT * FROM large_table WHERE status = $1',
            'active',
            prefetch=100
        )

        # Iterate through results
        async for row in cursor_factory:
            process_row(row)

    await conn.close()
```
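
The `record_class` parameter accepts a subclass of `asyncpg.Record`, which lets you attach methods to the rows a cursor yields. A minimal sketch, assuming a `users` table; the `UserRecord` class and `display_name` method are illustrative:

```python
import asyncpg

class UserRecord(asyncpg.Record):
    def display_name(self) -> str:
        return f"{self['name']} <{self['email']}>"

async def example_with_record_class(conn):
    async with conn.transaction():
        # Each yielded row is an instance of UserRecord
        async for user in conn.cursor('SELECT name, email FROM users',
                                      record_class=UserRecord):
            print(user.display_name())
```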

### Async Context Manager

Use cursors as async context managers for automatic resource cleanup.

```python { .api }
async def __aenter__(self) -> Cursor
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None
```

**Usage Example:**

```python
async with conn.transaction():
    async with conn.cursor('SELECT * FROM users ORDER BY id') as cursor:
        async for row in cursor:
            print(f"User: {row['name']} ({row['email']})")
```

### Async Iterator Interface

Cursors implement the async iterator protocol for seamless integration with `async for` loops.

```python { .api }
def __aiter__(self) -> CursorIterator
async def __anext__(self) -> Record
```

**Usage Example:**

```python
async with conn.transaction():
    cursor = conn.cursor('SELECT * FROM products WHERE category = $1', 'electronics')

    async for product in cursor:
        if product['price'] > 100:
            print(f"Premium product: {product['name']}")

        # Can break early to save resources
        if product['id'] > 1000:
            break
```

### Cursor Control

Control cursor behavior and retrieve cursor state information.

```python { .api }
async def forward(self, count: int) -> typing.List[Record]
async def backwards(self, count: int) -> typing.List[Record]
def get_prefetch_size(self) -> int
def get_query(self) -> str
def get_args(self) -> typing.Tuple
```

**Usage Example:**

```python
async with conn.transaction():
    cursor = await conn.cursor('SELECT * FROM logs ORDER BY timestamp DESC')

    # Move forward by specific amounts
    recent_logs = await cursor.forward(10)
    more_logs = await cursor.forward(20)

    # Check cursor state
    print(f"Prefetch size: {cursor.get_prefetch_size()}")
    print(f"Query: {cursor.get_query()}")
```
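
The control surface above also declares `backwards` and `get_args`. A minimal sketch of scrolling back over a window of rows, following those declared signatures; the `events` table is illustrative, and backward movement assumes the cursor is scrollable as described in the overview:

```python
async with conn.transaction():
    cursor = await conn.cursor('SELECT * FROM events ORDER BY id')

    # Advance over a window of rows, then scroll back across it
    window = await cursor.forward(50)
    replay = await cursor.backwards(50)

    # Inspect the query and arguments bound to this cursor
    print(cursor.get_query(), cursor.get_args())
```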

## Types

```python { .api }
class CursorFactory:
    """Factory for creating cursors with async iteration support."""

    def __aiter__(self) -> CursorIterator
    def __await__(self) -> Cursor
    async def __aenter__(self) -> Cursor
    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None

class Cursor:
    """A cursor for iterating through query results."""

    async def forward(self, count: int) -> typing.List[Record]
    async def backwards(self, count: int) -> typing.List[Record]
    def get_prefetch_size(self) -> int
    def get_query(self) -> str
    def get_args(self) -> typing.Tuple

    async def __aenter__(self) -> Cursor
    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None

class CursorIterator:
    """Async iterator for cursor results."""

    def __aiter__(self) -> CursorIterator
    async def __anext__(self) -> Record
```
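
Per the interfaces above, the `CursorFactory` returned by `conn.cursor()` can be consumed in three ways. A brief sketch following the declared signatures; the `items` table is illustrative:

```python
async def consume_factory(conn):
    async with conn.transaction():
        # 1. Iterate the factory directly (streams rows in prefetch-sized batches)
        async for row in conn.cursor('SELECT * FROM items'):
            print(row)

        # 2. Await the factory to obtain a Cursor for explicit positioning
        cursor = await conn.cursor('SELECT * FROM items')
        first_rows = await cursor.forward(5)

        # 3. Enter it as an async context manager for scoped cleanup
        async with conn.cursor('SELECT * FROM items') as cur:
            async for row in cur:
                print(row)
```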

## Memory Management

Cursors provide efficient memory usage for large result sets:

### Prefetch Configuration

Configure how many rows are fetched at once to balance memory usage against network round-trips:

```python
# Small prefetch for memory-constrained environments
cursor = conn.cursor('SELECT * FROM huge_table', prefetch=10)

# Large prefetch for high-throughput scenarios
cursor = conn.cursor('SELECT * FROM logs', prefetch=1000)

# Default prefetch (50 rows)
cursor = conn.cursor('SELECT * FROM users')
```
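
As a back-of-envelope illustration of the trade-off, the number of fetch round-trips is roughly the total row count divided by the prefetch size:

```python
import math

total_rows = 100_000
for prefetch in (10, 100, 1000):
    trips = math.ceil(total_rows / prefetch)
    print(f"prefetch={prefetch}: ~{trips} round-trips")
# prefetch=10: ~10000 round-trips
# prefetch=100: ~1000 round-trips
# prefetch=1000: ~100 round-trips
```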

### Streaming Processing

Process results without loading the entire result set into memory:

```python
async def process_large_dataset():
    async with conn.transaction():
        async with conn.cursor('SELECT * FROM analytics_data') as cursor:
            batch = []

            async for row in cursor:
                batch.append(row)

                # Process in batches to control memory usage
                if len(batch) >= 100:
                    await process_batch(batch)
                    batch.clear()

            # Process remaining rows
            if batch:
                await process_batch(batch)
```

## Performance Considerations

1. **Prefetch Size**: Balance memory usage against network round-trips
2. **Early Termination**: Break out of iteration loops when possible to free resources
3. **Context Management**: Use async context managers for automatic cleanup
4. **Batch Processing**: Process results in batches for better memory efficiency
5. **Connection Pooling**: A cursor holds its connection for its whole lifetime, so size pools accordingly (see the sketch after this list)
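
A minimal sketch of point 5 using `asyncpg.create_pool`; the DSN, table name, and `process_row` placeholder are illustrative. The acquired connection stays checked out of the pool until the cursor's transaction block exits:

```python
import asyncpg

async def stream_with_pool():
    pool = await asyncpg.create_pool('postgresql://user:pass@localhost/db',
                                     min_size=2, max_size=10)

    # This connection is unavailable to other tasks until
    # the block exits, so keep cursor work short.
    async with pool.acquire() as conn:
        async with conn.transaction():
            async for row in conn.cursor('SELECT * FROM events', prefetch=200):
                process_row(row)

    await pool.close()
```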

## Use Cases

- **Large Reports**: Generate reports from large datasets without exhausting memory
- **Data Migration**: Stream data between systems efficiently
- **ETL Operations**: Transform data in a streaming fashion
- **Search Results**: Paginate through search results efficiently
- **Log Analysis**: Process log data that exceeds available memory

## Best Practices

1. **Always Use Context Managers**: Ensure proper resource cleanup
2. **Configure Prefetch Appropriately**: Match the prefetch size to your use case
3. **Process Incrementally**: Avoid accumulating all results in memory
4. **Handle Exceptions**: Proper error handling prevents resource leaks (see the sketch below)
5. **Monitor Memory Usage**: Tune the prefetch size based on memory constraints
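
A minimal sketch of point 4: the context managers close the cursor and roll back the transaction even when iteration fails. The `analytics_data` table and `process_row` placeholder are illustrative:

```python
import asyncpg

async def safe_scan(conn):
    try:
        async with conn.transaction():
            async with conn.cursor('SELECT * FROM analytics_data') as cursor:
                async for row in cursor:
                    process_row(row)
    except asyncpg.PostgresError as exc:
        # The transaction has been rolled back and the cursor closed
        print(f"Scan failed: {exc}")
```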