
# Fill-in-the-Middle (FIM)

Generate code completions using fill-in-the-middle models for code editing and completion tasks. FIM is specialized for code generation where you have both prefix and suffix context.

## Capabilities

### FIM Completion

Generate code completions with prefix and suffix context.

```python { .api }
def complete(
    model: str,
    prompt: str,
    suffix: Optional[str] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    max_tokens: Optional[int] = None,
    min_tokens: Optional[int] = None,
    stream: Optional[bool] = None,
    stop: Optional[Union[str, List[str]]] = None,
    random_seed: Optional[int] = None,
    **kwargs
) -> FIMCompletionResponse:
    """
    Create a fill-in-the-middle completion.

    Parameters:
    - model: Model identifier (e.g., "codestral-latest")
    - prompt: The code prefix before the insertion point
    - suffix: The code suffix after the insertion point
    - temperature: Sampling temperature (0.0 to 1.0)
    - top_p: Nucleus sampling parameter
    - max_tokens: Maximum tokens to generate
    - min_tokens: Minimum tokens to generate
    - stream: Enable streaming responses
    - stop: Stop sequences for generation
    - random_seed: Seed for reproducible outputs

    Returns:
    FIMCompletionResponse with generated code completion
    """

def stream(
    model: str,
    prompt: str,
    suffix: Optional[str] = None,
    **kwargs
) -> Iterator[CompletionChunk]:
    """
    Stream a fill-in-the-middle completion.

    Parameters:
    - model: Model identifier
    - prompt: The code prefix
    - suffix: The code suffix

    Returns:
    Iterator of CompletionChunk objects with streaming completion
    """
```
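Before calling the API it can help to see how the optional parameters collapse into a request. The sketch below is a local illustration only (no API call is made); `build_fim_payload` is a hypothetical helper, not part of the SDK, that drops unset parameters and range-checks `temperature` against the 0.0–1.0 bound documented above:

```python
from typing import Any, Dict, Optional


def build_fim_payload(
    model: str,
    prompt: str,
    suffix: Optional[str] = None,
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
) -> Dict[str, Any]:
    """Collect FIM parameters into a payload dict, omitting unset fields."""
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    params = {
        "model": model,
        "prompt": prompt,
        "suffix": suffix,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    # Keep only the parameters the caller actually set
    return {k: v for k, v in params.items() if v is not None}


payload = build_fim_payload(
    "codestral-latest", "def add(a, b):", suffix="\n", temperature=0.1
)
```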

## Usage Examples

### Basic Code Completion

```python
from mistralai import Mistral

client = Mistral(api_key="your-api-key")

# Complete a function
prefix = """
def calculate_fibonacci(n):
    if n <= 1:
        return n
    # TODO: implement fibonacci calculation
"""

suffix = """
    return result
"""

response = client.fim.complete(
    model="codestral-latest",
    prompt=prefix,
    suffix=suffix,
    max_tokens=150,
    temperature=0.1  # Low temperature for code
)

print("Generated code:")
print(response.choices[0].text)
```

### Complete Code in Context

```python
# Complete a class method
prefix = """
class DataProcessor:
    def __init__(self, data):
        self.data = data

    def process(self):
        # TODO: implement data processing
"""

suffix = """
        return processed_data

    def save(self, filename):
        with open(filename, 'w') as f:
            json.dump(self.processed_data, f)
"""

response = client.fim.complete(
    model="codestral-latest",
    prompt=prefix,
    suffix=suffix,
    max_tokens=200
)

complete_code = prefix + response.choices[0].text + suffix
print("Complete class:")
print(complete_code)
```
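The request-then-stitch pattern above can be wrapped in a small convenience function. This is a hypothetical helper, not part of the SDK; it assumes a client exposing the `fim.complete` method used in the examples:

```python
def fill_in(client, prefix: str, suffix: str,
            model: str = "codestral-latest", **params) -> str:
    """Request the missing middle and return the stitched-together source."""
    response = client.fim.complete(
        model=model,
        prompt=prefix,
        suffix=suffix,
        **params,
    )
    # Reassemble: prefix + generated middle + suffix
    return prefix + response.choices[0].text + suffix
```

Called as `fill_in(client, prefix, suffix, max_tokens=200)`, it returns the complete source in one step.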

### Streaming Code Generation

```python
prefix = "def merge_sort(arr):\n    if len(arr) <= 1:\n        return arr\n    "
suffix = "\n    return merge(left_sorted, right_sorted)"

print("Generating code...")
print(prefix, end="")

stream = client.fim.stream(
    model="codestral-latest",
    prompt=prefix,
    suffix=suffix,
    max_tokens=100
)

for chunk in stream:
    if chunk.choices[0].text:
        print(chunk.choices[0].text, end="", flush=True)

print(suffix)
```
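When streaming, it is often useful to accumulate the chunks as well as print them. The sketch below substitutes a stand-in generator for `client.fim.stream` so the accumulation logic can run without a network call; the stand-in chunks expose `choices[0].text` with the same shape as in the example above:

```python
from types import SimpleNamespace


def fake_stream():
    """Stand-in for client.fim.stream: yields chunk-shaped objects."""
    for piece in ["left = arr[:mid]", "\n    right = arr[mid:]"]:
        yield SimpleNamespace(choices=[SimpleNamespace(text=piece)])


# Accumulate streamed text instead of (or as well as) printing it
generated = []
for chunk in fake_stream():
    if chunk.choices[0].text:
        generated.append(chunk.choices[0].text)

completion = "".join(generated)
```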

## Types

### Request Types

```python { .api }
class FIMCompletionRequest:
    model: str
    prompt: str
    suffix: Optional[str]
    temperature: Optional[float]
    top_p: Optional[float]
    max_tokens: Optional[int]
    min_tokens: Optional[int]
    stream: Optional[bool]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]

class FIMCompletionStreamRequest:
    model: str
    prompt: str
    suffix: Optional[str]
    temperature: Optional[float]
    top_p: Optional[float]
    max_tokens: Optional[int]
    min_tokens: Optional[int]
    stop: Optional[Union[str, List[str]]]
    random_seed: Optional[int]
```

### Response Types

```python { .api }
class FIMCompletionResponse:
    id: str
    object: str
    created: int
    model: str
    choices: List[FIMCompletionChoice]
    usage: Optional[UsageInfo]

class FIMCompletionChoice:
    index: int
    text: str
    finish_reason: Optional[str]
```
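The response types above can be exercised locally with plain dataclasses. This sketch mirrors the shape of `FIMCompletionResponse` (values are illustrative) to show how a caller typically reads it: take the first choice's `text`, then check `finish_reason`, assuming the common `"length"` convention for a completion truncated by `max_tokens`:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Choice:
    index: int
    text: str
    finish_reason: Optional[str]


@dataclass
class Response:
    choices: List[Choice]


# A response shaped like FIMCompletionResponse
response = Response(
    choices=[Choice(index=0, text="    return a + b", finish_reason="stop")]
)

text = response.choices[0].text
# "length" (rather than "stop") would indicate max_tokens was hit
truncated = response.choices[0].finish_reason == "length"
```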

## Best Practices

### Effective FIM Usage

- **Clear Context**: Provide meaningful prefix and suffix for better completions
- **Proper Indentation**: Maintain consistent indentation in the prefix and suffix
- **Language Hints**: Include language-specific syntax cues in the context
- **Reasonable Scope**: Keep the completion task focused and bounded

### Model Selection

- **codestral-latest**: Primary model for code completion tasks
  - Optimized for multiple programming languages
  - Understands code structure and syntax patterns

### Temperature Guidelines

- **0.0-0.2**: Deterministic, focused completions for production code
- **0.2-0.5**: Balanced creativity and correctness for exploration
- **0.5+**: More creative but potentially less accurate completions