
# Agent Creation

ReAct-style agent creation functionality for building tool-calling agents that follow the Reasoning and Acting pattern with LangGraph workflows.

## Capabilities

### Create ReAct Agent

Creates an agent graph that calls tools in a loop until a stopping condition is met. Supports both static and dynamic model selection, customizable prompts, structured response generation, and advanced control-flow hooks.

```python { .api }
def create_react_agent(
    model: Union[
        str,
        LanguageModelLike,
        Callable[[StateSchema, Runtime[ContextT]], BaseChatModel],
        Callable[[StateSchema, Runtime[ContextT]], Awaitable[BaseChatModel]],
        Callable[
            [StateSchema, Runtime[ContextT]], Runnable[LanguageModelInput, BaseMessage]
        ],
        Callable[
            [StateSchema, Runtime[ContextT]],
            Awaitable[Runnable[LanguageModelInput, BaseMessage]],
        ],
    ],
    tools: Union[Sequence[Union[BaseTool, Callable, dict[str, Any]]], ToolNode],
    *,
    prompt: Optional[Prompt] = None,
    response_format: Optional[
        Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]
    ] = None,
    pre_model_hook: Optional[RunnableLike] = None,
    post_model_hook: Optional[RunnableLike] = None,
    state_schema: Optional[StateSchemaType] = None,
    context_schema: Optional[Type[Any]] = None,
    checkpointer: Optional[Checkpointer] = None,
    store: Optional[BaseStore] = None,
    interrupt_before: Optional[list[str]] = None,
    interrupt_after: Optional[list[str]] = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v2",
    name: Optional[str] = None,
    **deprecated_kwargs: Any,
) -> CompiledStateGraph
```

**Parameters:**

- `model`: The language model for the agent. Can be a string identifier, a model instance, or a callable for dynamic model selection
- `tools`: List of tools or a ToolNode instance available to the agent
- `prompt`: Optional prompt (string, SystemMessage, callable, or Runnable)
- `response_format`: Schema for structured final output
- `pre_model_hook`: Node to execute before each model call (for message management)
- `post_model_hook`: Node to execute after each model call (for validation/approval)
- `state_schema`: Custom state schema (defaults to AgentState)
- `context_schema`: Schema for runtime context
- `checkpointer`: Checkpoint saver for state persistence
- `store`: Store object for cross-session persistence
- `interrupt_before`/`interrupt_after`: Node names to interrupt at
- `debug`: Enable debug mode
- `version`: Graph version ("v1" or "v2")
- `name`: Name for the compiled graph

**Returns:** CompiledStateGraph ready for execution

**Usage Examples:**

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

# Basic agent with string model identifier
@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Search results for: {query}"

agent = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    [search_web],
    prompt="You are a helpful research assistant."
)

# Agent with model instance and structured output
from pydantic import BaseModel

class SearchResult(BaseModel):
    query: str
    summary: str
    confidence: float

agent_with_structured_output = create_react_agent(
    ChatAnthropic(model="claude-3-7-sonnet-latest"),
    [search_web],
    response_format=SearchResult,
    prompt="Provide structured search results."
)

# Dynamic model selection
from langchain_core.runnables import Runnable
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.runtime import Runtime

def select_model(state: AgentState, runtime: Runtime) -> Runnable:
    # Choose model based on state or context
    model_name = runtime.context.get("model_name", "claude-3-7-sonnet-latest")
    model = ChatAnthropic(model=model_name)
    return model.bind_tools([search_web])

dynamic_agent = create_react_agent(
    select_model,
    [search_web],
    context_schema=dict,  # Context schema for runtime
)
```
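The `response_format` schema is a plain Pydantic model, and the agent's final `structured_response` is validated against it. A quick offline check of the schema itself (no model call involved; the sample values are illustrative):

```python
from pydantic import BaseModel, ValidationError

class SearchResult(BaseModel):
    query: str
    summary: str
    confidence: float

# Valid data passes, exactly as the agent's structured output must
result = SearchResult(query="tokyo", summary="About 14 million people.", confidence=0.9)

# Missing or mistyped fields are rejected the same way
try:
    SearchResult(query="tokyo", summary="no confidence given")
except ValidationError:
    pass  # the agent surfaces a similar validation failure
```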

### Agent State Classes

State management classes for different agent configurations.

#### Basic Agent State

```python { .api }
class AgentState(TypedDict):
    """The state of the agent."""
    messages: Annotated[Sequence[BaseMessage], add_messages]
    remaining_steps: NotRequired[RemainingSteps]
```

#### Pydantic Agent State

```python { .api }
class AgentStatePydantic(BaseModel):
    """The state of the agent."""
    messages: Annotated[Sequence[BaseMessage], add_messages]
    remaining_steps: RemainingSteps = 25
```

#### Agent State with Structured Response

```python { .api }
class AgentStateWithStructuredResponse(AgentState):
    """The state of the agent with a structured response."""
    structured_response: StructuredResponse

class AgentStateWithStructuredResponsePydantic(AgentStatePydantic):
    """The state of the agent with a structured response."""
    structured_response: StructuredResponse
```

## Advanced Features

### Pre and Post Model Hooks

Pre-model hooks run before each LLM call and are useful for message history management:

```python
from langchain_core.messages import RemoveMessage
from langgraph.graph.message import REMOVE_ALL_MESSAGES

def trim_messages(state):
    """Keep only the last 10 messages to manage context length."""
    messages = state["messages"]
    if len(messages) > 10:
        return {
            "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES)] + messages[-10:]
        }
    return {}

agent = create_react_agent(
    model,
    tools,
    pre_model_hook=trim_messages
)
```

Post-model hooks run after each LLM call for validation or human approval:

```python
def approval_hook(state):
    """Pause for human approval on sensitive operations."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        # Check if any tool call requires approval
        for tool_call in last_message.tool_calls:
            if tool_call["name"] in ["delete_file", "send_email"]:
                # Trigger a human approval workflow;
                # interrupt_for_approval is a user-defined helper
                return interrupt_for_approval(tool_call)
    return {}

agent = create_react_agent(
    model,
    tools,
    post_model_hook=approval_hook
)
```

### Interrupts and Checkpointing

```python
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.store.memory import InMemoryStore

# Agent with persistence and interrupts
checkpointer = SqliteSaver.from_conn_string(":memory:")
store = InMemoryStore()

agent = create_react_agent(
    model,
    tools,
    checkpointer=checkpointer,
    store=store,
    interrupt_before=["tools"],  # Pause before tool execution
    interrupt_after=["agent"]    # Pause after LLM response
)

# Execute with thread management
config = {"configurable": {"thread_id": "conversation-123"}}
result = agent.invoke({"messages": [("user", "Hello")]}, config)
```

## Error Handling

The agent automatically handles various error conditions:

- **Tool errors**: Handled by ToolNode based on the `handle_tool_errors` configuration
- **Model errors**: Propagated to the caller for handling
- **State validation**: Automatic validation of state schema requirements
- **Remaining steps**: Prevents infinite loops with configurable step limits

```python
# Configure tool error handling
from langgraph.prebuilt import ToolNode

def handle_api_errors(e: Exception) -> str:
    if "rate_limit" in str(e).lower():
        return "API rate limit exceeded. Please try again later."
    return f"Tool error: {str(e)}"

tool_node = ToolNode([search_web], handle_tool_errors=handle_api_errors)
agent = create_react_agent(model, tool_node)
```