Build LangChain agents with tool calling for autonomous task execution. Use when creating AI agents, implementing tool/function calling, binding tools to models, or building autonomous multi-step workflows. Trigger: "langchain agents", "langchain tools", "tool calling", "create agent", "function calling", "createToolCallingAgent".
Build autonomous agents that use tools, make decisions, and execute multi-step tasks. Covers tool definition with Zod schemas, `createToolCallingAgent`, `AgentExecutor`, streaming agent output, and conversation memory.
Related skill: langchain-core-workflow-a (chains)

Install dependencies:

```shell
npm install langchain @langchain/core @langchain/openai zod
```

## Defining tools

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Tool with Zod schema validation
const calculator = tool(
  async ({ expression }) => {
    try {
      // Use a safe math parser in production (e.g., mathjs)
      const result = Function(`"use strict"; return (${expression})`)();
      return String(result);
    } catch (e) {
      return `Error: invalid expression "${expression}"`;
    }
  },
  {
    name: "calculator",
    description: "Evaluate a mathematical expression. Input: a math expression string.",
    schema: z.object({
      expression: z.string().describe("Math expression like '2 + 2' or '100 * 0.15'"),
    }),
  }
);

const weatherLookup = tool(
  async ({ city }) => {
    // Replace with a real API call
    const data: Record<string, string> = {
      "New York": "72F, sunny",
      "London": "58F, cloudy",
      "Tokyo": "80F, humid",
    };
    return data[city] ?? `No weather data for ${city}`;
  },
  {
    name: "weather",
    description: "Get current weather for a city.",
    schema: z.object({
      city: z.string().describe("City name"),
    }),
  }
);

const tools = [calculator, weatherLookup];
```
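The `Function`-based evaluator above executes arbitrary JavaScript, so a hostile expression can run code. Until you adopt a real parser such as mathjs, a hypothetical character whitelist (sketch, not a complete defense) blocks the obvious injections:

```typescript
// Hypothetical guard: only allow digits, whitespace, and basic arithmetic
// characters before handing the expression to Function(). Not a substitute
// for a real parser, but it rejects obvious code injection.
function safeEval(expression: string): string {
  if (!/^[0-9+\-*/().\s%]+$/.test(expression)) {
    return `Error: disallowed characters in "${expression}"`;
  }
  try {
    const result = Function(`"use strict"; return (${expression})`)();
    return String(result);
  } catch {
    return `Error: invalid expression "${expression}"`;
  }
}

console.log(safeEval("2 + 2")); // "4"
console.log(safeEval("process.exit(1)")); // rejected with an error string
```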
## Creating the agent

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createToolCallingAgent, AgentExecutor } from "langchain/agents";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools,
  verbose: true, // Log reasoning steps
  maxIterations: 10, // Prevent infinite loops
  returnIntermediateSteps: true,
});
```
## Running the agent

```typescript
// Simple invocation
const result = await executor.invoke({
  input: "What's 25 * 4, and what's the weather in Tokyo?",
  chat_history: [],
});

console.log(result.output);
// "25 * 4 = 100. The weather in Tokyo is 80F and humid."
// The agent decided to call both tools, then composed the answer.

console.log(result.intermediateSteps);
// Shows each tool call and its result
```
## Adding conversation memory

```typescript
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const messageHistory = new ChatMessageHistory();

const agentWithHistory = new RunnableWithMessageHistory({
  runnable: executor,
  getMessageHistory: (_sessionId) => messageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

// First call
await agentWithHistory.invoke(
  { input: "My name is Alice" },
  { configurable: { sessionId: "user-1" } }
);

// Second call -- the agent remembers
const res = await agentWithHistory.invoke(
  { input: "What's my name?" },
  { configurable: { sessionId: "user-1" } }
);
console.log(res.output); // "Your name is Alice!"
```
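Note that `getMessageHistory` above returns the same history object for every session id, so separate users would share memory. A generic per-session store keeps them isolated; this helper is a hypothetical sketch, where `makeHistory` would be `() => new ChatMessageHistory()` in the agent example:

```typescript
// Hypothetical per-session store: lazily creates one history object per
// sessionId so conversations don't leak between users.
function sessionStore<H>(makeHistory: () => H) {
  const histories = new Map<string, H>();
  return (sessionId: string): H => {
    let h = histories.get(sessionId);
    if (!h) {
      h = makeHistory();
      histories.set(sessionId, h);
    }
    return h;
  };
}
```

You would then pass the returned function as `getMessageHistory` instead of the closure over a single shared history.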
## Streaming agent output

```typescript
const eventStream = executor.streamEvents(
  { input: "Calculate 15% tip on $85", chat_history: [] },
  { version: "v2" }
);

for await (const event of eventStream) {
  if (event.event === "on_chat_model_stream") {
    process.stdout.write(event.data.chunk.content ?? "");
  } else if (event.event === "on_tool_start") {
    console.log(`\n[Calling tool: ${event.name}]`);
  } else if (event.event === "on_tool_end") {
    console.log(`[Tool result: ${event.data.output}]`);
  }
}
```
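The branching in the loop above is easy to factor into a pure function that can be unit-tested with synthetic events. The event names follow the `streamEvents` v2 format, but the narrowed types here are a simplified sketch, not the library's real ones:

```typescript
// Simplified event shapes (sketch) covering only the three cases handled above.
type AgentEvent =
  | { event: "on_chat_model_stream"; data: { chunk: { content?: string } } }
  | { event: "on_tool_start"; name: string }
  | { event: "on_tool_end"; name: string; data: { output: string } };

// Pure renderer: maps an event to the text the loop would print.
function renderEvent(e: AgentEvent): string {
  switch (e.event) {
    case "on_chat_model_stream":
      return e.data.chunk.content ?? "";
    case "on_tool_start":
      return `\n[Calling tool: ${e.name}]`;
    case "on_tool_end":
      return `[Tool result: ${e.data.output}]`;
  }
}

console.log(renderEvent({ event: "on_tool_end", name: "weather", data: { output: "58F, cloudy" } }));
// [Tool result: 58F, cloudy]
```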
## Calling tools without an agent

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const modelWithTools = model.bindTools(tools);

const response = await modelWithTools.invoke([
  new HumanMessage("What's 42 * 17?"),
]);

// Check whether the model wants to call a tool
if (response.tool_calls && response.tool_calls.length > 0) {
  for (const tc of response.tool_calls) {
    console.log(`Tool: ${tc.name}, Args: ${JSON.stringify(tc.args)}`);
    // Execute the tool manually
    const toolResult = await tools
      .find((t) => t.name === tc.name)!
      .invoke(tc.args);
    console.log(`Result: ${toolResult}`);
  }
}
```
## Python equivalent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # eval() is unsafe on untrusted input -- use a real parser in production
    return str(eval(expression))

tools = [calculator]
llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = executor.invoke({"input": "What is 25 * 4?", "chat_history": []})
```

## Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| Max iterations reached | Agent stuck in a loop | Increase `maxIterations` or improve the system prompt |
| Tool not found | Tool name mismatch | Verify the same `tools` array is passed to both `createToolCallingAgent` and `AgentExecutor` |
| Missing `agent_scratchpad` | Prompt missing placeholder | Add `new MessagesPlaceholder("agent_scratchpad")` |
| Tool execution error | Tool throws an exception | Wrap the tool body in try/catch and return an error string |
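The last fix in the table can be captured once in a wrapper instead of repeating try/catch in every tool body. This helper is a sketch, not a LangChain API; it turns any thrown exception into an error string the agent can read and recover from:

```typescript
// Hypothetical wrapper: converts exceptions from an async tool body into
// plain error strings, so the agent sees a readable failure instead of a crash.
function withErrorString<T>(fn: (input: T) => Promise<string>) {
  return async (input: T): Promise<string> => {
    try {
      return await fn(input);
    } catch (e) {
      return `Error: ${e instanceof Error ? e.message : String(e)}`;
    }
  };
}

// Usage with a hypothetical weather body: the wrapped function never throws.
const safeWeather = withErrorString(async ({ city }: { city: string }) => {
  if (!city) throw new Error("city is required");
  return `Weather for ${city}`;
});
```

Pass the wrapped function as the first argument to `tool()` exactly as in the examples above.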
Proceed to langchain-common-errors for debugging guidance.