Run and test the TypeScript LangChain agent locally. Use when: (1) User wants to test locally, (2) User says 'run locally', 'test agent', 'start server', or 'dev mode', (3) Debugging issues.
Start both agent and UI servers:
```shell
npm run dev
```
This starts:
- Agent server (`/invocations`)
- UI server (`/api/chat` and React frontend)

Or start individually:
```shell
# Terminal 1: Agent only
npm run dev:agent

# Terminal 2: UI only
npm run dev:ui
```
Servers will be available at:
- Agent: http://localhost:5001/invocations
- UI frontend: http://localhost:3000
- UI backend: http://localhost:3001/api/chat

To run the production build instead:
```shell
# Build first
npm run build

# Then start
npm start
```
Test the agent endpoint with a streaming request:
```shell
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      {"role": "user", "content": "What is the weather in San Francisco?"}
    ],
    "stream": true
  }'
```
Expected response (Server-Sent Events):
```
data: {"type":"response.output_item.added","item":{"type":"message",...}}
data: {"type":"response.output_text.delta","delta":"The weather..."}
...
data: {"type":"response.completed"}
data: [DONE]
```
Testing the UI backend requires both servers running (`npm run dev`):
```shell
curl -X POST http://localhost:3001/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "role": "user",
      "parts": [{"type": "text", "text": "Calculate 15 * 32"}]
    },
    "selectedChatModel": "chat-model"
  }'
```
Expected response (AI SDK format):
```
data: {"type":"text-delta","delta":"Let me calculate..."}
data: {"type":"tool-call",...}
...
data: [DONE]
```
Open the browser at http://localhost:3000.
You should see the chat interface load.
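Both endpoints above stream newline-delimited SSE `data:` lines, and the text deltas can be accumulated with a few lines of TypeScript. A minimal sketch, based only on the event shapes shown in the sample output (the helper name is ours, not part of the project):

```typescript
// Accumulate streamed text from SSE "data:" lines.
// Handles both event shapes shown above:
//   /invocations: {"type":"response.output_text.delta","delta":"..."}
//   /api/chat:    {"type":"text-delta","delta":"..."}
function accumulateText(sseBody: string): string {
  let text = "";
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const event = JSON.parse(payload) as { type: string; delta?: string };
    if (
      (event.type === "response.output_text.delta" || event.type === "text-delta") &&
      typeof event.delta === "string"
    ) {
      text += event.delta;
    }
  }
  return text;
}

// Example with a captured stream fragment:
const sample = [
  'data: {"type":"response.output_text.delta","delta":"The weather"}',
  'data: {"type":"response.output_text.delta","delta":" is sunny."}',
  'data: {"type":"response.completed"}',
  "data: [DONE]",
].join("\n");
console.log(accumulateText(sample)); // "The weather is sunny."
```

This only covers the delta events; tool-call and completion events pass through untouched, which is usually what you want when sanity-checking the stream by hand.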
Make sure `.env` is configured (see the quickstart skill):
```shell
# Required
DATABRICKS_HOST=https://your-workspace.cloud.databricks.com
DATABRICKS_TOKEN=dapi...
DATABRICKS_MODEL=databricks-claude-sonnet-4-5
MLFLOW_TRACKING_URI=databricks
MLFLOW_EXPERIMENT_ID=123

# Optional
PORT=8000
TEMPERATURE=0.1
MAX_TOKENS=2000
ENABLE_SQL_MCP=false
```
See the MLflow Tracing Guide for viewing traces in your workspace.
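A startup guard catches missing configuration early instead of failing mid-request. One way to sketch it (the function name is ours; the variable list comes from the `.env` block above):

```typescript
// Names of the required variables from the .env block above.
const REQUIRED_VARS = [
  "DATABRICKS_HOST",
  "DATABRICKS_TOKEN",
  "DATABRICKS_MODEL",
  "MLFLOW_TRACKING_URI",
  "MLFLOW_EXPERIMENT_ID",
] as const;

// Return the required variables that are missing or blank in the given env.
function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

// At server startup you would call missingEnv(process.env) and throw if
// anything comes back. Demonstrated here on a partial example env:
const example = { DATABRICKS_HOST: "https://example.cloud.databricks.com" };
console.log(missingEnv(example));
// ["DATABRICKS_TOKEN", "DATABRICKS_MODEL", "MLFLOW_TRACKING_URI", "MLFLOW_EXPERIMENT_ID"]
```

Keeping the check in one list also gives new contributors a single place to see what the agent needs to boot.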
`npm run dev` uses `tsx watch`, which recompiles and restarts the servers automatically when source files change.

For manual compilation:
```shell
npm run build
```
Output is written to the `dist/` directory.
Add `console.log()` statements and view the output in the terminal:
```typescript
console.log("Tool invoked:", toolName);
console.log("Result:", result);
```
For deeper debugging, use the VS Code debugger with breakpoints set directly in the `.ts` files.

Test each built-in tool individually:
```shell
# Weather tool
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": [{"role": "user", "content": "What is the weather in Tokyo?"}], "stream": false}'
```
```shell
# Calculator tool
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": [{"role": "user", "content": "Calculate 123 * 456"}], "stream": false}'

# Time tool
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": [{"role": "user", "content": "What time is it in London?"}], "stream": false}'
```
MCP tools are configured in src/mcp-servers.ts. See the add-tools skill for details.
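The exact schema lives in src/mcp-servers.ts; as an illustration only (the interface and field names here are hypothetical, not the project's real config shape), gating a server on the `ENABLE_SQL_MCP` flag from `.env` might look like:

```typescript
// Hypothetical config shape; consult src/mcp-servers.ts for the real one.
interface McpServerConfig {
  name: string;
  url: string;
  enabled: boolean;
}

// Keep only the servers whose feature flag is on.
function activeServers(servers: McpServerConfig[]): McpServerConfig[] {
  return servers.filter((s) => s.enabled);
}

const servers: McpServerConfig[] = [
  {
    name: "sql",
    url: "https://example.invalid/mcp", // placeholder URL
    enabled: process.env.ENABLE_SQL_MCP === "true", // mirrors the .env flag
  },
];

console.log(activeServers(servers).map((s) => s.name));
// [] unless ENABLE_SQL_MCP is set to "true"
```

The point of the sketch is the flag-gating pattern, so an unset or `false` value disables the server without code changes.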
Example test:
```shell
curl -X POST http://localhost:5001/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": [{"role": "user", "content": "Query my database"}], "stream": false}'
```
Pure tests with no dependencies:
```shell
npm run test:unit
```
Runs tests/agent.test.ts, which tests agent initialization, tool usage, and multi-turn conversations.
Tests that need the local servers running:
```shell
# Terminal 1: Start servers
npm run dev

# Terminal 2: Run tests
npm run test:integration
```
Tests /invocations, /api/chat, streaming, and error handling.
Tests that need a deployed Databricks app:
```shell
# 1. Deploy app
npm run build
databricks bundle deploy --profile your-profile
databricks bundle run agent_langchain_ts --profile your-profile

# 2. Set APP_URL
export APP_URL=$(databricks apps get agent-lc-ts-dev --profile your-profile --output json | jq -r '.url')

# 3. Run E2E tests
npm run test:e2e
```
See tests/e2e/README.md for detailed setup instructions.
```shell
npm run test:all
```
Runs unit + integration tests (not E2E).
See Troubleshooting Guide for common issues.
Port already in use:
```shell
lsof -ti:5001 | xargs kill -9  # Agent
lsof -ti:3001 | xargs kill -9  # UI backend
lsof -ti:3000 | xargs kill -9  # UI frontend
```
Authentication failed:
Verify credentials:
```shell
databricks auth profiles
databricks auth env --host
databricks auth env --token
```
Re-run quickstart:
```shell
npm run quickstart
```
Install dependencies:
```shell
npm install
```
If the MLflow experiment is not found, check that:
- `MLFLOW_EXPERIMENT_ID` is set in `.env`
- the experiment exists:

```shell
databricks experiments get --experiment-id $MLFLOW_EXPERIMENT_ID
```
Create the experiment if missing:
```shell
databricks experiments create \
  --experiment-name "/Users/$(databricks current-user me --output json | jq -r .userName)/agent-langchain-ts"
```
To verify tool invocation, check the response's `intermediateSteps`:
```shell
curl -s http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is 2+2?"}]}' | jq '.intermediateSteps'
```
The output should show the tool name and observation.
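The same check can be scripted. A sketch that pulls tool names and observations out of a captured response, assuming LangChain's usual `intermediateSteps` convention of action/observation pairs (verify the exact shape against your own output):

```typescript
// One step of a LangChain agent run: the chosen action and what the tool returned.
// This mirrors LangChain's AgentStep shape; treat it as an assumption to verify.
interface AgentStep {
  action: { tool: string; toolInput: unknown };
  observation: string;
}

// Produce one "tool -> observation" line per step for quick eyeballing.
function summarizeSteps(steps: AgentStep[]): string[] {
  return steps.map((s) => `${s.action.tool} -> ${s.observation}`);
}

// Example with a captured response:
const steps: AgentStep[] = [
  { action: { tool: "calculator", toolInput: "2+2" }, observation: "4" },
];
console.log(summarizeSteps(steps)); // ["calculator -> 4"]
```

An empty array here is the quickest signal that the model answered directly without calling any tool.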
For performance issues, monitor the server logs. Add timing logging in src/server.ts:
```typescript
console.log(`Request completed in ${duration}ms`);
```
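To go beyond one-off log lines, request handlers (or tool functions) can be wrapped with a small timing helper. A sketch; the helper is ours, not part of the template:

```typescript
// Wrap an async function so every call logs its duration under a label.
function withTiming<A extends unknown[], R>(
  label: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      console.log(`${label} completed in ${Date.now() - start}ms`);
    }
  };
}

// Example: time a (fake) tool call.
const slowDouble = withTiming("slowDouble", async (n: number) => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return n * 2;
});
slowDouble(21).then((result) => console.log(result)); // logs the timing, then 42
```

Because the timing runs in `finally`, the duration is logged even when the wrapped function throws, which keeps slow failures visible too.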