Simple Chat uses LangGraph in Node.js for unified request handling with automatic tool selection and MCP integration support for Slack, GitHub, Google Drive, Gmail, and Google Calendar.
## Processing Flow

### 1. Unified Request Handling
- Frontend emits single socket event with user query
- Node.js server receives request and invokes LangGraph
- LangGraph analyzes query requirements
- MultiServerClient fetches available MCP tools
- Backend determines optimal processing path
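
The steps above can be sketched as a single planning function the socket handler would call. This is a minimal illustration, not the project's actual contract: the event shape, the `planRequest` name, and the keyword heuristics are all assumptions standing in for LangGraph's real query analysis.

```typescript
// Hypothetical shape of the single socket event carrying the user query.
type McpTool = "slack" | "github" | "gdrive" | "gmail" | "gcal";

interface ChatQueryEvent {
  sessionId: string;
  query: string;
}

interface ProcessingPlan {
  path: "tool" | "chat";
  tools: McpTool[];
}

// Illustrative keyword hints; the real system analyzes the query inside LangGraph.
const TOOL_HINTS: Record<McpTool, RegExp> = {
  slack: /slack|channel message/i,
  github: /github|pull request|issue/i,
  gdrive: /drive|document|file/i,
  gmail: /gmail|email|mail/i,
  gcal: /calendar|meeting|schedule/i,
};

// Decide the processing path before any LLM call is made.
function planRequest(event: ChatQueryEvent): ProcessingPlan {
  const tools = (Object.keys(TOOL_HINTS) as McpTool[]).filter((t) =>
    TOOL_HINTS[t].test(event.query)
  );
  return { path: tools.length > 0 ? "tool" : "chat", tools };
}
```

A query like "List my open GitHub pull requests" would plan the tool path with `github` selected, while "Explain monads" falls through to the plain chat path.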

### 2. Intelligent Tool Management
- Available MCP Tools: Slack, GitHub, Google Drive, Google Calendar, Gmail
- Dynamic Filtering: Unnecessary tools excluded based on query analysis
- Automatic Activation: LangGraph decides tool usage without frontend input
- Context Assembly: Tools and prompts combined before LLM invocation
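
Dynamic filtering and context assembly can be sketched together: only the relevant tool definitions survive into the payload handed to the LLM. The `ToolDef`/`LlmRequest` shapes and the system prompt below are illustrative assumptions, not the project's real types.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface LlmRequest {
  system: string;
  messages: ChatMessage[];
  tools: ToolDef[];
}

// Combine prompts, history, and the filtered tool set before invocation.
function assembleContext(
  query: string,
  history: ChatMessage[],
  allTools: ToolDef[],
  relevant: Set<string>
): LlmRequest {
  return {
    system: "You are Simple Chat. Use a tool only when it is required.",
    messages: [...history, { role: "user", content: query }],
    // Dynamic filtering: unnecessary tools are excluded before the call.
    tools: allTools.filter((t) => relevant.has(t.name)),
  };
}
```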
3. LangGraph State Management

### 3. LangGraph State Management

Execution Nodes:
- ToolNode: Executes required tools when detected
- ChatbotNode: Handles standard conversational responses
- Query analyzed for tool requirements
- Appropriate node selected automatically
- Context (query + history + tools) assembled
- Response generated and streamed
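
A self-contained sketch of this two-node loop follows. A real implementation would build a `StateGraph` with `@langchain/langgraph`; here the node functions and the conditional routing are mimicked with plain TypeScript, and the chatbot's "decision" is a placeholder regex rather than an actual model call.

```typescript
interface Msg {
  role: "user" | "assistant" | "tool";
  content: string;
  toolCalls?: string[];
}

interface GraphState {
  messages: Msg[];
}

// ChatbotNode stand-in: pretend the model requests a tool when the query
// mentions one and no tool result exists yet; otherwise answer directly.
function chatbotNode(state: GraphState): GraphState {
  const last = state.messages[state.messages.length - 1];
  const hasToolResult = state.messages.some((m) => m.role === "tool");
  if (!hasToolResult && /slack|github/i.test(last.content)) {
    return {
      messages: [...state.messages, { role: "assistant", content: "", toolCalls: ["github"] }],
    };
  }
  return { messages: [...state.messages, { role: "assistant", content: "final answer" }] };
}

// ToolNode stand-in: execute each requested tool and append its output.
function toolNode(state: GraphState): GraphState {
  const last = state.messages[state.messages.length - 1];
  const outputs: Msg[] = (last.toolCalls ?? []).map((name) => ({
    role: "tool",
    content: `${name} result`,
  }));
  return { messages: [...state.messages, ...outputs] };
}

// Conditional edge: tool calls present → ToolNode, else end.
function run(query: string): GraphState {
  let state: GraphState = { messages: [{ role: "user", content: query }] };
  state = chatbotNode(state);
  let last = state.messages[state.messages.length - 1];
  while (last.toolCalls && last.toolCalls.length > 0) {
    state = toolNode(state);
    state = chatbotNode(state);
    last = state.messages[state.messages.length - 1];
  }
  return state;
}
```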

### 4. Context Initialization
- Chat repository initialized with session ID
- Historical messages retrieved from database
- Conversation memory prepared
- Combined context ready for processing
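
Context initialization can be illustrated with a session-keyed repository. The in-memory `Map` below stands in for the real database, and `ChatRepository`/`initContext` are hypothetical names for illustration.

```typescript
interface StoredMessage {
  role: "user" | "assistant";
  content: string;
}

// Stand-in for the chat repository; a real one would query the database.
class ChatRepository {
  private store = new Map<string, StoredMessage[]>();

  append(sessionId: string, msg: StoredMessage): void {
    const history = this.store.get(sessionId) ?? [];
    history.push(msg);
    this.store.set(sessionId, history);
  }

  // Historical messages retrieved for the session, oldest first.
  history(sessionId: string): StoredMessage[] {
    return this.store.get(sessionId) ?? [];
  }
}

// Combined context ready for processing: history plus the current query.
function initContext(repo: ChatRepository, sessionId: string, query: string): StoredMessage[] {
  return [...repo.history(sessionId), { role: "user", content: query }];
}
```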

### 5. Execution Logic

LangGraph routes each request to the appropriate handler:
- Tool Required → ToolNode executes → Response generated
- Standard Query → ChatbotNode responds directly
- Combined Context → History + Current query processed together

## Tool Activation Rules

| Tool | Activation Condition |
|---|---|
| Web Search (SearxNG) | All models except GPT-4o latest, DeepSeek, Qwen |
| Image Generation | OpenAI models only |
| MCP Tools | Dynamically filtered per query |
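
The first two rows of this table can be expressed as a predicate. How models are actually identified is an assumption here; the substring patterns below are illustrative only.

```typescript
type Feature = "webSearch" | "imageGeneration";

// Illustrative check against the activation rules table.
function isFeatureEnabled(feature: Feature, model: string): boolean {
  if (feature === "webSearch") {
    // All models except GPT-4o latest, DeepSeek, and Qwen.
    return !/gpt-4o-latest|deepseek|qwen/i.test(model);
  }
  // Image generation: OpenAI models only (naive name match).
  return /^gpt-|^o\d|dall-e/i.test(model);
}
```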

## Response Processing

### Flow Options

Without Tool Invocation:
- Query received
- LangGraph routes to ChatbotNode
- LLM generates response
- Response streamed to frontend
- Total LLM Calls: 1

With Tool Invocation:
- Query received
- LangGraph detects tool requirement
- ToolNode executes appropriate tool
- Tool output sent to LLM for processing
- LLM generates final response
- Response streamed to frontend
- Total LLM Calls: 2 (tool execution + final response)
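
The call counts can be made concrete by wrapping a stubbed LLM in a counter and replaying both flows. The `runFlow` function and the `CALL_TOOL` sentinel are illustrative, not real client code.

```typescript
// Replay one of the two flows against a counted LLM stub.
function runFlow(needsTool: boolean): { response: string; llmCalls: number } {
  let llmCalls = 0;
  const llm = (prompt: string): string => {
    llmCalls++;
    // Stub: answer directly unless a tool is needed and hasn't run yet.
    return prompt.includes("[tool output]") || !needsTool ? "final response" : "CALL_TOOL";
  };

  let reply = llm("user query");
  if (reply === "CALL_TOOL") {
    const toolOutput = "[tool output]"; // ToolNode runs outside the LLM
    reply = llm(`user query\n${toolOutput}`);
  }
  return { response: reply, llmCalls };
}
```

Running it shows one LLM call for the direct path and two when a tool executes, matching the counts above.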

## Cost and Logging

### MongoDB Handler Logging

Tracked Metrics:
- Final response content
- Token usage (input + output)
- Tool activation records
- Processing time
- Cost breakdown per call

Cost Calculation:
- Input token count measured
- Output token count tracked
- Cost calculated via Cost Callback
- Total breakdown stored in database

## Architecture

*Simple Chat Architecture Flow*

### Key Components
| Component | Purpose |
|---|---|
| LangGraph Router | Intelligent request analysis and routing |
| ToolNode | Tool execution and result handling |
| ChatbotNode | Standard conversational responses |
| MultiServerClient | MCP tool discovery and management |
| Cost Tracker | Token usage and pricing tracking |
| MongoDB Handler | Response and metrics persistence |

## Web Search Integration

### SearxNG Implementation

Independence from OpenAI:
- Self-hosted metasearch engine
- No dependency on OpenAI search features
- Works across multiple model providers
- Complete control over search behavior

Model Compatibility:
- Supported: GPT-4, GPT-3.5, Claude, Gemini, most models
- Not Supported: GPT-4o latest, DeepSeek, Qwen
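
Because SearxNG is self-hosted, a search is just an HTTP request to your own instance. The sketch below builds such a request URL; the base URL is a placeholder for your deployment, and JSON output must be enabled in the instance's `settings.yml`.

```typescript
// Build a SearxNG search URL against a self-hosted instance.
function buildSearchUrl(baseUrl: string, query: string): string {
  const url = new URL("/search", baseUrl);
  url.searchParams.set("q", query);
  // JSON format must be allowed in the SearxNG instance configuration.
  url.searchParams.set("format", "json");
  return url.toString();
}
```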

## Backend Intelligence

### Automatic Detection

The LangGraph backend automatically identifies:
- Search Queries: Activates SearxNG when needed
- Image Requests: Triggers DALL·E for generation
- MCP Tools: Selects appropriate integration tools
- Context Requirements: Assembles chat history and user data
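
A heuristic sketch of this classification step: the real detection happens inside LangGraph's query analysis, so the regexes and the `detect` name below are illustrative stand-ins only.

```typescript
type Detection = "search" | "image" | "mcp" | "chat";

// Naive first-match classification of a query into the categories above.
function detect(query: string): Detection {
  if (/draw|generate an? image|picture of/i.test(query)) return "image";
  if (/latest|news|current|today/i.test(query)) return "search";
  if (/slack|github|calendar|gmail|drive/i.test(query)) return "mcp";
  return "chat";
}
```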

## Troubleshooting

### Tool Not Triggering

Potential Issues:
- Model doesn’t support requested feature
- MCP tools not properly configured
- Tool filtering logic excluding required tool
- LangGraph not detecting tool requirement

Resolution Steps:
- Verify model supports feature (check compatibility matrix)
- Confirm MultiServerClient pulling latest MCP tools
- Review LangGraph analysis logs
- Check tool filtering configuration