Streamlined chat pipeline using LangGraph for intelligent tool routing and response generation in Node.js backend.
Simple Chat uses LangGraph in Node.js for unified request handling with automatic tool selection and MCP integration support for Slack, GitHub, Google Drive, Gmail, and Google Calendar.

Processing Flow

1. Unified Request Handling

  1. Frontend emits single socket event with user query
  2. Node.js server receives request and invokes LangGraph
  3. LangGraph analyzes query requirements
  4. MultiServerClient fetches available MCP tools
  5. Backend determines optimal processing path
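A minimal in-process sketch of the single-event contract between frontend and backend. The event name `chat:message`, the `MiniSocket` class, and the handler shape are illustrative assumptions, not the actual API; the real system uses a socket library and invokes LangGraph inside the handler.

```typescript
// Stand-in for the socket layer: one event carries the user query,
// and the backend registers exactly one handler for it.
type Handler = (payload: { sessionId: string; query: string }) => string;

class MiniSocket {
  private handlers = new Map<string, Handler>();
  on(event: string, h: Handler) { this.handlers.set(event, h); }
  emit(event: string, payload: { sessionId: string; query: string }): string {
    const h = this.handlers.get(event);
    if (!h) throw new Error(`no handler for ${event}`);
    return h(payload);
  }
}

const socket = new MiniSocket();
// Backend side: receive the request and hand it to the graph
// (the LangGraph invocation is stubbed here as a routing marker).
socket.on("chat:message", ({ query }) =>
  `routed:${query.length > 0 ? "graph" : "reject"}`);
```

Because everything funnels through one event, the frontend never needs to know which tools or models will handle the query.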

2. Intelligent Tool Management

  • Available MCP Tools: Slack, GitHub, Google Drive, Google Calendar, Gmail
  • Dynamic Filtering: Unnecessary tools excluded based on query analysis
  • Automatic Activation: LangGraph decides tool usage without frontend input
  • Context Assembly: Tools and prompts combined before LLM invocation
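The dynamic-filtering step can be sketched as a keyword match over the registered MCP tools, so that only relevant tool definitions reach the LLM context. The tool names and keyword lists below are hypothetical; the real filtering logic lives server-side and may be model-driven rather than keyword-driven.

```typescript
// Hypothetical MCP tool registry with per-tool trigger keywords.
type McpTool = { name: string; keywords: string[] };

const MCP_TOOLS: McpTool[] = [
  { name: "slack",           keywords: ["slack", "channel"] },
  { name: "github",          keywords: ["github", "repo", "issue", "pull request"] },
  { name: "google-drive",    keywords: ["drive", "file", "folder"] },
  { name: "gmail",           keywords: ["gmail", "email", "inbox"] },
  { name: "google-calendar", keywords: ["calendar", "meeting", "event"] },
];

// Exclude tools with no keyword match so the prompt stays small.
function filterTools(query: string, tools: McpTool[] = MCP_TOOLS): string[] {
  const q = query.toLowerCase();
  return tools.filter((t) => t.keywords.some((k) => q.includes(k))).map((t) => t.name);
}
```

Shrinking the tool list before invocation reduces prompt tokens and lowers the chance of the model calling an irrelevant tool.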

3. LangGraph State Management

Execution Nodes:
  • ToolNode: Executes required tools when detected
  • ChatbotNode: Handles standard conversational responses
Flow Determination:
  • Query analyzed for tool requirements
  • Appropriate node selected automatically
  • Context (query + history + tools) assembled
  • Response generated and streamed
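The two-node flow above can be illustrated with a toy state machine. Real code would build this with LangGraph's `StateGraph` and conditional edges; this stand-in only shows the control flow, and the tool-detection regex is a placeholder for the model's own tool-call decision.

```typescript
// Shared state threaded through the graph nodes.
type State = { query: string; needsTool: boolean; toolOutput?: string; response?: string };

// ToolNode: executes the required tool and records its output.
const toolNode = (s: State): State =>
  ({ ...s, toolOutput: `tool-result(${s.query})` });

// ChatbotNode: generates the final response, using tool output if present.
const chatbotNode = (s: State): State =>
  ({ ...s, response: s.toolOutput ?? `answer(${s.query})` });

// Conditional edge: tool-requiring queries pass through ToolNode first.
function runGraph(query: string): State {
  let state: State = { query, needsTool: /\b(search|fetch|list)\b/i.test(query) };
  if (state.needsTool) state = toolNode(state);
  return chatbotNode(state);
}
```

The key property is that node selection happens entirely in the backend graph; the frontend sends the same event either way.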

4. Context Initialization

  • Chat repository initialized with session ID
  • Historical messages retrieved from database
  • Conversation memory prepared
  • Combined context ready for processing
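The context-initialization steps can be sketched as follows; the repository shape and method names are assumptions (the real implementation reads history from the database by session ID).

```typescript
// Hypothetical in-memory chat repository keyed by session ID.
type Message = { role: "user" | "assistant"; content: string };

class ChatRepository {
  private store = new Map<string, Message[]>();
  append(sessionId: string, m: Message) {
    const hist = this.store.get(sessionId) ?? [];
    hist.push(m);
    this.store.set(sessionId, hist);
  }
  history(sessionId: string): Message[] { return this.store.get(sessionId) ?? []; }
}

// Combined context = retrieved history + current query, ready for the graph.
function buildContext(repo: ChatRepository, sessionId: string, query: string): Message[] {
  return [...repo.history(sessionId), { role: "user", content: query }];
}
```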

5. Execution Logic

LangGraph intelligently routes each request to the appropriate handler:
  • Tool Required → ToolNode executes → Response generated
  • Standard Query → ChatbotNode responds directly
  • Combined Context → History + Current query processed together

Tool Activation Rules

| Tool | Activation Condition |
| --- | --- |
| Web Search (SearxNG) | All models except GPT-4o latest, DeepSeek, Qwen |
| Image Generation | OpenAI models only |
| MCP Tools | Dynamically filtered per query |
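These rules can be expressed as simple capability checks. The model-identifier matching below is an assumption for illustration; the backend's actual compatibility matrix may compare exact model IDs.

```typescript
// Models for which SearxNG web search is not activated, per the table above.
const WEB_SEARCH_EXCLUDED = ["gpt-4o-latest", "deepseek", "qwen"];

function webSearchAvailable(model: string): boolean {
  const m = model.toLowerCase();
  return !WEB_SEARCH_EXCLUDED.some((x) => m.includes(x));
}

function imageGenerationAvailable(model: string): boolean {
  // OpenAI models only, per the table (prefix check is an assumption).
  return model.toLowerCase().startsWith("gpt-");
}
```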

Response Processing

Flow Options

Without Tool Invocation:
  1. Query received
  2. LangGraph routes to ChatbotNode
  3. LLM generates response
  4. Response streamed to frontend
  5. Total LLM Calls: 1
With Tool Invocation:
  1. Query received
  2. LangGraph detects tool requirement
  3. ToolNode executes appropriate tool
  4. Tool output sent to LLM for processing
  5. LLM generates final response
  6. Response streamed to frontend
  7. Total LLM Calls: 2 (tool-call generation + final response)
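The two flows above can be traced step by step; each `llm:` entry counts one model invocation, which is where the 1-call vs 2-call difference comes from. Step names are illustrative only.

```typescript
// Trace of the response flow; needsTool switches between the two paths.
function trace(needsTool: boolean): string[] {
  const steps = ["receive"];
  if (needsTool) {
    steps.push("llm:tool-call", "tool:execute"); // first call requests the tool
  }
  steps.push("llm:respond", "stream");           // final call generates the answer
  return steps;
}

// LLM call count = number of llm: entries in the trace.
const llmCalls = (needsTool: boolean) =>
  trace(needsTool).filter((s) => s.startsWith("llm")).length;
```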

Cost and Logging

MongoDB Handler Logging

Tracked Metrics:
  • Final response content
  • Token usage (input + output)
  • Tool activation records
  • Processing time
  • Cost breakdown per call
Cost Calculation:
  • Input token count measured
  • Output token count tracked
  • Cost calculated via Cost Callback
  • Total breakdown stored in database
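The cost calculation reduces to tokens times a per-model rate, summed over input and output. A minimal sketch of the Cost Callback math, with placeholder rate numbers rather than the real pricing table:

```typescript
type Usage = { inputTokens: number; outputTokens: number };
type Rates = { inputPerMTok: number; outputPerMTok: number }; // USD per million tokens

// Cost = tokens x per-million-token rate, summed for input and output.
function computeCost(usage: Usage, rates: Rates): number {
  return (
    (usage.inputTokens / 1_000_000) * rates.inputPerMTok +
    (usage.outputTokens / 1_000_000) * rates.outputPerMTok
  );
}
```

The resulting breakdown (input cost, output cost, total) is what gets persisted per call by the MongoDB handler.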

Architecture

Simple Chat Architecture Diagram (image)

Key Components

| Component | Purpose |
| --- | --- |
| LangGraph Router | Intelligent request analysis and routing |
| ToolNode | Tool execution and result handling |
| ChatbotNode | Standard conversational responses |
| MultiServerClient | MCP tool discovery and management |
| Cost Tracker | Token usage and pricing tracking |
| MongoDB Handler | Response and metrics persistence |

Web Search Integration

SearxNG Implementation

Independence from OpenAI:
  • Self-hosted metasearch engine
  • No dependency on OpenAI search features
  • Works across multiple model providers
  • Complete control over search behavior
Model Support:
  • Supported: GPT-4, GPT-3.5, Claude, Gemini, most models
  • Not Supported: GPT-4o latest, DeepSeek, Qwen

Backend Intelligence

Automatic Detection

LangGraph backend automatically identifies:
  • Search Queries: Activates SearxNG when needed
  • Image Requests: Triggers DALL·E for generation
  • MCP Tools: Selects appropriate integration tools
  • Context Requirements: Assembles chat history and user data
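A hypothetical keyword-based sketch of this detection step is below. Note this is an illustration of the categories only: in the real backend the model itself decides tool usage via tool-calling, not a hand-written classifier, and all names here are assumptions.

```typescript
type Intent = "web-search" | "image" | "mcp" | "chat";

// Toy classifier mapping a query to one of the detection categories above.
function detectIntent(query: string): Intent {
  const q = query.toLowerCase();
  if (/\b(draw|generate an image|picture of)\b/.test(q)) return "image";
  if (/\b(latest|news|search the web)\b/.test(q)) return "web-search";
  if (/\b(slack|github|gmail|calendar|drive)\b/.test(q)) return "mcp";
  return "chat";
}
```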

Troubleshooting

Tool Not Triggering

Potential Issues:
  • Model doesn’t support requested feature
  • MCP tools not properly configured
  • Tool filtering logic excluding required tool
  • LangGraph not detecting tool requirement
Debug Steps:
  1. Verify model supports feature (check compatibility matrix)
  2. Confirm MultiServerClient is pulling the latest MCP tools
  3. Review LangGraph analysis logs
  4. Check tool filtering configuration