This document covers the streamlined prompt flow through the system using LangGraph for intelligent backend routing:
  • From frontend trigger to Node.js LangGraph handlers
  • Single socket event for all operations
  • Backend-driven routing and decision making
  • Simplified frontend integration

Core Functions

handleSubmitPrompt(refine = false)

Triggers the complete prompt submission lifecycle through a single socket event. Behavior:
  • First message: Creates new chat and adds user as member
  • All messages: Emits single socket event to backend
  • Disables chat input until response received
  • No frontend routing decisions required
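The flow above can be sketched as follows, with all collaborators injected as parameters. This is a minimal illustration, not the actual implementation: every name except handleSubmitPrompt, enterNewPrompt, and the ai-query event is a hypothetical stand-in.

```typescript
// Hypothetical dependencies for the submission flow.
interface SubmitDeps {
  chatId: string | null;                                // null until the first message
  createChat: () => Promise<string>;                    // creates chat, adds user as member
  enterNewPrompt: (chatId: string) => Promise<string>;  // persists message, returns messageId
  emit: (event: string, payload: object) => void;       // socket.emit
  setInputDisabled: (disabled: boolean) => void;
}

async function handleSubmitPrompt(
  query: string,
  deps: SubmitDeps,
  refine = false,
): Promise<void> {
  // First message only: create the chat (and membership) before anything else.
  const chatId = deps.chatId ?? (await deps.createChat());
  // Persist the message entry in MongoDB before backend submission.
  const messageId = await deps.enterNewPrompt(chatId);
  // Disable chat input until the streamed response completes.
  deps.setInputDisabled(true);
  // Single socket event for all operations; the backend routes via LangGraph.
  deps.emit("ai-query", { query, messageId, chatId, refine });
}
```

Note that no routing decision happens here: the same event is emitted regardless of agents, documents, or tools.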

enterNewPrompt()

Creates the message entry in MongoDB before backend submission. Usage: must be called before the payload is sent to the backend.

Payload Structure

Single payload format for all AI interactions:
{
  query: string,
  messageId: string,
  modelId: string,
  chatId: string,
  model_name: string,
  msgCredit: number,
  agentId?: string,        // Optional: if agent selected
  documentIds?: string[],  // Optional: if documents attached
  imageData?: string       // Optional: if image included
}
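The structure above can be typed as a TypeScript interface. Field names are taken verbatim from the payload; the sample values below are illustrative only.

```typescript
// Typing of the unified AI query payload.
interface AIQueryPayload {
  query: string;
  messageId: string;
  modelId: string;
  chatId: string;
  model_name: string;
  msgCredit: number;
  agentId?: string;       // present only when an agent is selected
  documentIds?: string[]; // present only when documents are attached
  imageData?: string;     // present only when an image is included
}

// Example payload for a document-based query (values are made up).
const examplePayload: AIQueryPayload = {
  query: "Summarize this document",
  messageId: "msg_1",
  modelId: "model_1",
  chatId: "chat_1",
  model_name: "gpt-4o",
  msgCredit: 1,
  documentIds: ["doc_1"],
};
```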

Backend Processing

Single Entry Point

All AI requests go through one unified endpoint.
Socket Event: ai-query
Processing Flow:
  1. Receive socket event with payload
  2. LangGraph analyzes request requirements
  3. Backend determines operation type:
    • Normal chat
    • Document-based query
    • Agent interaction
    • Agent + Document combination
    • Tool requirements (search, image)
  4. Execute appropriate handler
  5. Stream response back via socket
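The branching in step 3 can be sketched as a pure classifier. This assumes the operation type is derivable from which optional payload fields are set; tool requirements (search, image) come from LangGraph's query analysis and are not shown here.

```typescript
// Operation types from the processing flow above.
type OperationType = "chat" | "document" | "agent" | "agent_document";

// Classify a request based on the optional payload fields.
function classifyOperation(p: { agentId?: string; documentIds?: string[] }): OperationType {
  const hasAgent = Boolean(p.agentId);
  const hasDocs = Boolean(p.documentIds?.length);
  if (hasAgent && hasDocs) return "agent_document"; // agent + document combination
  if (hasAgent) return "agent";                     // agent interaction
  if (hasDocs) return "document";                   // document-based query
  return "chat";                                    // normal chat
}
```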

Backend Operation Functions

Unified Handler:
  • processAIQuery(payload) - Single entry point for all operations
Internal Routing (Backend Only):
  • Simple chat processing
  • Document retrieval and RAG
  • Agent prompt integration
  • Tool activation (SearxNG, DALL·E)
  • Vision processing for images
No Separate Frontend Endpoints:
  • Previous architecture had multiple client-side function calls
  • Current architecture: the backend handles all routing via LangGraph
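One plausible shape for the backend-only routing inside processAIQuery is a dispatch table keyed by operation type. The handler names and responses below are illustrative, not the actual implementation.

```typescript
type Handler = (payload: Record<string, unknown>) => Promise<string>;

// Hypothetical internal handlers behind the single entry point.
const handlers: Record<string, Handler> = {
  chat: async () => "simple chat response",
  document: async () => "RAG response",
  agent: async () => "agent response",
  agent_document: async () => "agent + RAG response",
};

// Single entry point: route based on optional payload fields.
async function processAIQuery(payload: { agentId?: string; documentIds?: string[] }): Promise<string> {
  const hasAgent = Boolean(payload.agentId);
  const hasDocs = Boolean(payload.documentIds?.length);
  const op =
    hasAgent && hasDocs ? "agent_document" :
    hasAgent ? "agent" :
    hasDocs ? "document" : "chat";
  return handlers[op](payload as Record<string, unknown>);
}
```

The frontend never sees this table; it only emits ai-query and receives a stream.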

React Hooks

Core Hooks

  • useConversation() - Central conversation management with unified LLM streaming
  • useThunderBoltPopup() - File uploads, agent selection, prompt selection
  • useMediaUpload() - Media uploads for conversations

Data Hooks

  • useCustomGpt() - Fetch available custom GPT agents
  • usePrompt() - Retrieve saved prompts
  • useBrainDocs() - Load uploaded documents

Utility Hooks

  • useIntersectionObserver() - Infinite scroll for lists
  • useDebounce() - Debounce user input
  • useServerAction() - Server-side actions in Next.js

UI State Management

Unified State Updates

  • handleModelSelectionUrl() - Sync model state with URL
  • handleProAgentUrlState() - Sync Pro Agent state with URL
  • handleNewChatClick() - Start new chat session

Backend Intelligence

LangGraph Decision Making

Backend automatically handles:
Operation Detection:
  • Query analysis for intent
  • Document requirement identification
  • Agent configuration loading
  • Tool necessity determination
Context Assembly:
  • Chat history retrieval
  • Document vector search (if needed)
  • Agent prompt integration (if selected)
  • Tool setup (if required)
Response Generation:
  • Single or dual LLM call based on tools
  • Streaming response to frontend
  • Cost tracking and logging
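The "single or dual LLM call based on tools" step can be sketched with the LLM stubbed out. The dual-call mechanism shown here (first call derives a tool query, second call answers from the tool output) is an assumption about the flow, not confirmed implementation detail.

```typescript
type LLM = (prompt: string) => Promise<string>;

async function generateResponse(
  query: string,
  needsTool: boolean,
  llm: LLM,
  runTool: (toolQuery: string) => Promise<string>,
): Promise<{ calls: number; answer: string }> {
  if (!needsTool) {
    // No tool required: a single LLM call produces the answer.
    return { calls: 1, answer: await llm(query) };
  }
  // Dual call: first derive the tool input, then answer from the tool output.
  const toolOutput = await runTool(await llm(`Extract a tool query for: ${query}`));
  return { calls: 2, answer: await llm(`Answer "${query}" using: ${toolOutput}`) };
}
```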

UI Components

Lists and Modals

  • CommonList - Combined private/shared document list
  • RenderModalList - Model selection dropdown
  • AgentSelector - Agent selection interface (no routing logic needed)

Parameters

All operations use the same parameter set as the Payload Structure section above.

Backend-Only Parameters

These are handled internally by backend:
  • Tool selection decisions
  • RAG vs Simple Chat routing
  • Context assembly strategies
  • LLM call optimization

Socket Communication

Frontend Socket Events

Emit Events:
  • ai-query - Single event for all operations
Listen Events:
  • ai-response-stream - Streamed response chunks
  • ai-response-complete - Final response signal
  • ai-error - Error notifications
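Wiring these listeners on the client might look like the sketch below. A bare Node EventEmitter stands in for the socket.io client, and the chunk-accumulation logic is an assumption.

```typescript
import { EventEmitter } from "events";

// Attach the three documented listen events to a socket-like emitter.
function attachListeners(
  socket: EventEmitter,
  onUpdate: (text: string) => void,
  onDone: () => void,
  onError: (err: unknown) => void,
): void {
  let buffer = "";
  socket.on("ai-response-stream", (chunk: string) => {
    buffer += chunk;   // append each streamed chunk
    onUpdate(buffer);  // push the accumulated text to the UI
  });
  socket.on("ai-response-complete", onDone);
  socket.on("ai-error", onError);
}
```

The onDone callback is the natural place to re-enable the chat input that handleSubmitPrompt() disabled.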

Backend Socket Handling

Event Processing:
  1. Receive ai-query event
  2. LangGraph processes request
  3. Stream response via ai-response-stream
  4. Emit ai-response-complete when done
  5. Handle errors via ai-error
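The five steps above can be sketched as a single handler. The LangGraph call is stubbed as an async iterable of chunks, which is an assumption about its streaming shape, and a bare EventEmitter again stands in for the socket.

```typescript
import { EventEmitter } from "events";

async function handleAIQuery(
  socket: EventEmitter,
  payload: { query: string },
  runGraph: (p: { query: string }) => AsyncIterable<string>,
): Promise<void> {
  try {
    for await (const chunk of runGraph(payload)) {
      socket.emit("ai-response-stream", chunk); // step 3: stream chunks
    }
    socket.emit("ai-response-complete");        // step 4: final signal
  } catch (err) {
    socket.emit("ai-error", err);               // step 5: error path
  }
}
```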

Troubleshooting

Prompt Not Reaching Backend

Debug Steps:
  1. Verify handleSubmitPrompt() triggered
  2. Check socket connection status
  3. Confirm enterNewPrompt() executed
  4. Review browser console for errors
  5. Validate payload structure
Common Issues:
  • Socket disconnected
  • Payload validation failure
  • Network interruption
  • Backend service down
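For the "Validate payload structure" step, a quick check like the following can surface missing required fields before debugging further. The required-field list mirrors the documented payload; the helper itself is hypothetical.

```typescript
// Required fields from the unified payload (optional ones excluded).
const REQUIRED = ["query", "messageId", "modelId", "chatId", "model_name", "msgCredit"] as const;

// Return the names of required fields that are absent or empty.
function missingFields(payload: Record<string, unknown>): string[] {
  return REQUIRED.filter((k) => payload[k] === undefined || payload[k] === "");
}
```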

Response Not Displaying

Debug Steps:
  1. Check socket event listeners active
  2. Verify ai-response-stream handler
  3. Review UI update logic
  4. Test with simplified query
Common Issues:
  • Event listener not attached
  • UI state not updating
  • Response parsing error
  • Stream interruption

Agent or Document Issues

Debug Steps:
  1. Verify agent/document IDs in payload
  2. Check backend logs for retrieval
  3. Validate database connections
  4. Test with simple queries first
Common Issues:
  • Invalid agent/document ID
  • Database connection failure
  • Vector search timeout
  • Permission issues