Overview
Weam’s credit system tracks and calculates:
- Prompt tokens: Input text sent to the AI model
- Completion tokens: AI-generated response text
- Total cost: Credits consumed per request
- Usage analytics: Stored per thread/session for reporting and billing
How Credit Calculation Works
Token-Based Pricing Model
Credits are calculated based on token usage (a worked sketch follows this list):
- Input tokens (your prompts): Lower cost per token
- Output tokens (AI responses): Higher cost per token
- Model complexity: Advanced models cost more credits per token
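As a concrete illustration, a request's credit cost can be derived from the two token counts and the model's per-1K rates. The sketch below is minimal; the rate values and the `RATES_PER_1K` / `estimateCredits` names are illustrative assumptions, not Weam's actual pricing or API:

```javascript
// Illustrative only: these rates are placeholder assumptions, not Weam's pricing.
const RATES_PER_1K = {
  'gpt-4o': { input: 5, output: 15 }, // output tokens cost more per token than input
};

function estimateCredits(modelName, promptTokens, completionTokens) {
  const rate = RATES_PER_1K[modelName];
  if (!rate) throw new Error(`No rate configured for model: ${modelName}`);
  return (promptTokens / 1000) * rate.input +
         (completionTokens / 1000) * rate.output;
}

// 1,200 prompt tokens and 300 completion tokens on gpt-4o:
console.log(estimateCredits('gpt-4o', 1200, 300)); // 6 + 4.5 = 10.5 credits
```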
Real-Time Tracking
- Tokens counted for every AI interaction
- Credits calculated immediately after each response
- Usage stored by user, Brain, and conversation thread
- Available in Reports for monitoring and optimization
Technical Implementation
Core Components
Cost Calculator
Handles token counting and cost calculation for each LLM request.

Key Functions:

| Function | Purpose |
| --- | --- |
| addPromptTokens(count) | Tracks input tokens from user prompts |
| addCompletionTokens(count) | Tracks output tokens from AI responses |
| calculateTotalCost(modelName) | Calculates total credits using model rates |
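A minimal sketch of such a calculator, assuming a per-1K-token rate map; only the three method names come from the table above, while the class shape and field names are illustrative:

```javascript
// Sketch of a cost calculator; internals are assumptions, only the
// three public methods are documented in the table above.
class CostCalculator {
  constructor(modelRates) {
    this.modelRates = modelRates; // per-1K-token rates keyed by model name
    this.promptTokens = 0;
    this.completionTokens = 0;
  }

  // Tracks input tokens from user prompts
  addPromptTokens(count) {
    this.promptTokens += count;
  }

  // Tracks output tokens from AI responses
  addCompletionTokens(count) {
    this.completionTokens += count;
  }

  // Calculates total credits using the model's per-1K rates
  calculateTotalCost(modelName) {
    const rate = this.modelRates[modelName];
    return (this.promptTokens / 1000) * rate.prompt +
           (this.completionTokens / 1000) * rate.completion;
  }
}
```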
Callback Handler
Automatically processes and stores cost data after each AI interaction.

Process Flow (sketched in code after this list):
- Collects prompt + completion tokens from the AI response
- Calls cost calculation function
- Creates comprehensive token usage data
- Stores data in MongoDB
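A sketch of what this callback might look like, assuming an OpenAI-style `usage` object on the response and a `tokenUsage` collection; all names and the document shape are illustrative, not Weam's actual code:

```javascript
// Sketch of a post-response callback; names and document shape are assumptions.
// `db` is a connected MongoDB database handle (e.g., from the `mongodb` driver).
async function onResponseComplete(db, calculator, response, context) {
  // 1. Collect prompt + completion tokens from the AI response
  calculator.addPromptTokens(response.usage.prompt_tokens);
  calculator.addCompletionTokens(response.usage.completion_tokens);

  // 2. Call the cost calculation function
  const totalCost = calculator.calculateTotalCost(context.modelName);

  // 3. Create a comprehensive token usage record
  const usageRecord = {
    threadId: context.threadId,
    userId: context.userId,
    brainId: context.brainId,
    model: context.modelName,
    promptTokens: response.usage.prompt_tokens,
    completionTokens: response.usage.completion_tokens,
    totalCost,
    createdAt: new Date(),
  };

  // 4. Store the record in MongoDB
  await db.collection('tokenUsage').insertOne(usageRecord);
}
```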
Model Cost Mapping
Each AI provider has specific credit rates defined per 1,000 tokens.

OpenAI Example:
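For illustration, a rate map might look like the following; all numbers are placeholders, not Weam's actual rates:

```javascript
// Placeholder credit rates per 1,000 tokens; not Weam's actual pricing.
const MODEL_COST_MAPPING = {
  'gpt-3.5-turbo': { prompt: 0.5, completion: 1.5 },
  'gpt-4':         { prompt: 30,  completion: 60  },
  'gpt-4o':        { prompt: 5,   completion: 15  },
};
```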
Data Persistence
Thread Repository
Stores detailed usage data for each conversation thread.

Stored Data (an example record follows this list):
- Thread/session identification
- Prompt & completion token counts
- Total credit cost per interaction
- Model metadata and timestamps
- User and Brain association
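For illustration, one stored record might look like this; the field names are assumptions inferred from the list above:

```javascript
// Hypothetical shape of one stored usage record.
const exampleUsageRecord = {
  threadId: '66b2f0c4e1a5a2d3c4b5a697',  // thread/session identification
  userId: '66b2f0c4e1a5a2d3c4b5a698',    // user association
  brainId: '66b2f0c4e1a5a2d3c4b5a699',   // Brain association
  model: 'gpt-4o',                       // model metadata
  promptTokens: 1200,                    // input token count
  completionTokens: 300,                 // output token count
  totalCost: 10.5,                       // credits for this interaction
  createdAt: new Date('2024-06-01T12:00:00Z'), // timestamp
};
```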
Credit Calculation Workflow
Step-by-Step Process
1. User Interaction: User sends a prompt in any Brain
2. Token Counting: System counts input tokens
3. AI Processing: Request sent to the selected AI model
4. Response Analysis: Completion tokens counted
5. Cost Calculation: Total credit usage computed
6. Data Storage: Callback stores all usage data
7. Persistence: MongoDB stores the data
8. Reporting: Data becomes available in usage analytics
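Tying the steps together, the request path could be orchestrated roughly as below, reusing the `CostCalculator` sketch from earlier; the helpers `countTokens()` and `callModel()` are hypothetical stand-ins for Weam's actual tokenizer and model client:

```javascript
// Hypothetical end-to-end flow; countTokens() and callModel() are assumed helpers.
async function handlePrompt(db, context, prompt) {
  const calculator = new CostCalculator(MODEL_COST_MAPPING);

  // Steps 1-2: user sends a prompt; count its input tokens
  calculator.addPromptTokens(countTokens(prompt));

  // Step 3: send the request to the selected AI model
  const response = await callModel(context.modelName, prompt);

  // Steps 4-5: count completion tokens and compute total credit usage
  calculator.addCompletionTokens(response.usage.completion_tokens);
  const totalCost = calculator.calculateTotalCost(context.modelName);

  // Steps 6-7: store all usage data in MongoDB
  await db.collection('tokenUsage').insertOne({
    ...context, // threadId, userId, brainId, modelName
    promptTokens: calculator.promptTokens,
    completionTokens: calculator.completionTokens,
    totalCost,
    createdAt: new Date(), // step 8: record becomes visible to usage analytics
  });

  return { response, totalCost };
}
```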
Multi-Provider Support
Provider-Specific Handlers
Each AI provider has dedicated cost calculation logic:

OpenAI Handler:
- Supports GPT-3.5, GPT-4, and GPT-4o models
- Unified pricing for input/output tokens

Gemini Handler:
- Supports Gemini 1.5 Flash and Pro models
- Separate pricing for input vs. output tokens

Anthropic Handler:
- Supports Claude models
- Context-aware pricing for long conversations

Open-Source Handler:
- Supports open-source models
- Custom pricing configuration
Implementation Example
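As a hedged sketch rather than Weam's actual code, a provider-specific handler could wrap the shared calculator like this (the rates are placeholders; the `usageMetadata` field names follow Google's public Gemini API):

```javascript
// Illustrative Gemini handler; rates are placeholders, structure is assumed.
const GEMINI_RATES = {
  'gemini-1.5-flash': { prompt: 0.075, completion: 0.3 },
  'gemini-1.5-pro':   { prompt: 1.25,  completion: 5.0 },
};

function createGeminiCostHandler() {
  const calculator = new CostCalculator(GEMINI_RATES);
  return {
    // Gemini responses report usage via a usageMetadata object
    recordUsage(usageMetadata) {
      calculator.addPromptTokens(usageMetadata.promptTokenCount);
      calculator.addCompletionTokens(usageMetadata.candidatesTokenCount);
    },
    totalCost(modelName) {
      return calculator.calculateTotalCost(modelName);
    },
  };
}
```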
Monitoring Your Usage
Reports Dashboard
Access detailed usage analytics through Settings → Reports.

Available Metrics:
- Total credits consumed by user and time period
- Model usage breakdown showing which AI models are used most
- Token efficiency comparing input vs output token ratios
- Cost trends over time for budget planning
- Team usage patterns for optimization opportunities
Implementation Architecture
Cost Calculation Module
Location: Node.js backend cost calculation services

Core Functions:

| Function | Purpose |
| --- | --- |
| calculateTokenCost() | Core token counting and cost calculation |
| storeUsageData() | Database persistence for usage tracking |
| getModelPricing() | Retrieve pricing for specific models |
| trackUsageMetrics() | Real-time usage monitoring |
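A skeleton showing how these four functions could fit together; only the names come from the table, and the signatures and internals are assumptions:

```javascript
// Skeleton only; signatures and internals are assumptions.
function getModelPricing(modelName) {
  // Retrieve per-1K-token pricing for a specific model
  return MODEL_COST_MAPPING[modelName];
}

function calculateTokenCost(modelName, promptTokens, completionTokens) {
  // Core token counting and cost calculation
  const rate = getModelPricing(modelName);
  return (promptTokens / 1000) * rate.prompt +
         (completionTokens / 1000) * rate.completion;
}

async function storeUsageData(db, usageRecord) {
  // Database persistence for usage tracking
  await db.collection('tokenUsage').insertOne(usageRecord);
}

function trackUsageMetrics(usageRecord) {
  // Real-time usage monitoring hook (metrics, logging, alerting)
  console.log(`[usage] ${usageRecord.model}: ${usageRecord.totalCost} credits`);
}
```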
Extending to New Providers
To add support for new AI providers:
1. Define Cost Mapping: Set credit rates per 1K tokens for each model
2. Implement Tracking: Add token counting logic
3. Store Data: Ensure usage data is persisted to MongoDB
4. Test Integration: Verify accurate token counting and cost calculation
Example Template
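A possible template along these lines; every name, field, and rate below is a placeholder:

```javascript
// Template sketch for a new provider; all values and names are placeholders.

// 1. Define Cost Mapping: credit rates per 1K tokens for each model
const NEW_PROVIDER_RATES = {
  'new-model-small': { prompt: 0.25, completion: 0.75 },
  'new-model-large': { prompt: 2.5,  completion: 7.5  },
};

// 2. Implement Tracking: adapt the provider's usage fields to the calculator
function trackNewProviderUsage(calculator, providerResponse) {
  calculator.addPromptTokens(providerResponse.inputTokens);      // field name assumed
  calculator.addCompletionTokens(providerResponse.outputTokens); // field name assumed
}

// 3. Store Data: reuse the shared MongoDB persistence (see storeUsageData above)
// 4. Test Integration: assert token counts and calculated cost match expectations
```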
Understanding credits helps you optimize costs while maximizing the value of AI in your workflows.