A Model Context Protocol (MCP) server that provides AI-powered customer support using Cursor AI and Glama.ai integration.
# AI Customer Support Bot - MCP Server

A modern, extensible MCP server framework for building AI-powered customer support systems

Features • Quick Start • API Reference • Architecture • Contributing

## Overview
A Model Context Protocol (MCP) compliant server framework built with modern Python. Designed for developers who want to create intelligent customer support systems without vendor lock-in. Clean architecture, battle-tested patterns, and ready for any AI provider.
```mermaid
graph TB
    Client[HTTP Client] --> API[API Server]
    API --> MW[Middleware Layer]
    MW --> SVC[Service Layer]
    SVC --> CTX[Context Manager]
    SVC --> AI[AI Integration]
    SVC --> DAL[Data Access Layer]
    DAL --> DB[(PostgreSQL)]
```
## Features

| | |
| --- | --- |
| Clean Architecture | MCP Compliant |
| Production Ready | High Performance |
| AI Agnostic | Health Monitoring |
| Secure by Default | Batch Processing |
## Quick Start

### Prerequisites
- Python 3.8+
- PostgreSQL
- Your favorite AI service (OpenAI, Anthropic, etc.)
### Installation

```bash
# Clone and set up
git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up the environment
cp .env.example .env
# Edit .env with your configuration
```
### Configuration

```bash
# .env file
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
```
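As a minimal sketch, these variables could be read into a settings object like the one below. The `Settings` class is hypothetical (the repository's actual config code may differ); the variable names match the `.env` example above.

```python
import os


class Settings:
    """Loads server configuration from environment variables (illustrative helper)."""

    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.database_url = env.get(
            "DATABASE_URL", "postgresql://user:password@localhost/customer_support_bot"
        )
        self.secret_key = env.get("SECRET_KEY", "")
        # Rate-limit values arrive as strings and are coerced to ints.
        self.rate_limit_requests = int(env.get("RATE_LIMIT_REQUESTS", "100"))
        self.rate_limit_period = int(env.get("RATE_LIMIT_PERIOD", "60"))


# Example: override one value, fall back to defaults for the rest.
settings = Settings({"RATE_LIMIT_REQUESTS": "250", "SECRET_KEY": "s3cret"})
```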
### Run

```bash
# Set up the database
createdb customer_support_bot

# Start the server
python app.py
# Server running at http://localhost:8000
```
## API Reference

### Core Endpoints

#### Health Check

```http
GET /mcp/health
```
#### Process Single Query

```http
POST /mcp/process
Content-Type: application/json
X-MCP-Auth: your-token
X-MCP-Version: 1.0

{
  "query": "How do I reset my password?",
  "priority": "high"
}
```
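A small helper for assembling this request could look like the following. This is a sketch, not code from the repository; `build_mcp_request` is a hypothetical name, and only the headers and body shown above are assumed.

```python
import json


def build_mcp_request(query, token, priority="normal", version="1.0"):
    """Builds the headers and JSON body for POST /mcp/process (illustrative)."""
    headers = {
        "Content-Type": "application/json",
        "X-MCP-Auth": token,
        "X-MCP-Version": version,
    }
    body = json.dumps({"query": query, "priority": priority})
    return headers, body


headers, body = build_mcp_request(
    "How do I reset my password?", "your-token", priority="high"
)
```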
#### Batch Processing

```http
POST /mcp/batch
Content-Type: application/json
X-MCP-Auth: your-token

{
  "queries": [
    "How do I reset my password?",
    "What are your business hours?"
  ]
}
```
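Server-side, a batch endpoint like this would typically fan the queries out concurrently. The sketch below shows the pattern with `asyncio.gather` and a stub in place of the real AI call; none of these names are taken from the repository.

```python
import asyncio
from typing import Dict, List


async def process_query(query: str) -> Dict[str, str]:
    """Stand-in for the real per-query AI call; here we just echo the query."""
    await asyncio.sleep(0)  # yield control, as a real network call would
    return {"query": query, "response": f"Answer to: {query}"}


async def process_batch(queries: List[str]) -> List[Dict[str, str]]:
    """Processes all queries concurrently, mirroring what /mcp/batch might do."""
    return await asyncio.gather(*(process_query(q) for q in queries))


results = asyncio.run(process_batch([
    "How do I reset my password?",
    "What are your business hours?",
]))
```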
### Response Format

#### Success Response

```json
{
  "status": "success",
  "data": {
    "response": "Generated AI response",
    "confidence": 0.95,
    "processing_time": "120ms"
  },
  "meta": {
    "request_id": "req_123456",
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
#### Error Response

```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "retry_after": 60,
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
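On the client side, the `retry_after` field lets callers back off cleanly. A minimal sketch (the `handle_error` helper is hypothetical, not part of the project):

```python
def handle_error(error: dict, max_wait: int = 120) -> int:
    """Given an error response like the one above, returns how many seconds
    to wait before retrying, or 0 for non-retryable errors (illustrative)."""
    if error.get("code") == "RATE_LIMIT_EXCEEDED":
        retry_after = error.get("details", {}).get("retry_after", 60)
        return min(retry_after, max_wait)  # cap the wait defensively
    return 0


wait = handle_error({
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded",
    "details": {"retry_after": 60, "timestamp": "2024-02-14T12:00:00Z"},
})
```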
## Architecture

### Project Structure

```
AI-Customer-Support-Bot--MCP-Server
├── app.py             # FastAPI application
├── database.py        # Database configuration
├── middleware.py      # Auth & rate limiting
├── models.py          # ORM models
├── mcp_config.py      # MCP protocol config
├── requirements.txt   # Dependencies
└── .env.example       # Environment template
```
### Layer Responsibilities

| Layer | Purpose | Components |
| --- | --- | --- |
| API | HTTP endpoints, validation | FastAPI routes, Pydantic models |
| Middleware | Auth, rate limiting, logging | Token validation, request throttling |
| Service | Business logic, AI integration | Context management, AI orchestration |
| Data | Persistence, models | PostgreSQL, SQLAlchemy ORM |
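The middleware layer's request throttling could be implemented as a sliding-window limiter like the sketch below. This is one possible approach, assuming per-client limits; `middleware.py` in the repository may do it differently.

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window rate limiter of the kind the middleware might use."""

    def __init__(self, max_requests: int = 100, period: float = 60.0):
        self.max_requests = max_requests
        self.period = period
        self._hits = defaultdict(deque)  # client_id -> timestamps of recent hits

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        hits = self._hits[client_id]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] >= self.period:
            hits.popleft()
        if len(hits) < self.max_requests:
            hits.append(now)
            return True
        return False


limiter = RateLimiter(max_requests=2, period=60.0)
```

A middleware would call `allow()` per request and answer with the `RATE_LIMIT_EXCEEDED` error shown above when it returns `False`.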
## Extending with AI Services

### Add Your AI Provider

1. Install your AI SDK:

```bash
pip install openai  # or anthropic, cohere, etc.
```

2. Configure the environment:

```bash
# Add to .env
AI_SERVICE_API_KEY=sk-your-api-key
AI_SERVICE_MODEL=gpt-4
```
3. Implement the service integration:

```python
# In the service layer
class AIService:
    async def generate_response(self, query: str, context: dict) -> str:
        ai_response = ...  # call your provider here
        return ai_response
```
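One way to keep the service AI-agnostic is to hide the provider behind a small interface. The sketch below uses a stub provider so it runs without an API key; the `AIProvider`/`EchoProvider` names are illustrative, not part of the repository.

```python
import asyncio
from typing import Protocol


class AIProvider(Protocol):
    """Any backend (OpenAI, Anthropic, ...) that can complete a prompt."""

    async def complete(self, prompt: str) -> str: ...


class EchoProvider:
    """Stand-in provider so this sketch runs without an external API."""

    async def complete(self, prompt: str) -> str:
        return f"[stub answer for] {prompt}"


class AIService:
    def __init__(self, provider):
        self.provider = provider

    async def generate_response(self, query: str, context: dict) -> str:
        # Fold conversation history into the prompt before calling the provider.
        history = context.get("history", [])
        prompt = "\n".join(history + [query])
        return await self.provider.complete(prompt)


service = AIService(EchoProvider())
answer = asyncio.run(
    service.generate_response("How do I reset my password?", {"history": []})
)
```

Swapping in a real provider then only means implementing `complete()` with your SDK of choice.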
## Development

### Running Tests

```bash
pytest tests/
```

### Code Quality

```bash
# Format code
black .

# Lint
flake8

# Type checking
mypy .
```

### Docker Support

Docker containerization is coming soon.
## Monitoring & Observability

### Health Metrics

- Service uptime
- Database connectivity
- Request rates
- Response times
- Memory usage
### Logging

Structured logging is included; each entry looks like:

```json
{
  "timestamp": "2024-02-14T12:00:00Z",
  "level": "INFO",
  "message": "Query processed",
  "request_id": "req_123456",
  "processing_time": 120
}
```
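Log lines in this shape can be produced with a custom `logging.Formatter`. A minimal sketch (hypothetical helper, not the project's actual logging setup; `request_id` and `processing_time` are attached via `extra=`):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Formats log records as JSON lines matching the example above."""

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Copy over optional fields supplied via `extra=...`.
        for field in ("request_id", "processing_time"):
            if hasattr(record, field):
                payload[field] = getattr(record, field)
        return json.dumps(payload)


logger = logging.getLogger("mcp")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Query processed", extra={"request_id": "req_123456", "processing_time": 120})
```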
## Security

### Built-in Security Features

- **Token Authentication** - secure API access
- **Rate Limiting** - DoS protection
- **Input Validation** - SQL injection prevention
- **Audit Logging** - request tracking
- **Environment Secrets** - secure config management
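For token authentication, comparing the `X-MCP-Auth` header in constant time avoids timing side-channels. A one-function sketch of what the auth middleware could do (the helper name is illustrative):

```python
import hmac


def verify_token(provided: str, expected: str) -> bool:
    """Constant-time comparison of the X-MCP-Auth token against the secret."""
    return hmac.compare_digest(provided.encode(), expected.encode())
```

`hmac.compare_digest` examines both inputs fully regardless of where they first differ, unlike a plain `==` on strings.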
## Deployment

### Environment Setup

```bash
# Production environment variables
DATABASE_URL=postgresql://prod-user:password@prod-host/db
RATE_LIMIT_REQUESTS=1000
LOG_LEVEL=WARNING
```
### Scaling Considerations

- Use connection pooling for the database
- Use Redis-backed rate limiting in multi-instance setups
- Add a load balancer for high availability
- Monitor with Prometheus/Grafana
## Contributing

We love contributions! Here's how to get started:

### Development Setup

```bash
# Fork the repo, then:
git clone https://github.com/your-username/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create a feature branch
git checkout -b feature/amazing-feature

# Make your changes
# ...

# Test your changes
pytest

# Submit a PR
```
### Contribution Guidelines

- Write tests for new features
- Update documentation
- Follow the existing code style
- Ensure CI passes
## License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by Chirag Patankar

⭐ Star this repo if you find it helpful! ⭐

Report Bug • Request Feature • Documentation