🚀 MCP Agentic AI Server Project
A comprehensive Model Context Protocol (MCP) implementation featuring dual AI server architecture, real-time monitoring, and an interactive dashboard.
🌟 Project Overview
This project demonstrates a production-ready MCP (Model Context Protocol) Agentic AI Server system with:
- 🔧 Custom MCP Server - Task-based AI processing with tool integration
- 🌐 Public MCP Server - Direct AI query processing
- 🎨 Interactive Dashboard - Real-time monitoring and user interface
- 📊 Live Statistics - Performance metrics and analytics
- 🛠️ Extensible Tools - Modular tool framework for custom functionality
🏗️ Architecture
┌─────────────────────────────────────────────┐
│          🎨 Streamlit Dashboard             │
│               (Port 8501)                   │
└─────────────────────┬───────────────────────┘
                      │
         ┌────────────┴────────────┐
         ▼                         ▼
┌───────────────────┐     ┌──────────────────┐
│  🔧 Custom MCP    │     │  🌐 Public MCP   │
│   (Port 8000)     │     │   (Port 8001)    │
│                   │     │                  │
│ • Task Creation   │     │ • Direct Queries │
│ • Tool Integration│     │ • Simple AI Chat │
│ • Async Processing│     │ • Real-time Stats│
└─────────┬─────────┘     └────────┬─────────┘
          │                        │
          └───────────┬────────────┘
                      ▼
             ┌─────────────────┐
             │   🧠 Google     │
             │   Gemini API    │
             └─────────────────┘
🚀 Quick Start
Prerequisites
- Python 3.12+ (Conda environment recommended)
- Google Gemini API Key (available from Google AI Studio)
- Git for cloning the repository
1. Clone & Setup
# Clone the repository
git clone <repository-url>
cd mcp_server_project
# Create and activate virtual environment (recommended)
conda create -n mcp_env python=3.12
conda activate mcp_env
# Install dependencies
pip install -r requirements.txt
2. Environment Configuration
Create a .env file in the project root:
GEMINI_API_KEY=your_gemini_api_key_here
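The servers can then read this key at startup. A minimal sketch of that step (the helper name `load_gemini_api_key` is illustrative; the actual servers may load the `.env` file differently, e.g. via python-dotenv):

```python
import os

def load_gemini_api_key() -> str:
    """Read GEMINI_API_KEY from the environment, failing fast if it is absent."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set - create a .env file or export the variable"
        )
    return key
```

Failing fast here gives a clear startup error instead of an opaque Gemini authentication failure later.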
3. Run the Application
Open 4 terminals and run the following commands:
Terminal 1: Custom MCP Server 🔧
cd mcp-agentic-ai
python -m custom_mcp.server
Server will start on http://localhost:8000
Terminal 2: Public MCP Server 🌐
cd mcp-agentic-ai
python -m public_mcp.server_public
Server will start on http://localhost:8001
Terminal 3: Streamlit Dashboard 🎨
cd mcp-agentic-ai/streamlit_demo
streamlit run app.py
Dashboard will open at http://localhost:8501
Terminal 4: Test the APIs 🧪
# Test Custom MCP Server
curl -X POST http://localhost:8000/task \
-H "Content-Type: application/json" \
-d '{"input":"Hello World","tools":["sample_tool"]}'
# Test Public MCP Server
curl -X POST http://localhost:8001/ask \
-H "Content-Type: application/json" \
-d '{"query":"What is artificial intelligence?"}'
🎯 Features
🔧 Custom MCP Server Features
- Asynchronous Task Processing - Create tasks with unique IDs
- Tool Integration Framework - Extensible tool system
- Performance Monitoring - Real-time statistics tracking
- Error Handling - Robust error management and logging
🌐 Public MCP Server Features
- Direct AI Queries - Instant responses from Gemini
- Simple API - Easy-to-use REST endpoints
- Statistics Tracking - Performance metrics and analytics
- High Availability - Designed for concurrent requests
🎨 Dashboard Features
- Modern UI Design - Glassmorphism effects and animations
- Real-time Updates - Live statistics and performance metrics
- Responsive Design - Mobile-friendly interface
- Interactive Forms - Easy server selection and input handling
📊 API Documentation
Custom MCP Server (Port 8000)
Create Task
POST /task
Content-Type: application/json
{
  "input": "Your task description",
  "tools": ["sample_tool"]
}
Response: {"task_id": "uuid-string"}
Execute Task
POST /task/{task_id}/run
Response: {
  "task_id": "uuid-string",
  "output": "AI generated response"
}
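The create-then-run flow above can be sketched as an in-memory store keyed by UUID. This is a hypothetical illustration of the two endpoints' behavior (function names and the stubbed model call are assumptions, not the actual `mcp_controller.py` code):

```python
import uuid

# In-memory task store; a sketch of the two-step
# POST /task -> POST /task/{task_id}/run flow.
tasks: dict[str, dict] = {}

def create_task(input_text: str, tools: list[str]) -> dict:
    """Register a task and hand back its unique ID (POST /task)."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"input": input_text, "tools": tools, "output": None}
    return {"task_id": task_id}

def run_task(task_id: str) -> dict:
    """Execute a stored task (POST /task/{task_id}/run); the model call is stubbed."""
    task = tasks[task_id]
    # The real server would run task["input"] through the tools and Gemini here.
    task["output"] = f"AI generated response for: {task['input']}"
    return {"task_id": task_id, "output": task["output"]}
```

Splitting creation from execution is what makes the server asynchronous: a client can create many tasks up front and trigger each run independently.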
Get Statistics
GET /stats
Response: {
  "queries_processed": 42,
  "response_time": 1.23,
  "success_rate": 95.5,
  "uptime": 120.5
}
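A sketch of the counters a `/stats` endpoint like this might maintain (the class and the field computations are illustrative assumptions, not the server's actual implementation):

```python
import time

class ServerStats:
    """Track the counters behind a /stats response: query count,
    average latency, success rate, and uptime."""

    def __init__(self) -> None:
        self.started = time.time()
        self.queries = 0
        self.failures = 0
        self.total_latency = 0.0

    def record(self, latency: float, ok: bool = True) -> None:
        """Record one completed request and its latency in seconds."""
        self.queries += 1
        self.total_latency += latency
        if not ok:
            self.failures += 1

    def snapshot(self) -> dict:
        """Build the /stats payload from the running totals."""
        n = self.queries
        return {
            "queries_processed": n,
            "response_time": round(self.total_latency / n, 2) if n else 0.0,
            "success_rate": round(100 * (n - self.failures) / n, 1) if n else 100.0,
            "uptime": round(time.time() - self.started, 1),
        }
```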
Public MCP Server (Port 8001)
Ask Question
POST /ask
Content-Type: application/json
{
  "query": "Your question here"
}
Response: {"response": "AI generated answer"}
Get Statistics
GET /stats
Response: {
  "queries_processed": 15,
  "response_time": 0.89,
  "success_rate": 100.0,
  "todays_queries": 15
}
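The `todays_queries` field implies a counter that resets when the calendar date rolls over. One way that could be sketched (illustrative only, not the server's actual code):

```python
from datetime import date

class DailyCounter:
    """A query counter that resets itself on the first hit of each new day."""

    def __init__(self) -> None:
        self.day = date.today()
        self.count = 0

    def hit(self) -> int:
        """Count one query, resetting first if the date has changed."""
        today = date.today()
        if today != self.day:
            self.day = today
            self.count = 0
        self.count += 1
        return self.count
```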
🛠️ Project Structure
mcp_server_project/
├── 📄 README.md # This file
├── 📄 requirements.txt # Python dependencies
├── 📄 .env # Environment variables
│
├── 📁 mcp-agentic-ai/ # Main application
│ ├── 📁 custom_mcp/ # Custom MCP server
│ │ ├── 📄 server.py # Flask server (Port 8000)
│ │ ├── 📄 mcp_controller.py # Business logic
│ │ └── 📁 tools/ # Custom tools
│ │ └── 📄 sample_tool.py # Example tool
│ │
│ ├── 📁 public_mcp/ # Public MCP server
│ │ ├── 📄 server_public.py # Flask server (Port 8001)
│ │ └── 📄 agent_config.yaml # AI configuration
│ │
│ └── 📁 streamlit_demo/ # Interactive dashboard
│ └── 📄 app.py # Streamlit app (Port 8501)
│
└── 📁 documentation/ # Comprehensive docs
├── 📄 documentation.md # Main documentation
├── 📄 workflows.md # Mermaid workflows
├── 📄 designs.md # Architecture diagrams
└── 📄 tech-stack.md # Technology details
🔧 Development
Adding Custom Tools
- Create a new tool file in mcp-agentic-ai/custom_mcp/tools/:
# my_custom_tool.py
import logging

def my_custom_tool(text: str) -> str:
    """
    Your custom tool implementation
    """
    logging.info(f"Processing: {text}")
    # Your logic here
    result = text.upper()  # Example transformation
    return result
- Import and use in mcp_controller.py:
from custom_mcp.tools.my_custom_tool import my_custom_tool

# Add to the run method
if "my_custom_tool" in task["tools"]:
    text = my_custom_tool(text)
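As the number of tools grows, the per-tool `if` checks can be replaced by a name-keyed registry so dispatch stays data-driven. A hypothetical sketch (the decorator and function names are assumptions, not part of the project's API):

```python
from typing import Callable

# Registry mapping tool names (as sent in the "tools" field of
# POST /task) to their implementations.
TOOLS: dict[str, Callable[[str], str]] = {}

def register_tool(name: str):
    """Decorator that records a function in the registry under `name`."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("my_custom_tool")
def my_custom_tool(text: str) -> str:
    return text.upper()  # Example transformation

def apply_tools(text: str, tool_names: list[str]) -> str:
    """Run the requested tools over the text in order, skipping unknown names."""
    for name in tool_names:
        fn = TOOLS.get(name)
        if fn:
            text = fn(text)
    return text
```

With this pattern, adding a tool is just defining a decorated function; the controller's run loop never changes.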
Extending the Dashboard
The Streamlit dashboard can be customized by modifying streamlit_demo/app.py:
- Add new UI components
- Implement additional statistics
- Create new visualizations
- Add export functionality
📚 Documentation
Comprehensive documentation is available in the documentation/ folder:
- 📄 documentation.md - Complete project guide (1500+ lines)
- 📄 workflows.md - Mermaid workflow diagrams
- 📄 designs.md - System design diagrams
- 📄 tech-stack.md - Technology details
🎓 Learning Outcomes
By completing this project, you'll learn:
- 🤖 AI Integration - Google Gemini API, prompt engineering
- 🔧 Backend Development - Flask, REST APIs, microservices
- 🎨 Frontend Development - Streamlit, modern CSS, responsive design
- 📊 System Monitoring - Real-time statistics, performance tracking
- 🏗️ Architecture Design - Microservices, event-driven patterns
- 🔐 Security Practices - API security, environment management
🚀 Deployment
Local Development
Follow the Quick Start guide above.
Production Deployment
For production deployment, consider:
- 🐳 Docker - Containerize each service
- ☸️ Kubernetes - Orchestrate containers
- 🔒 HTTPS - SSL/TLS certificates
- 📊 Monitoring - Prometheus, Grafana
- 🗄️ Database - PostgreSQL, Redis
🤝 Contributing
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🆘 Support
- 📚 Documentation - Check the comprehensive docs in /documentation/
- 🐛 Issues - Report bugs via GitHub Issues
- 💬 Discussions - Join GitHub Discussions for questions
- 📧 Contact - Reach out for additional support
🌟 Acknowledgments
- Google Gemini - For providing excellent AI capabilities
- Streamlit - For the amazing dashboard framework
- Flask - For the robust web framework
- Python Community - For the incredible ecosystem
🎯 Next Steps
- 🚀 Run the Application - Follow the Quick Start guide
- 📚 Read Documentation - Explore the comprehensive docs
- 🔧 Customize Tools - Add your own custom tools
- 🎨 Enhance UI - Improve the dashboard design
- 📊 Add Features - Implement new functionality
- 🚀 Deploy - Take it to production
Ready to build the future of AI? Let's get started! 🚀
Built with ❤️ for the AI community. Star ⭐ this repo if you find it helpful!