NEXUS AI
A powerful AI assistant application built with Streamlit, LangGraph, and Google Gemini, featuring an MCP (Model Context Protocol) server that provides specialized tools for math operations and code generation. This project is built and managed using Jenkins with GitHub CI/CD for automated testing, building, and deployment.
🚀 Features
- Interactive Chat Interface: User-friendly Streamlit-based chat application for natural language interactions
- MCP Server Integration: Custom MCP server with specialized tools for enhanced AI capabilities
- Math Operations: Built-in tools for addition, subtraction, multiplication, and division
- Code Generation: Advanced Python and web development code generation powered by Google Gemini
- LangGraph Workflow: Orchestrates AI model calls and tool executions using LangGraph
- Google Gemini Integration: Leverages Gemini 2.5 Flash Lite for high-quality AI responses
- Asynchronous Processing: Efficient handling of concurrent operations using asyncio
- Customizable Themes: Dark and light themes with futuristic UI animations
- Docker Support: Containerized deployment for easy scalability
- CI/CD Pipeline: Automated build and deployment via Jenkins and GitHub Actions
- AWS Hosting: Deployed on Amazon Web Services for scalable and reliable cloud infrastructure
🏗️ Architecture
The MCP AI Assistant follows a modular, microservices-inspired architecture designed for scalability, maintainability, and seamless AI integration. The system employs a layered approach that separates concerns between user interface, business logic orchestration, and external service integrations.
System Overview
```
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│  Streamlit UI   │◄────►│    LangGraph     │◄────►│   MCP Server    │
│   (Frontend)    │      │   Orchestrator   │      │     (Tools)     │
└─────────────────┘      └──────────────────┘      └─────────────────┘
         │                        │                        │
         └────────────────────────┼────────────────────────┘
                                  ▼
                      ┌─────────────────────┐
                      │  Google Gemini AI   │
                      │     (LLM Model)     │
                      └─────────────────────┘
```
Core Components
1. Streamlit Frontend (app.py)
- Role: User interface and interaction management
- Features:
- Responsive chat interface with dark/light theme support
- Real-time message handling with asynchronous processing
- Custom CSS animations and particle effects for enhanced UX
- Session state management for conversation persistence
- Mobile-first responsive design with adaptive layouts
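A minimal sketch of how this chat loop might be wired up, assuming a `run_agent` helper exported by `client_langraph.py` (the helper name is an assumption, not the project's actual API):

```python
# Hypothetical sketch of the app.py chat loop; run_agent is an assumed helper.
import asyncio

import nest_asyncio
import streamlit as st

from client_langraph import run_agent  # assumed entry point to the LangGraph pipeline

nest_asyncio.apply()  # allows asyncio.run inside Streamlit's running event loop

st.title("NEXUS AI")

if "messages" not in st.session_state:
    st.session_state.messages = []  # session state survives Streamlit reruns

# Replay the conversation so far
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask me anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    reply = asyncio.run(run_agent(prompt))  # async LangGraph call from sync context

    st.session_state.messages.append({"role": "assistant", "content": reply})
    st.chat_message("assistant").write(reply)
```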
2. LangGraph Orchestrator (client_langraph.py)
- Role: Workflow engine and AI model coordination
- Features:
- Graph-based state management for complex conversation flows
- Conditional routing between direct AI responses and tool execution
- Asynchronous processing with asyncio for concurrent operations
- Tool binding and execution orchestration
- Error handling and retry logic for robust operation
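A minimal sketch of this conditional routing using LangGraph's prebuilt helpers; the stand-in `add` tool and the node names are illustrative, not the project's actual implementation:

```python
# Illustrative LangGraph wiring; assumes the Gemini API key is configured (see Setup).
from typing import Annotated

from typing_extensions import TypedDict

from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]  # add_messages appends turns to history

@tool
def add(a: float, b: float) -> float:
    """Add two numbers."""  # stand-in for the MCP-provided tools
    return a + b

tools = [add]
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite").bind_tools(tools)

def call_model(state: State):
    # Gemini either answers directly or emits tool calls
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode(tools))  # executes any requested tool calls
builder.add_edge(START, "model")
# Route to "tools" when the last message contains tool calls, otherwise end the turn
builder.add_conditional_edges("model", tools_condition)
builder.add_edge("tools", "model")  # feed tool results back to the model
graph = builder.compile()
```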
3. MCP Server (MCP_server.py)
- Role: Specialized tool provider using Model Context Protocol
- Tools Available:
  - Math Operations: `add`, `subtract`, `multiply`, `divide` for precise arithmetic calculations
  - Code Generation: `code_generation` (Python) and `webcode_generation` (HTML/CSS/JS) for AI-powered code creation with comments and imports
  - General Queries: `normal_query` as a fallback for non-specialized conversations
- Features:
- FastMCP framework for efficient tool execution
- Input validation and error handling
- Asynchronous tool processing for performance
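A minimal sketch of what the FastMCP tool definitions in MCP_server.py could look like (only two of the tools are shown, with simplified bodies; the server name is an assumption):

```python
# Hypothetical sketch of MCP_server.py; the server name is an assumption.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("nexus-tools")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b

@mcp.tool()
def divide(a: float, b: float) -> float:
    """Return a divided by b, with input validation."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

if __name__ == "__main__":
    # stdio transport lets the LangGraph client spawn this server as a subprocess
    mcp.run(transport="stdio")
```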
4. Google Gemini Integration
- Role: Large Language Model backend
- Configuration:
- Gemini 2.5 Flash Lite model for optimal performance/cost balance
- Configurable temperature, token limits, and retry mechanisms
- Secure API key management via environment variables
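A hedged example of this configuration, reading the `google_api_key` variable from the `.env` file described under Setup (the temperature, token, and retry values are illustrative):

```python
# Illustrative model configuration; the specific values are assumptions.
import os

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # loads google_api_key from .env into the environment

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash-lite",
    google_api_key=os.getenv("google_api_key"),
    temperature=0.2,         # illustrative value
    max_output_tokens=2048,  # illustrative value
    max_retries=2,           # illustrative value
)
```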
Data Flow
- User Input → Streamlit UI captures and displays messages
- Processing → LangGraph evaluates context and determines tool needs
- Tool Execution → MCP Server provides specialized capabilities when required
- AI Response → Google Gemini generates contextual responses
- Output → Results flow back through LangGraph to Streamlit for display
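A sketch of how the tools in step 3 might be loaded into LangGraph via langchain-mcp-adapters; the server label and launch command are assumptions:

```python
# Hypothetical client-side wiring; adjust the path and transport to your layout.
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient

async def load_tools():
    client = MultiServerMCPClient(
        {
            "nexus": {  # arbitrary server label
                "command": "python",
                "args": ["MCP_server.py"],
                "transport": "stdio",
            }
        }
    )
    # Returns LangChain-compatible wrappers, ready for llm.bind_tools(tools)
    return await client.get_tools()

tools = asyncio.run(load_tools())
```

Depending on the langchain-mcp-adapters version, the client may instead need to be used as an async context manager.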
CI/CD and Deployment Overview
```
┌─────────────┐     ┌─────────────┐     ┌─────────────────┐     ┌─────────────┐
│   GitHub    │────►│   Jenkins   │────►│ Docker Registry │────►│     AWS     │
│ Repository  │     │  Pipeline   │     │                 │     │ Deployment  │
└─────────────┘     └─────────────┘     └─────────────────┘     └─────────────┘
       ▲                   │                     │                     │
       │                   ▼                     ▼                     ▼
       └───── Test Results ────── Build Artifacts ──────── Production ────────
```
Deployment Architecture
- Containerization: Docker-based deployment for consistent environments across development, staging, and production
- CI/CD Pipeline:
- Jenkins: Orchestrates the entire CI/CD process with automated triggers from GitHub
- GitHub Integration: Webhooks trigger Jenkins pipelines on code pushes and pull requests
- Automated Testing: Unit tests, integration tests, and security scans run in isolated containers
- Build Process: Docker images are built, tagged, and pushed to secure registry
- Deployment: Automated rollout to AWS with blue-green deployment strategy
- AWS Infrastructure:
- ECS/ECR: Container orchestration and registry for scalable container management
- Load Balancing: Application Load Balancer distributes traffic across multiple instances
- Auto Scaling: EC2 Auto Scaling groups ensure high availability and cost optimization
- RDS: Managed database services for data persistence (if needed)
- CloudWatch: Monitoring and logging for performance metrics and alerts
- VPC: Secure network isolation with proper security groups and subnets
- Environment Management: Secure configuration using AWS Systems Manager Parameter Store and `.env` files
- Security: IAM roles, encrypted secrets, and compliance with AWS security best practices
📋 Prerequisites
- Python 3.8+
- Docker (optional, for containerized deployment)
- Google Gemini API key
🛠️ Installation
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd MCP_STREAMLIT
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
⚙️ Setup
1. Environment Variables: Create a `.env` file in the root directory:

   ```
   google_api_key=your_google_gemini_api_key_here
   ```

2. API Key Setup:
   - Obtain a Google Gemini API key from Google AI Studio
   - Add the key to your `.env` file
🚀 Usage
Running the Streamlit App
```bash
streamlit run app.py
```
This will start the web application at http://localhost:8501.
Using the MCP Server Directly
```bash
python MCP_server.py
```
Testing with the Client Script
```bash
python client_langraph.py
```
Docker Deployment
Build and run the application using Docker:
```bash
docker build -t mcp-ai-assistant .
docker run -p 8501:8501 --env-file .env mcp-ai-assistant
```
🔄 CI/CD Pipeline
This project utilizes Jenkins integrated with GitHub for continuous integration and deployment:
- Automated Testing: Jenkins runs unit tests and integration tests on every push to the main branch
- Build Process: Docker images are built and pushed to a container registry
- Deployment: Automated deployment to staging and production environments
- Monitoring: Pipeline status and logs are monitored via Jenkins dashboard
To set up the CI/CD pipeline:
- Configure Jenkins with GitHub webhooks for automatic triggering
- Set up necessary credentials and environment variables in Jenkins
- Use the provided Jenkinsfile for pipeline configuration
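The repository ships its own Jenkinsfile; the following is only a hedged, minimal sketch of what such a pipeline could contain (stage contents, image tags, and the registry placeholder are assumptions):

```groovy
// Hypothetical minimal pipeline; adapt stages, credentials, and registry to your setup.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'pip install -r requirements.txt'
                sh 'pytest'  // assumes tests are runnable via pytest
            }
        }
        stage('Build') {
            steps {
                // BUILD_NUMBER is expanded by the shell from Jenkins' environment
                sh 'docker build -t <your-registry>/mcp-ai-assistant:${BUILD_NUMBER} .'
            }
        }
        stage('Deploy') {
            steps {
                sh 'docker push <your-registry>/mcp-ai-assistant:${BUILD_NUMBER}'
                // trigger the AWS (e.g. ECS) rollout here
            }
        }
    }
}
```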
💡 Examples
Math Operations
- "What is 15 + 27?"
- "Calculate 100 divided by 5"
- "Multiply 8 by 9 "
Code Generation
- "Generate a Python function to sort a list"
- "Create a Flask web application for a blog"
- "Write HTML and CSS for a responsive navigation bar"
General Queries
- "Explain how recursion works in programming"
- "What are the benefits of using virtual environments?"
📦 Dependencies
- `streamlit`: Web application framework
- `langchain-google-genai`: Google Gemini integration
- `langchain-mcp-adapters`: MCP client adapters
- `langgraph`: Workflow orchestration
- `python-dotenv`: Environment variable management
- `mcp`: Model Context Protocol library
- `nest-asyncio`: Asyncio compatibility
🤝 Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Google Gemini for AI capabilities
- LangChain and LangGraph for orchestration
- Streamlit for the web interface
- MCP community for the protocol specification
- Jenkins and GitHub for CI/CD infrastructure
📞 Support
If you encounter any issues or have questions, please open an issue on the GitHub repository.