LLM-MCP-Server
A Model Context Protocol (MCP) server that exposes LLM chat capabilities as MCP tools, with robust conversation history management. Supported providers are Azure OpenAI and Google Gemini.
Features
- Conversation History Management: Maintains robust conversation context across multiple exchanges
- Multiple LLM Providers: Support for Azure OpenAI and Google Gemini
- Session Management: Create, manage, and clear conversation sessions
- Tool Integration: Seamless integration with MCP clients
- Error Handling: Comprehensive error handling and logging
- Provider Testing: Built-in connectivity testing for LLM providers
Installation
- Clone this repository:
git clone <repository-url>
cd LLM_MCP_SERVER
- Install dependencies:
npm install
- Set up environment variables:
cp .env.example .env
- Edit the .env file with your API keys and configuration
- Build the project:
npm run build
Configuration
Create a .env file in the root directory with the following variables:
```
# Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4

# Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key

# Server Configuration
DEFAULT_LLM_PROVIDER=azure-openai  # or 'gemini'
MAX_CONVERSATION_HISTORY=20        # Maximum messages to keep in history
```
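As a rough sketch, config.ts might read these variables along the following lines (the exported shape and fallback defaults here are assumptions for illustration, not the actual implementation):

```typescript
// config.ts (illustrative sketch; the real file may differ)
import "dotenv/config"; // loads .env into process.env

export const config = {
  azureOpenAI: {
    apiKey: process.env.AZURE_OPENAI_API_KEY ?? "",
    endpoint: process.env.AZURE_OPENAI_ENDPOINT ?? "",
    deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME ?? "gpt-4",
  },
  gemini: {
    apiKey: process.env.GEMINI_API_KEY ?? "",
  },
  defaultProvider: process.env.DEFAULT_LLM_PROVIDER ?? "azure-openai",
  maxConversationHistory: Number(process.env.MAX_CONVERSATION_HISTORY ?? 20),
};
```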
Usage
Starting the Server
For development:
npm run dev
For production:
npm start
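To call the tools below from an MCP client such as Claude Desktop, register the server in the client's configuration file. A minimal sketch, assuming the compiled entry point is build/index.js (adjust the server name and path to your setup):

```json
{
  "mcpServers": {
    "llm-mcp-server": {
      "command": "node",
      "args": ["/path/to/LLM_MCP_SERVER/build/index.js"]
    }
  }
}
```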
MCP Tools
The server provides the following tools:
1. chat
Chat with an LLM with conversation history support.
Parameters:
- message (required): The message to send to the LLM
- sessionId (optional): Session ID to maintain conversation history
- provider (optional): LLM provider to use ('azure-openai' or 'gemini')
- systemPrompt (optional): System prompt to set conversation context
Example:
```json
{
  "message": "Hello, how are you?",
  "sessionId": "my-session",
  "provider": "azure-openai",
  "systemPrompt": "You are a helpful assistant."
}
```
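Because history is keyed by session, a follow-up call that reuses the same sessionId continues the conversation with the earlier exchange as context:

```json
{
  "message": "Can you rephrase your last answer more formally?",
  "sessionId": "my-session"
}
```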
2. clear_session
Clear a conversation session and its history.
Parameters:
- sessionId (required): The session ID to clear
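Example (clearing the session used in the chat example above):

```json
{
  "sessionId": "my-session"
}
```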
3. list_sessions
List all active conversation sessions.
Parameters: None
4. test_provider
Test connectivity to a specific LLM provider.
Parameters:
- provider (required): LLM provider to test ('azure-openai' or 'gemini')
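Example (the accepted values match the chat tool's provider parameter):

```json
{
  "provider": "gemini"
}
```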
Architecture
```
src/
├── index.ts                     # Main MCP server entry point
├── types.ts                     # TypeScript type definitions
├── config.ts                    # Configuration management
├── conversation-manager.ts      # Conversation history management
├── llm-service.ts               # Main LLM service orchestrator
└── providers/
    ├── azure-openai-provider.ts # Azure OpenAI implementation
    └── gemini-provider.ts       # Google Gemini implementation
```
Key Components
- LLMMCPServer: Main MCP server class that handles protocol communication
- LLMService: Orchestrates LLM providers and conversation management
- ConversationManager: Manages conversation sessions and history
- Providers: Individual LLM provider implementations
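The provider layer is the natural extension point. As an illustrative sketch, a shared provider contract could look like the following (the interface and method names are assumptions, not the actual code in src/providers/):

```typescript
// Illustrative provider contract; names and signatures are assumptions.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
  timestamp: Date;
}

interface LLMProvider {
  // Provider identifier, e.g. "azure-openai" or "gemini"
  readonly name: string;
  // Send the accumulated conversation and return the assistant's reply
  chat(messages: ChatMessage[]): Promise<string>;
  // Lightweight connectivity check, as used by the test_provider tool
  testConnection(): Promise<boolean>;
}
```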
Conversation History Management
The server maintains robust conversation history with the following features:
- Session-based: Each conversation is isolated by session ID
- Automatic Trimming: Conversations are trimmed to maintain performance
- System Message Preservation: System messages are preserved during trimming
- Timestamp Tracking: All messages include timestamps
- Memory Management: Old conversations can be cleaned up automatically
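The trimming rule can be summarized in a short sketch (the message shape and function name are illustrative; the limit corresponds to the MAX_CONVERSATION_HISTORY setting):

```typescript
// Illustrative sketch: system messages are always preserved; the oldest
// non-system messages are dropped once the history exceeds the limit.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
  timestamp: Date;
}

function trimHistory(messages: Message[], maxHistory: number): Message[] {
  if (messages.length <= maxHistory) return messages;
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const keep = Math.max(0, maxHistory - system.length);
  // Keep only the most recent non-system messages.
  return [...system, ...rest.slice(-keep)];
}
```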
Error Handling
The server includes comprehensive error handling:
- Provider-specific error messages
- Connection testing capabilities
- Graceful fallbacks
- Detailed logging
Development
Scripts
- npm run build: Build the TypeScript project
- npm run dev: Run in development mode with auto-reload
- npm start: Start the production server
- npm run clean: Clean build artifacts
Project Structure
The project follows a modular architecture with clear separation of concerns:
- Configuration management is centralized
- Each LLM provider is isolated
- Conversation management is provider-agnostic
- MCP protocol handling is separated from business logic
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
License
MIT
Support
For issues and support, please check the documentation or create an issue in the repository.