MCP Server
MCP Server is a proxy service for OpenAI's API built with Quarkus. It forwards requests to OpenAI while adding API key authentication and injecting a system prompt.
Features
- Proxies requests to OpenAI's chat completions API
- Adds authentication via API key
- Automatically injects a system prompt into all requests
- Built with Quarkus for fast startup and low memory usage
Architecture
The application consists of the following components:
- MCPController: Handles incoming requests and forwards them to OpenAI
- OpenAIClient: Interface for communicating with OpenAI's API
- ApiKeyFilter: Validates API keys for incoming requests (see the sketch after this list)
- ChatRequest: Data model for chat completion requests
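The project's source isn't reproduced here, so the following is only a minimal sketch of how ApiKeyFilter could be implemented as a JAX-RS ContainerRequestFilter in Quarkus. The package name, the mcp.api.key property, and the jakarta.* namespace are illustrative assumptions, not the repository's actual code.

  // Minimal sketch of an API key filter (names and property are assumptions)
  package com.example.mcp;

  import jakarta.ws.rs.container.ContainerRequestContext;
  import jakarta.ws.rs.container.ContainerRequestFilter;
  import jakarta.ws.rs.core.HttpHeaders;
  import jakarta.ws.rs.core.Response;
  import jakarta.ws.rs.ext.Provider;
  import org.eclipse.microprofile.config.inject.ConfigProperty;

  @Provider
  public class ApiKeyFilter implements ContainerRequestFilter {

      // Expected Authorization header value, e.g. "Bearer my-secret-key";
      // the property name "mcp.api.key" is an assumption for this sketch.
      @ConfigProperty(name = "mcp.api.key")
      String expectedKey;

      @Override
      public void filter(ContainerRequestContext requestContext) {
          String header = requestContext.getHeaderString(HttpHeaders.AUTHORIZATION);
          // Reject any request whose Authorization header does not match the configured key.
          if (header == null || !header.equals(expectedKey)) {
              requestContext.abortWith(
                  Response.status(Response.Status.UNAUTHORIZED).build());
          }
      }
  }

In a filter like this, the configured value would include the Bearer prefix so that it matches the Authorization header shown in the Usage section.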
Prerequisites
- Java 17 or higher
- Maven 3.8.1 or higher
- Docker (optional, for containerized deployment)
Setup and Installation
Local Development
- Clone the repository:
  git clone https://github.com/hamidkhanjani/mcp-server.git
  cd mcp-server
- Configure your OpenAI API key in src/main/resources/application.properties:
  openai.api.key=Bearer sk-your-openai-api-key
- Build the application:
  ./mvnw clean package
- Run the application in development mode:
  ./mvnw quarkus:dev
Production Deployment
- Build the application:
  ./mvnw clean package
- Run the application:
  java -jar target/quarkus-app/quarkus-run.jar
Docker Deployment
- Build the Docker image:
  docker build -t mcp-server .
- Run the container:
  docker run -p 8080:8080 -e OPENAI_API_KEY="Bearer sk-your-openai-api-key" mcp-server
Usage
Authentication
All requests to the MCP Server require authentication with an API key. Include the API key in the Authorization header:
Authorization: Bearer my-secret-key
Making Requests
Send POST requests to the /v1/chat/completions endpoint with a JSON body:
curl -X POST \
http://localhost:8080/v1/chat/completions \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer my-secret-key' \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
],
"temperature": 0.7,
"max_tokens": 150
}'
The server will automatically inject a system prompt ("You are a helpful assistant.") at the beginning of the messages array before forwarding the request to OpenAI.
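For the request above, the body forwarded to OpenAI would therefore look like this:
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 150
}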
Configuration
The application can be configured through the application.properties file:
# OpenAI API URL
quarkus.rest-client.openai-client.url=https://api.openai.com
# OpenAI API Key (replace with your actual API key)
openai.api.key=Bearer sk-your-openai-api-key
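Because Quarkus reads configuration through MicroProfile Config, openai.api.key can also be supplied as the OPENAI_API_KEY environment variable instead of editing the file (this is how the Docker example above passes the key):
export OPENAI_API_KEY="Bearer sk-your-openai-api-key"
java -jar target/quarkus-app/quarkus-run.jar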
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.