# LM Studio MCP Proxy Server
A Model Context Protocol (MCP) server that bridges Cursor IDE with LM Studio, enabling seamless integration of local AI models with advanced coding capabilities. This server specifically fixes the "The model does not work with your current plan or API key" error in Cursor.
## Features

- **OpenAI API Compatibility**: Full OpenAI API specification compliance
- **MCP Protocol Support**: Complete MCP implementation with tools and resources
- **Custom Model Integration**: Seamless integration with LM Studio models
- **Error Resolution**: Fixes Cursor's API key validation errors
- **Health Monitoring**: Built-in health checks and diagnostics
- **Dual Mode**: HTTP server and STDIO MCP server support
## Problem Solved
This server specifically addresses the "The model does not work with your current plan or API key" error that occurs when trying to use custom models in Cursor IDE. It provides a proper OpenAI-compatible endpoint that Cursor can use without validation errors.
## Prerequisites
- Rust (latest stable version)
- LM Studio running on port 1234
- Cursor IDE (latest version)
- DeepSeek R1 model loaded in LM Studio
## Quick Start
### Method 1: Custom OpenAI Endpoint (Recommended - Fixes the API Key Error)

1. Start the server:

   ```bash
   cargo run
   ```

2. Configure Cursor:

   - Open Cursor Settings → Models
   - Scroll to the "OpenAI API Key" section
   - Enable "Override OpenAI Base URL"
   - Set the base URL to `http://127.0.0.1:3031/v1`
   - Set the API key to `sk-dummy-key-for-development`
   - Click "Verify" (if verification fails, see the terminal check below)
   - Disable all other models
   - Enable only `deepseek/deepseek-r1-0528-qwen3-8b`

3. Test in Cursor:

   - Open a new chat
   - Select the DeepSeek model
   - Send a message; the API key error should be gone
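The server accepts any API key in development mode, so the dummy key above is enough. You can also sanity-check the proxy from a terminal before pointing Cursor at it:

```bash
# Should list the proxy's models, including the DeepSeek entry
curl -H "Authorization: Bearer sk-dummy-key-for-development" \
  http://127.0.0.1:3031/v1/models
```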
### Method 2: MCP Server Integration

1. Start the MCP server:

   ```bash
   cargo run -- --mcp
   ```

2. Configure Cursor MCP:

   - Open Cursor Settings → MCP
   - Click "Add new MCP server"
   - Use the configuration from `cursor-mcp-config.json`
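For reference, a minimal `cursor-mcp-config.json` entry might look like the sketch below; the server name and the use of `cargo run` as the launch command are assumptions, so adapt them to your setup (for example, point `command` at the release binary instead):

```json
{
  "mcpServers": {
    "lm-studio-proxy": {
      "command": "cargo",
      "args": ["run", "--", "--mcp"]
    }
  }
}
```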
## Configuration

### Server Configuration

The server supports two modes:

#### HTTP Server Mode (Default)

```bash
cargo run
```

- Port: 3031
- Health check: `http://127.0.0.1:3031/health`
- OpenAI endpoint: `http://127.0.0.1:3031/v1`
#### MCP Server Mode

```bash
cargo run -- --mcp
```

- Transport: STDIO
- Protocol: JSON-RPC 2.0
- Features: tools, resources, prompts
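In this mode the server exchanges JSON-RPC 2.0 messages over stdin/stdout. As a rough illustration of the wire format (the capabilities this particular server advertises may differ), an MCP client opens the session with an `initialize` request like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "cursor", "version": "1.0" }
  }
}
```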
### Supported Models

The server supports these models by default:

- `deepseek/deepseek-r1-0528-qwen3-8b`
- `gpt-3.5-turbo`
- `gpt-4`
- `gpt-4-turbo`
- `gpt-4o`
- `gpt-4o-mini`

To add custom models, edit the `SUPPORTED_MODELS` constant in `src/main.rs`, as sketched below.
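The exact type of that constant depends on the implementation; assuming a plain slice of model ID strings, the edit might look like this (the custom ID is a placeholder):

```rust
// src/main.rs: model IDs the proxy accepts and forwards to LM Studio.
// Illustrative sketch; match the actual constant's type in your checkout.
const SUPPORTED_MODELS: &[&str] = &[
    "deepseek/deepseek-r1-0528-qwen3-8b",
    "gpt-3.5-turbo",
    "gpt-4",
    "gpt-4-turbo",
    "gpt-4o",
    "gpt-4o-mini",
    "my-org/my-custom-model", // your addition
];
```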
## API Endpoints

### Health Check

```bash
curl http://127.0.0.1:3031/health
```

### List Models

```bash
curl http://127.0.0.1:3031/v1/models
```

### Get Model Info

```bash
curl http://127.0.0.1:3031/v1/models/deepseek/deepseek-r1-0528-qwen3-8b
```

### Chat Completion

```bash
curl -X POST http://127.0.0.1:3031/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-dummy-key-for-development" \
  -d '{
    "model": "deepseek/deepseek-r1-0528-qwen3-8b",
    "messages": [
      {
        "role": "user",
        "content": "Hello, can you help me with coding?"
      }
    ]
  }'
```
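Since the server follows the OpenAI chat completions specification, the response should have the familiar shape below (IDs, token counts, and the message text are illustrative):

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "deepseek/deepseek-r1-0528-qwen3-8b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Of course! What are you working on?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 9,
    "total_tokens": 21
  }
}
```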
## Troubleshooting

### Common Issues

1. **"The model does not work with your current plan or API key"**

   Solution: use Method 1 (Custom OpenAI Endpoint) instead of MCP integration.

2. **Port 3031 already in use**

   Solution (Windows):

   ```bash
   # Find the process using port 3031
   netstat -ano | findstr 3031
   # Kill the process (replace <PID> with the actual process ID)
   taskkill /PID <PID> /F
   ```

   Alternatively, use a different port by modifying `src/main.rs`.

3. **LM Studio not responding**

   Solution:

   ```bash
   # Check whether LM Studio's API is reachable
   curl http://127.0.0.1:1234/v1/models
   ```

   Restart LM Studio and ensure the model is loaded.

4. **Cursor not connecting**

   Solution:

   - Verify the base URL: `http://127.0.0.1:3031/v1`
   - Use any API key (e.g., `sk-dummy-key-for-development`)
   - Disable all other models in Cursor
   - Restart Cursor after configuration changes
### Debug Mode

Enable debug logging:

```bash
RUST_LOG=debug cargo run
```
## Project Structure

```
MCP_Server/
├── src/
│   └── main.rs            # Main server implementation
├── Cargo.toml             # Rust dependencies
├── README.md              # This file
├── CURSOR_SETUP_GUIDE.md  # Detailed setup instructions
├── cursor-mcp-config.json # Configuration examples
├── mcp.json               # MCP server configuration
└── .gitignore             # Git ignore rules
```
## Development

### Building

```bash
cargo build
cargo build --release
```

### Testing

```bash
cargo test
cargo check
```

### Running in Development

```bash
# With debug logging
RUST_LOG=debug cargo run
# With a specific log level
RUST_LOG=info cargo run
```
## Deployment

### Local Development

```bash
cargo run
```

### Production

```bash
cargo build --release
./target/release/MCP_Server
```

### Docker (Future)

```dockerfile
FROM rust:1.70 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
COPY --from=builder /app/target/release/MCP_Server /usr/local/bin/
EXPOSE 3031
CMD ["MCP_Server"]
```
## Security Considerations

### Development Mode

- Accepts any API key for convenience
- No authentication required
- Suitable for local development only

### Production Mode

- Implement proper API key validation (see the sketch below)
- Add rate limiting
- Use HTTPS with certificates
- Configure proper CORS policies
- Add monitoring and logging
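As a minimal sketch of the first point, assuming the expected key is loaded from an environment variable (the variable name and function are hypothetical, not part of the current code):

```rust
/// Returns true if the `Authorization` header carries the expected API key.
/// A production version should also use a constant-time comparison
/// (e.g. the `subtle` crate) to avoid timing side channels.
fn validate_api_key(auth_header: Option<&str>, expected: &str) -> bool {
    !expected.is_empty()
        && auth_header
            .and_then(|h| h.strip_prefix("Bearer "))
            .map(|key| key == expected)
            .unwrap_or(false)
}

fn main() {
    // Hypothetical usage: the key is set via MCP_API_KEY at startup.
    let expected = std::env::var("MCP_API_KEY").unwrap_or_default();
    let ok = validate_api_key(Some("Bearer sk-dummy-key-for-development"), &expected);
    println!("authorized: {ok}");
}
```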
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- LM Studio for providing the local AI model infrastructure
- Cursor IDE for the excellent development environment
- Model Context Protocol for the standardization efforts
- OpenAI for the API specification that made this possible
## Support

If you encounter issues:

- Check the troubleshooting section above
- Review `CURSOR_SETUP_GUIDE.md` for detailed instructions
- Check the server logs for error messages
- Verify LM Studio is running and accessible
- Ensure the model is properly loaded in LM Studio
## Changelog

### v1.0.0

- ✅ Initial release
- ✅ OpenAI API compatibility
- ✅ MCP protocol support
- ✅ Custom model integration
- ✅ API key error resolution
- ✅ Health monitoring
- ✅ Comprehensive documentation
Happy coding with your local AI models!