
MCP Server

A local Model Context Protocol server that stores project context (bookmarks, files, notes) and exposes a REST + WebSocket API so multiple AI models (OpenAI GPT-4o, Anthropic Claude 3, local LM Studio) can share the same information.

Features

  • Store and manage project context (files, bookmarks, notes)
  • Integrate with multiple AI model providers:
    • OpenAI (GPT-4o, GPT-4o-mini)
    • Anthropic Claude
    • LM Studio (local models)
  • WebSocket API for real-time updates
  • Token usage tracking and budget management
  • Async/await architecture with FastAPI
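The token-budget feature above could be modeled roughly as follows. This is an illustrative sketch only; the `TokenBudget` class, its method names, and its semantics are assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class TokenBudget:
    """Hypothetical per-provider token budget tracker."""
    limit: int      # maximum tokens allowed
    used: int = 0   # tokens consumed so far

    def record(self, tokens: int) -> None:
        """Record token usage from a completed model call."""
        if tokens < 0:
            raise ValueError("token count must be non-negative")
        self.used += tokens

    def remaining(self) -> int:
        """Tokens left in the budget (never negative)."""
        return max(self.limit - self.used, 0)

    def exceeded(self) -> bool:
        """True once usage has gone over the limit."""
        return self.used > self.limit
```

A server could keep one such tracker per provider (OpenAI, Anthropic, LM Studio) and refuse new requests once `exceeded()` returns true.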

Project Structure

mcp-server/
├── backend/               # Python backend code
│   ├── app/               # Application code
│   │   ├── api/           # API endpoints
│   │   ├── core/          # Core functionality
│   │   ├── models/        # SQLModel definitions
│   │   └── services/      # Business logic services
│   ├── tests/             # Test suite
│   └── alembic/           # Database migrations
├── docker/                # Docker configuration
└── docs/                  # Documentation

Prerequisites

  • Python 3.12+
  • Poetry for dependency management
  • Optional: LM Studio for local models

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/mcp-server.git
    cd mcp-server
    
  2. Install dependencies with Poetry:

    poetry install
    
  3. Create a .env file with your configuration:

    OPENAI_API_KEY=your_openai_key
    ANTHROPIC_API_KEY=your_anthropic_key
    LM_STUDIO_BASE_URL=http://localhost:1234/v1
    

Running the Server

Start the development server:

cd backend
poetry run uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

The API will be available at http://localhost:8000
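Once the server is up, you can check that it responds. The sketch below probes FastAPI's default `/docs` page with the standard library; a dedicated health endpoint would be project-specific and is not assumed here.

```python
from urllib.request import urlopen


def server_is_up(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the server answers on FastAPI's default /docs page."""
    try:
        with urlopen(f"{base_url}/docs", timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```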

API Documentation

Once the server is running, you can access the interactive API documentation (served by FastAPI by default) at:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.