claude-mcp-server-two

🧠 Meta-Optimized Hybrid Reasoning Framework

by Ryan Oates
License: Dual, AGPLv3 + Peer Production License (PPL)
Contact: ryan_oates@my.cuesta.edu


✨ Purpose

This framework is part of an interdisciplinary vision to combine symbolic rigor, neural adaptability, and cognitive-aligned reasoning. It reflects years of integrated work at the intersection of computer science, biopsychology, and meta-epistemology.

It is not just software. It is a cognitive architecture, and its use is ethically bounded.


πŸ” Licensing Model

This repository is licensed under a hybrid model to balance openness, reciprocity, and authorship protection.

1. For Commons-Aligned Users (students, researchers, cooperatives)

Use it under the Peer Production License (PPL). You can:

  • Study, adapt, and share it freely
  • Use it in academic or nonprofit research
  • Collaborate openly within the digital commons

2. For Public Use and Transparency

The AGPLv3 license guarantees:

  • Network-based deployments must share modifications
  • Derivatives must remain open source
  • Attribution is mandatory

3. For Commercial or Extractive Use

You must not use this work if you are a:

  • For-profit AI company
  • Venture-backed foundation
  • Closed-source platform

...unless you negotiate a commercial license directly.

📚 Attribution

This framework originated in:

Meta-Optimization in Hybrid Theorem Proving: Cognitive-Constrained Reasoning Framework, Ryan Oates (2025)

DOI: [Insert Zenodo/ArXiv link here]
Git commit hash of original release: a17c3f9...
This project’s cognitive-theoretic roots come from studies in:

  • Flow state modeling
  • Symbolic logic systems
  • Jungian epistemological structures

🤝 Community Contributor Agreement

If you are a student, educator, or aligned research group and want to contribute:

  1. Fork this repo
  2. Acknowledge the author and original framework
  3. Use the Contributors.md file to describe your adaptation
  4. Optional: Sign and return the Community Contributor Agreement to join the federated research network

🚫 What You May Not Do

  • Integrate this system into closed-source LLM deployments
  • Resell it or offer derivative products without explicit approval
  • Strip author tags or alter authorship metadata

📬 Contact

Want to collaborate, cite properly, or license commercially?
Reach out: ryan_oates@my.cuesta.edu

Claude MCP Server

This project implements a server adhering to the Model Context Protocol (MCP), providing a standardized way to integrate AI tools and models. It supports multiple AI providers (OpenAI, Anthropic, and Google) and offers a range of built-in tools for code analysis, web interaction, and more.

Features

  • MCP Compliance: Designed to work with MCP clients, enabling seamless tool integration.
  • Multi-Provider Support: Utilizes OpenAI, Anthropic, and Google's Gemini models. Configure the default provider and API keys via environment variables.
  • Extensible Tooling: Includes a framework for easily adding and managing custom tools (a sketch of a tool module follows this list). Current tools include:
    • Code Generation (llm_code_generate)
    • Web Requests (web_request)
    • Web Scraping (web_scrape)
    • Code Analysis (code_analyze)
    • Code Documentation (code_document)
    • Code Improvement Suggestions (code_improve)
  • Node.js and Python Servers: Includes both Node.js (primary) and Python (FastAPI) server implementations.
  • Containerization: Docker support for easy deployment and development.
  • Testing: Integrated with Jest (JavaScript) and pytest (Python) for comprehensive testing.
  • Linting and Formatting: Uses ESLint and Prettier to maintain code quality.
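
Adding a new tool means dropping a module into src/tools/ and registering it with the server. As a rough sketch only (the property names, module shape, and registration mechanism shown here are assumptions for illustration, not the project's actual API), a minimal tool module might look like this:

    // src/tools/word-count.js -- hypothetical example tool; names and shape are assumptions
    export const wordCountTool = {
      name: 'word_count',                 // identifier an MCP client would use to invoke the tool
      description: 'Counts the words in a block of text',
      parameters: {
        type: 'object',
        properties: {
          text: { type: 'string', description: 'Text to analyze' },
        },
        required: ['text'],
      },
      // Called by the server when a client invokes the tool.
      async execute({ text }) {
        const words = text.trim().split(/\s+/).filter(Boolean);
        return { count: words.length };
      },
    };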

Project Structure

.
├── config/             # Configuration files
│   ├── .env            # Environment variables (example provided)
│   ├── dockerfile      # Docker configuration
│   └── docker-compose.yaml
├── src/                # Source code
│   ├── api/            # FastAPI server (Python)
│   ├── core/           # Core MCP logic (Python)
│   ├── server/         # Node.js server implementations
│   ├── tools/          # Individual tool implementations (JavaScript)
│   └── utils/          # Utility functions (JavaScript)
├── tests/              # Test files
└── data/               # Data directory (used by Python server)
    └── monitoring/     # Performance monitoring data

Setup and Installation

  1. Clone the repository:

    git clone <repository_url>
    cd claude-mcp-server
    
  2. Environment Variables:

    Create a .env file in the claude-mcp-server directory (and optionally in config/) by copying the .env.example file:

    cp .env.example .env
    

    Then, fill in your API keys for OpenAI, Anthropic, and Google:

    # .env
    NODE_ENV=development
    PORT=3000
    DEFAULT_AI_PROVIDER=anthropic  # or openai, google
    OPENAI_API_KEY=your-openai-key
    ANTHROPIC_API_KEY=your-anthropic-key
    GOOGLE_API_KEY=your-google-api-key
    
  3. Node.js Server (Recommended):

    • Install Dependencies:

      npm install
      
    • Run in Development Mode:

      npm run dev  # Uses simple-server.js
      # OR
      npm run dev:custom # Uses custom-server.js
      

      The --watch flag automatically restarts the server on code changes.

    • Run in Production Mode:

      npm start # Uses simple-server.js
      
    • Run Tests:

      npm test
      npm run test:watch  # Watch mode
      npm run test:coverage # Generate coverage report
      
    • Linting and Formatting:

      npm run lint
      npm run lint:fix  # Automatically fix linting errors
      npm run format
      npm run format:check
      
  4. Python Server (FastAPI):

    • Install Dependencies (from claude-mcp-server directory):
      pip install -r config/requirements.txt
      
    • Run the Server:
      npm run start:python
      
    • Note: The Python server might be less actively maintained than the Node.js server.

Docker Usage

Docker is the recommended way to run the claude-mcp-server, especially for production deployments. It provides a consistent and isolated environment.

  1. Create a .dockerignore file (Recommended):

    Create a file named .dockerignore in the claude-mcp-server directory with the following content:

    node_modules
    .git
    .DS_Store
    npm-debug.log
    Dockerfile
    docker-compose.yaml
    .env
    tests/
    
  2. Build the Docker Image:

    npm run docker:build
    # or, equivalently:
    # docker-compose build
    
  3. Run the Container (Development Mode - with Hot Reloading):

    npm run docker:run:dev
    # or, equivalently:
    # docker-compose -f config/docker-compose.yaml up --build
    

    This command uses the config/docker-compose.yaml file to:

    • Build the image (if it hasn't been built already or if changes are detected).
    • Start a container named claude-mcp-server.
    • Map port 3000 on your host machine to port 3000 inside the container.
    • Mount the src, data, and config directories as volumes. This means that any changes you make to these directories on your host machine are immediately reflected inside the running container, allowing for hot-reloading during development (see the compose sketch below).
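
    For orientation, the relevant parts of such a compose file might look roughly like this (a sketch under assumptions; the project's actual config/docker-compose.yaml may differ in structure and paths):

      # Hypothetical sketch -- not the project's actual config/docker-compose.yaml
      services:
        claude-mcp-server:
          build:
            context: ..
            dockerfile: config/dockerfile
          container_name: claude-mcp-server
          ports:
            - "3000:3000"            # host:container
          env_file:
            - ../.env
          volumes:                   # hot-reloading in development
            - ../src:/app/src
            - ../data:/app/data
            - ../config:/app/config
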
  4. Run the Container (Production Mode):

    npm run docker:run
    # or, equivalently:
    # docker-compose up
    

    This command starts the container without mounting the local directories as volumes. This is suitable for production because the container will use the code and configuration that were baked into the image during the build process.
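
    The baking happens at image build time via config/dockerfile. As an illustration only (this is a generic sketch, not the project's actual Dockerfile), such a file typically copies the sources into the image and sets the start command:

      # Illustrative sketch -- not the project's actual config/dockerfile
      FROM node:20-alpine
      WORKDIR /app
      COPY package*.json ./
      RUN npm ci --omit=dev        # install production dependencies only
      COPY src ./src
      COPY config ./config
      EXPOSE 3000
      CMD ["npm", "start"]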

  5. Running the Python Server Inside the Docker Container:

    Even though the Node.js server is the default entry point, you can still run the Python server within the running Docker container:

    • Get a Shell Inside the Container:

      docker exec -it claude-mcp-server /bin/bash
      

      This command opens an interactive bash shell inside the running claude-mcp-server container.

    • Run the Python Server:

      python src/api/server.py
      

      The Python server will run on port 8000 inside the container. To access it from your host, either adjust docker-compose.yaml to expose port 8000 or use curl from within the container.
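
      For example, adding a second mapping under the service's ports section of config/docker-compose.yaml would expose it (a sketch, assuming a standard compose ports list):

        ports:
          - "3000:3000"   # Node.js server
          - "8000:8000"   # FastAPI (Python) server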

  6. Stopping the Container:

    docker-compose down
    

Usage

Once the server is running (either Node.js or Python), you can interact with it via MCP-compliant clients. The server exposes tools that can be invoked using a JSON-RPC 2.0 protocol. The specific tool names and parameters are defined within the src/tools directory (for the Node.js server) and src/core/mcp_tools.py (for the Python server). Refer to howTO.md for available tools.
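
As an illustration, an MCP-style tools/call request for the web_scrape tool might look like the following. The method name follows the MCP JSON-RPC convention and the url argument is an assumed parameter name, so check howTO.md and the tool definitions for the exact names this server expects:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "web_scrape",
        "arguments": {
          "url": "https://example.com"
        }
      }
    }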

Contributing

See the main mcp-projects/README.md for general contributing guidelines.

License

MIT License