# MCP Waifu Queue
This project implements an MCP (Model Context Protocol) server for a conversational AI "waifu" character, leveraging the Google Gemini API via a Redis queue for asynchronous processing. It utilizes the FastMCP library for simplified server setup and management.
## Table of Contents
- Features
- Architecture
- Prerequisites
- Installation
- Configuration
- Running the Service
- MCP API
- Testing
- Troubleshooting
- Contributing
- License
## Features

- Text generation via a provider abstraction:
  - OpenRouter (default), using the model name from `~/.model-openrouter` or `deepseek/deepseek-chat-v3-0324:free`.
  - Google Gemini, supported as a fallback or via explicit selection, using the model name from `~/.model-gemini` or `gemini-2.5-pro`.
- Request queuing using Redis for handling concurrent requests asynchronously.
- MCP-compliant API using `FastMCP`.
- Job status tracking via MCP resources.
- Configuration via environment variables (`.env` file).
- Provider selection:
  - Default provider: OpenRouter.
  - Override via `PROVIDER=openrouter` or `PROVIDER=gemini`.
- API key loading (see the sketch after this list):
  - OpenRouter: `OPENROUTER_API_KEY` or `~/.api-openrouter`
  - Gemini: `GEMINI_API_KEY` or `GOOGLE_API_KEY` or `~/.api-gemini`
- Model selection files in the home directory:
  - `~/.model-openrouter` for the OpenRouter model name
  - `~/.model-gemini` for the Gemini model name
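For illustration, the environment-variable-first, file-fallback lookup might look like the following. This is a minimal sketch using only names from this README (`PROVIDER`, `~/.api-openrouter`, `~/.api-gemini`); the actual logic in the project's configuration code may differ.

```python
import os
from pathlib import Path
from typing import Optional

def resolve_provider() -> str:
    """Default to OpenRouter unless PROVIDER overrides it."""
    return os.environ.get("PROVIDER", "openrouter").lower()

def resolve_api_key(provider: str) -> Optional[str]:
    """Prefer environment variables; fall back to a key file in $HOME."""
    if provider == "openrouter":
        env_names = ["OPENROUTER_API_KEY"]
        key_file = Path.home() / ".api-openrouter"
    else:  # "gemini"
        env_names = ["GEMINI_API_KEY", "GOOGLE_API_KEY"]
        key_file = Path.home() / ".api-gemini"
    for name in env_names:
        if os.environ.get(name):
            return os.environ[name]
    if key_file.exists():
        return key_file.read_text().strip()  # key on a single line
    return None
```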
## Architecture
The project consists of several key components:
- `main.py`: The main entry point, initializing the `FastMCP` application and defining MCP tools/resources.
- `respond.py`: Contains the core text generation logic using the Google GenAI SDK (`google-genai`) via the centralized `genai.Client`.
- `task_queue.py`: Handles interactions with the Redis queue (using `python-rq`), enqueuing generation requests.
- `utils.py`: Contains utility functions, specifically `call_predict_response`, which is executed by the worker to call the Gemini logic in `respond.py`.
- `worker.py`: A Redis worker (`python-rq`) that processes jobs from the queue, calling `call_predict_response`.
- `config.py`: Manages configuration using `pydantic-settings`.
- `models.py`: Defines Pydantic models for MCP request and response validation.
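As a rough sketch of the generation call in `respond.py`, the `google-genai` SDK usage might look like this (the model name and error handling in the real module may differ):

```python
from google import genai

# The client reads GEMINI_API_KEY / GOOGLE_API_KEY from the environment
# when no explicit api_key is passed.
client = genai.Client()

def predict_response(prompt: str) -> str:
    """Call the Gemini API via the centralized genai.Client and return the text."""
    response = client.models.generate_content(
        model="gemini-2.5-pro",  # in this project the name comes from ~/.model-gemini
        contents=prompt,
    )
    return response.text
```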
The flow of a request is as follows:
1. A client sends a request to the `generate_text` MCP tool (defined in `main.py`).
2. The tool enqueues the request (prompt) to a Redis queue (handled by `task_queue.py`).
3. A `worker.py` process picks up the job from the queue.
4. The worker executes the `call_predict_response` function (from `utils.py`).
5. `call_predict_response` calls the `predict_response` function (in `respond.py`), which interacts with the Gemini API.
6. The generated text (or an error message) is returned by `predict_response` and stored as the job result by RQ.
7. The client can retrieve the job status and result using the `job://{job_id}` MCP resource (defined in `main.py`).
```mermaid
graph LR
    subgraph Client
        A[User/Client] -->|1. Send Prompt via MCP Tool| B("mcp-waifu-queue: main.py")
    end
    subgraph Server["mcp-waifu-queue Server"]
        B -->|2. Enqueue Job (prompt)| C[Redis Queue]
        B -->|7. Return Job ID| A
        D["RQ Worker (worker.py)"] -.->|Polls Queue| C
        D -->|3. Dequeue Job & Execute| E(utils.call_predict_response)
        E -->|4. Call Gemini Logic| F(respond.predict_response)
        F -->|5. Call Gemini API| G[Google Gemini API]
        G -->|6. Return Response| F
        F --> E
        E -->|Update Job Result in Redis| C
        A -->|8. Check Status via MCP Resource| B
        B -->|9. Fetch Job Status/Result| C
        B -->|10. Return Status/Result| A
    end
```
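To make steps 1-2 and 7 concrete, here is a hedged sketch of how the tool and queue handoff might look across `main.py` / `task_queue.py`. The tool and function names (`generate_text`, `call_predict_response`) come from this README; the actual modules may wire things up differently.

```python
from mcp.server.fastmcp import FastMCP
from redis import Redis
from rq import Queue

mcp = FastMCP("mcp-waifu-queue")
queue = Queue(connection=Redis.from_url("redis://localhost:6379"))

@mcp.tool()
def generate_text(prompt: str) -> dict:
    """Enqueue a generation request and return the job ID immediately."""
    # The worker imports and runs this function by its dotted path.
    job = queue.enqueue("mcp_waifu_queue.utils.call_predict_response", prompt)
    return {"job_id": job.id}
```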
## Prerequisites

- Python 3.7+
- `pip` or `uv` (Python package installer)
- Redis server (installed and running)
- An OpenRouter API key and/or a Google Gemini API key

You can find instructions for installing Redis on the official Redis website: https://redis.io/docs/getting-started/

You can obtain a Gemini API key from Google AI Studio: https://aistudio.google.com/app/apikey
## Installation

1. Clone the repository:

   ```bash
   git clone <YOUR_REPOSITORY_URL>
   cd mcp-waifu-queue
   ```

2. Create and activate a virtual environment using `uv`:

   ```bash
   python -m uv venv .venv
   .venv/Scripts/python.exe -m ensurepip
   .venv/Scripts/python.exe -m pip install uv
   ```

   (The `.venv/Scripts/` paths above are Windows-style; on Linux/macOS, use `.venv/bin/` instead.)

3. Install dependencies:

   ```bash
   .venv/Scripts/python.exe -m uv pip install -r requirements.txt
   .venv/Scripts/python.exe -m uv pip install -r requirements-dev.txt
   ```
## Configuration

1. Provider Selection: The default provider is OpenRouter. To override, set:

   ```bash
   PROVIDER=openrouter
   # or
   PROVIDER=gemini
   ```

2. Model Names, via files in `$HOME`:

   - OpenRouter model file: `echo "deepseek/deepseek-chat-v3-0324:free" > ~/.model-openrouter`
   - Gemini model file: `echo "gemini-2.5-pro" > ~/.model-gemini`

3. API Keys: Preferred via environment variables, with a file fallback:

   - OpenRouter: `OPENROUTER_API_KEY` or `~/.api-openrouter`
   - Gemini: `GEMINI_API_KEY` or `GOOGLE_API_KEY` or `~/.api-gemini`

   ```bash
   echo "YOUR_API_KEY_HERE" > ~/.api-gemini
   ```

   (Replace `YOUR_API_KEY_HERE` with your actual key.)

4. Other Settings: Copy the `.env.example` file to `.env`:

   ```bash
   cp .env.example .env
   ```

5. Modify the `.env` file to set the remaining configuration values (see the sketch after this list):

   - `MAX_NEW_TOKENS`: Maximum number of tokens for the Gemini response (default: `2048`).
   - `REDIS_URL`: The URL of your Redis server (default: `redis://localhost:6379`).
   - `FLASK_ENV`, `FLASK_APP`: Optional; related to Flask if used elsewhere, not core to the MCP server/worker operation.
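Since `config.py` manages configuration with `pydantic-settings`, a minimal sketch of such a settings model might look like this (field names mirror the variables above; the real class may define more fields):

```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Config(BaseSettings):
    # Reads from the environment and from .env; matching is case-insensitive,
    # so MAX_NEW_TOKENS in .env maps to max_new_tokens here.
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    max_new_tokens: int = 2048
    redis_url: str = "redis://localhost:6379"

config = Config()
```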
## Running the Service

1. Ensure Redis is running. If you installed it locally, you might need to start the Redis server process (e.g., the `redis-server` command, or via a service manager).

2. Start the RQ Worker: Open a terminal, activate your virtual environment (`source .venv/bin/activate` or similar), and run:

   ```bash
   python -m mcp_waifu_queue.worker
   ```

   This command starts the worker process, which listens for jobs on the Redis queue defined in your `.env` file. Keep this terminal running. (A sketch of what this entry point might look like follows this list.)

3. Start the MCP Server: Open another terminal, activate the virtual environment, and run the MCP server using a tool like `uvicorn` (you might need to install it first: `pip install uvicorn` or `uv pip install uvicorn`):

   ```bash
   uvicorn mcp_waifu_queue.main:app --reload --port 8000  # example port
   ```

   Replace `8000` with your desired port. The `--reload` flag is useful for development.

   Alternatively, you can use the `start-services.sh` script (primarily designed for Linux/macOS environments), which attempts to start Redis (if not running) and the worker in the background:

   ```bash
   # Ensure the script is executable:
   chmod +x ./scripts/start-services.sh
   ./scripts/start-services.sh
   # Then start the MCP server manually as shown above.
   ```
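For reference, a minimal RQ worker entry point of the kind `python -m mcp_waifu_queue.worker` invokes might look like this sketch (the real `worker.py` may configure queue names, logging, or settings loading differently):

```python
import os

from redis import Redis
from rq import Queue, Worker

# Connect to the same Redis instance the server enqueues to.
redis_conn = Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

if __name__ == "__main__":
    worker = Worker([Queue(connection=redis_conn)], connection=redis_conn)
    worker.work()  # blocks, processing jobs as they arrive
```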
## MCP API
The server provides the following MCP-compliant endpoints:
Tools
generate_text- Description: Sends a text generation request to the Gemini API via the background queue.
- Input:
{"prompt": "Your text prompt here"}(Type:GenerateTextRequest) - Output:
{"job_id": "rq:job:..."}(A unique ID for the queued job)
Resources
job://{job_id}- Description: Retrieves the status and result of a previously submitted job.
- URI Parameter:
job_id(The ID returned by thegenerate_texttool). - Output:
{"status": "...", "result": "..."}(Type:JobStatusResponse)status: The current state of the job (e.g., "queued", "started", "finished", "failed"). RQ uses slightly different terms internally ("started" vs "processing", "finished" vs "completed"). The resource maps these.result: The generated text from Gemini if the job status is "completed", otherwisenull. If the job failed, the result might benullor contain error information depending on RQ's handling.
## Testing

The project includes tests. Ensure you have installed the test dependencies (`pip install -e .[test]` or `uv pip install -e .[test]`).

Run the tests using `pytest`:

```bash
pytest tests
```

Note: Tests might require mocking Redis (`fakeredis`) and potentially the Gemini API calls, depending on their implementation.
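As an example of the `fakeredis` approach mentioned above, a queue test might look like this sketch (the test name and structure are illustrative; the repository's tests may mock at a different level):

```python
import fakeredis
from rq import Queue

def test_enqueue_runs_job_synchronously():
    conn = fakeredis.FakeStrictRedis()
    # is_async=False makes RQ execute the job immediately in-process,
    # so no separate worker is needed in the test.
    queue = Queue(is_async=False, connection=conn)
    job = queue.enqueue(len, "four")
    assert job.result == 4
```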
## Troubleshooting

- Error: `OpenRouter API key not available`: Ensure `OPENROUTER_API_KEY` is set or `~/.api-openrouter` exists with your key on a single line (no extra whitespace).
- Error: `Gemini API key not available`: Ensure `GEMINI_API_KEY` or `GOOGLE_API_KEY` is set, or `~/.api-gemini` exists with your key on a single line.
- Error during Gemini API call (e.g., `AuthenticationError`, `PermissionDenied`): Double-check that the API key in `~/.api-gemini` (or the fallback environment variable) is correct and valid. Ensure the API is enabled for your Google Cloud project, if applicable.
- Jobs stuck in "queued": Verify that the RQ worker (`python -m mcp_waifu_queue.worker`) is running in a separate terminal and connected to the same Redis instance specified in `.env`. Check the worker logs for errors.
- `ConnectionRefusedError` (Redis): Make sure your Redis server is running and accessible at the `REDIS_URL` specified in `.env`.
- MCP server connection issues: Ensure the MCP server (`uvicorn ...`) is running and that you are connecting to the correct host and port.
## Contributing

1. Fork the repository.
2. Create a new branch for your feature or bug fix (`git checkout -b feature/your-feature-name`).
3. Make your changes and commit them (`git commit -am 'Add some feature'`).
4. Push your branch to your forked repository (`git push origin feature/your-feature-name`).
5. Create a new Pull Request on the original repository.

Please adhere to the project's coding standards and linting rules (`ruff`).
## License

This project is licensed under the MIT-0 License; see the `LICENSE` file for details.