clydeiii/hardtoget-mcp
Hard to Get - MCP Server
This is a Model Context Protocol (MCP) server that allows LLMs to play the cooperative word association board game "Hard to Get" with each other. The server manages game state, coordinates player turns, and tracks game results.
Overview
Hard to Get is a cooperative word association game where two players work together:
- The Witness knows the secret "key word" and must provide clues
- The Detective must eliminate words based on the Witness's clues to find the key word
The game lasts up to 5 rounds. In each round:
- The Witness is given a dilemma with two options (e.g., "Hot vs. Cold")
- The Witness chooses which option better matches the key word
- The Detective eliminates at least one word from the board that doesn't match the chosen option
- The game continues until all non-key words are eliminated, the key word is eliminated, or 5 rounds are completed
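The round structure above can be sketched in plain Python. This is only an illustration of the rules, not code from server.py; the function names and the toy association function are mine:

```python
# Simplified sketch of one "Hard to Get" round (illustrative only;
# the real server tracks this state in SQLite across turns).

def play_round(board, key_word, dilemma, matches_option):
    """One round: the Witness picks a dilemma side, then the Detective
    eliminates words that don't match that side. A real Detective must
    eliminate at least one word; here we eliminate every non-match."""
    # Witness: choose the option that better fits the key word.
    choice = dilemma[0] if matches_option(key_word, dilemma[0]) else dilemma[1]

    # Detective: eliminate non-key words that don't match the choice.
    eliminated = [w for w in board
                  if w != key_word and not matches_option(w, choice)]
    remaining = [w for w in board if w not in eliminated]
    return choice, remaining

# Toy association function standing in for real player judgment.
hot_words = {"sun", "pepper", "volcano"}
def matches_option(word, option):
    return (word in hot_words) == (option == "Hot")

choice, remaining = play_round(
    board=["sun", "ice", "pepper", "snow"],
    key_word="sun",
    dilemma=("Hot", "Cold"),
    matches_option=matches_option,
)
# choice == "Hot"; "ice" and "snow" are eliminated, leaving ["sun", "pepper"]
```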
Requirements
- Python 3.7+
- Flask
- Flask-SocketIO
- SQLite3
- Python-SocketIO client (for the client implementation)
You can install the dependencies with:
pip install flask flask-socketio python-socketio requests
Project Structure
├── server.py # MCP server implementation
├── client.py # Client implementation for LLMs
├── words.txt # 500 words/phrases for the game board
├── dilemmas.txt # 150 dilemmas for the Witness to evaluate
├── hard_to_get.db # SQLite database (created automatically)
└── README.md # This file
Running the Server
- Make sure you have all dependencies installed
- Run the server:
python server.py
The server will start on http://localhost:5000
Client API
Clients (LLMs) interact with the server using the following API endpoints:
1. Register a new client
POST /register
Body: {"model_name": "model-name-string"}
Response: {"client_id": "uuid", "status": "registered"}
2. Join a game
POST /join_game
Body: {"client_id": "uuid", "preferred_role": "Witness|Detective|null"}
Response: {
"game_id": "uuid",
"role": "Witness|Detective",
"game_ready": true|false,
"board": ["word1", "word2", ...]
}
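Registering and joining can be sketched with the requests library (installed above). The endpoint paths and field names come from this README; the helper names and "example-model" are mine:

```python
import requests

SERVER = "http://localhost:5000"  # server address from "Running the Server"

def register_payload(model_name):
    """Body for POST /register."""
    return {"model_name": model_name}

def join_payload(client_id, preferred_role=None):
    """Body for POST /join_game; preferred_role may be None."""
    return {"client_id": client_id, "preferred_role": preferred_role}

def register(model_name):
    """Register this client and return its UUID."""
    resp = requests.post(f"{SERVER}/register", json=register_payload(model_name))
    resp.raise_for_status()
    return resp.json()["client_id"]

def join_game(client_id, preferred_role=None):
    """Join matchmaking; response carries game_id, role, game_ready, board."""
    resp = requests.post(f"{SERVER}/join_game",
                         json=join_payload(client_id, preferred_role))
    resp.raise_for_status()
    return resp.json()

# Usage (with the server running):
#   cid = register("example-model")
#   game = join_game(cid, preferred_role="Detective")
#   print(game["role"], game["game_ready"])
```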
3. Submit Witness choice
POST /witness_choice
Body: {
"game_id": "uuid",
"client_id": "uuid",
"dilemma_choice": "selected-dilemma-option"
}
Response: {"status": "success"}
4. Submit Detective choice
POST /detective_choice
Body: {
"game_id": "uuid",
"client_id": "uuid",
"eliminated_words": ["word1", "word2", ...]
}
Response: {
"status": "success",
"game_over": true|false,
"win": true|false,
"remaining_words": ["word1", "word2", ...],
"key_word_eliminated": true|false
}
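The /detective_choice response is what drives the client's turn loop. A small helper to interpret it might look like this (the field names come from the response above; the helper itself is illustrative):

```python
def interpret_detective_result(resp):
    """Summarize a /detective_choice response. Field names follow the
    API above; this function is an illustration, not part of server.py."""
    if not resp.get("game_over"):
        return f"Round continues: {len(resp['remaining_words'])} words left"
    if resp.get("key_word_eliminated"):
        return "Loss: the key word was eliminated"
    return "Win!" if resp.get("win") else "Loss: ran out of rounds"

print(interpret_detective_result({
    "status": "success",
    "game_over": True,
    "win": True,
    "remaining_words": ["keyword"],
    "key_word_eliminated": False,
}))
# → Win!
```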
Real-time Notifications
The server uses Socket.IO to notify clients about game events:
- Connect to the Socket.IO server and join the game and client rooms:
socket.emit('join', {'client_id': clientId, 'game_id': gameId});
- Listen for events:
- game_started: Notifies when a game has started
- witness_turn: Tells the Witness it's their turn; provides the key word and dilemma
- detective_turn: Tells the Detective it's their turn; provides the dilemma and the Witness's choice
- game_ended: Notifies both players of the game's end and result
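A minimal event-handling skeleton for these events could look like the sketch below. The event names and the join emit come from this README; the exact payload contents depend on server.py, so the handlers just record the data dict:

```python
# Handlers for the server's Socket.IO events. Each records the event so
# the client's main loop can react; payload keys are left to server.py.

received = []  # (event_name, data) pairs, oldest first

def on_witness_turn(data):
    # The server provides the key word and this round's dilemma here.
    received.append(("witness_turn", data))

def on_detective_turn(data):
    # The server provides the dilemma and the Witness's choice here.
    received.append(("detective_turn", data))

def on_game_ended(data):
    # The server announces the result here.
    received.append(("game_ended", data))

# Wiring it up with the python-socketio client (server must be running):
#
#   import socketio
#   sio = socketio.Client()
#   sio.on("witness_turn", on_witness_turn)
#   sio.on("detective_turn", on_detective_turn)
#   sio.on("game_ended", on_game_ended)
#   sio.connect("http://localhost:5000")
#   sio.emit("join", {"client_id": client_id, "game_id": game_id})
#   sio.wait()
```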
Database Schema
The server maintains three tables:
1. clients
- uuid: Unique client identifier
- model_name: String identifying the LLM model
- status: Client status (available, searching, in_game)
2. games
- id: Unique game identifier
- witness_uuid: UUID of the Witness client
- detective_uuid: UUID of the Detective client
- status: Game status (pending, active, completed)
- key_word: The secret word the Detective must find
- current_round: Current game round (1-5)
- board: JSON string of words currently on the board
3. results
- game_id: Game identifier
- witness_uuid: UUID of the Witness client
- witness_model: Model name of the Witness
- detective_uuid: UUID of the Detective client
- detective_model: Model name of the Detective
- result: Game result (win or loss)
Running the Client
A sample client implementation is provided in client.py. To run it:
python client.py http://localhost:5000 "model-name" [role]
Arguments:
- http://localhost:5000: Server URL
- model-name: LLM model name (e.g., "gpt-4", "claude-3")
- role (optional): Preferred role ("Witness" or "Detective")
The sample client shows how to connect to the server, interact with the API, and handle Socket.IO events. In a real implementation, the LLM would make the game decisions based on its language model capabilities.
Implementing LLM Clients
To create a real LLM client for this game:
- Replace the choose_dilemma_side method with LLM-based decision making to evaluate which side of the dilemma better matches the key word
- Replace the choose_eliminations method with LLM-based decision making to determine which words to eliminate based on the dilemma and the Witness's choice
The current client implementation includes placeholder logic that should be replaced with actual LLM calls in a production system.
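As one sketch of such a replacement for choose_dilemma_side (the method name comes from client.py; the prompt wording, the ask_llm hook, and the fallback behavior are my own illustration):

```python
def choose_dilemma_side(key_word, option_a, option_b, ask_llm):
    """Ask an LLM which dilemma option better matches the key word.

    ask_llm is a placeholder: any callable that takes a prompt string
    and returns the model's text reply (e.g., a wrapper around your
    model provider's completion API).
    """
    prompt = (
        f'The secret key word is "{key_word}". Which option better '
        f'matches it: "{option_a}" or "{option_b}"? Answer with exactly '
        "one of the two options."
    )
    reply = ask_llm(prompt).strip().lower()
    # Fall back to option_a unless the reply unambiguously names option_b.
    if option_b.lower() in reply and option_a.lower() not in reply:
        return option_b
    return option_a

# Demonstration with a canned stand-in for a real model call:
fake_llm = lambda prompt: "Cold"
print(choose_dilemma_side("glacier", "Hot", "Cold", fake_llm))  # → Cold
```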
Generating Data Files
The server will automatically generate words.txt and dilemmas.txt if they don't exist. However, you can customize these files to include your own words and dilemmas.
- words.txt: One word/phrase per line
- dilemmas.txt: One dilemma per line, with options separated by a comma (e.g., "Hot,Cold")
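Given those formats, loading the files takes only a few lines. A sketch (file handling only, no validation; function names are mine, not from server.py):

```python
def load_words(path="words.txt"):
    """One word/phrase per line; blank lines are skipped."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def load_dilemmas(path="dilemmas.txt"):
    """One dilemma per line, options separated by a comma, e.g. "Hot,Cold".
    Returns a list of (option_a, option_b) tuples."""
    with open(path, encoding="utf-8") as f:
        return [tuple(part.strip() for part in line.split(",", 1))
                for line in f if line.strip()]

# Usage:
#   board_words = load_words()
#   dilemmas = load_dilemmas()   # e.g. [("Hot", "Cold"), ...]
```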