python-mcp-server
Sample MCP Server in Go.
When called, this MCP server spins up a Python environment that executes code generated by an LLM. It can be useful for scraping web content.
This project is based on a demo on YouTube, with a few slight modifications.
This project uses Podman as the container engine instead of Docker.
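As a rough illustration of the idea (not this project's exact code), LLM-generated Python can be piped into a throwaway Podman container from Go using the standard os/exec package. The function name, image, flags, and timeout below are assumptions for the sketch:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runPythonInPodman pipes LLM-generated code to a Python interpreter running
// in a throwaway Podman container. The image name, flags, and timeout are
// illustrative assumptions, not the project's actual values.
func runPythonInPodman(code string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// "-i" keeps stdin open so the code can be piped to "python -".
	cmd := exec.CommandContext(ctx, "podman", "run", "--rm", "-i",
		"docker.io/library/python:3.12-slim", "python", "-")
	cmd.Stdin = strings.NewReader(code)

	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("python execution failed: %w\n%s", err, out)
	}
	return string(out), nil
}

func main() {
	out, err := runPythonInPodman(`print("hello from the sandbox")`)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(out)
}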
Build the project
This project requires Go 1.23+. To build locally:
$ go mod tidy && go install
Testing with mcphost
For testing, use mcphost: https://github.com/mark3labs/mcphost
Configure the MCP server
Edit the ~/.mcp.json file, adding the following content:
{
  "mcpServers": {
    "python-repl": {
      "command": "python-mcp-server"
    }
  }
}
This configuration tells mcphost to run the python-mcp-server command, aliased as python-repl.
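A plain command works here because the server speaks MCP over stdin/stdout. A minimal sketch of such a server, assuming the mark3labs/mcp-go library (one common choice for Go MCP servers, not necessarily what this project uses), might look like the following; the tool name matches the execute-python tool seen in the logs later, but everything else is illustrative:

package main

import (
	"context"
	"fmt"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	// Declare the server; mcphost launches this binary and talks to it
	// over stdin/stdout.
	s := server.NewMCPServer("python-repl", "0.1.0")

	// Expose a single tool that accepts Python source code as a string.
	tool := mcp.NewTool("execute-python",
		mcp.WithDescription("Execute Python code and return its output"),
		mcp.WithString("code",
			mcp.Required(),
			mcp.Description("Python source code to run"),
		),
	)

	s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// Argument access differs between mcp-go versions; this assumes a
		// release where RequireString is available.
		code, err := req.RequireString("code")
		if err != nil {
			return mcp.NewToolResultError(err.Error()), nil
		}
		// The real server would execute the code (e.g. in a Podman
		// container as sketched above); echoing a summary keeps this short.
		return mcp.NewToolResultText(fmt.Sprintf("received %d bytes of code", len(code))), nil
	})

	// Serve MCP over stdio until the host disconnects.
	if err := server.ServeStdio(s); err != nil {
		fmt.Println(err)
	}
}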
Running mcphost
This example uses Claude 3.5 Sonnet as the default LLM, which allows testing without a local GPU.
Export your Anthropic API key:
$ export ANTHROPIC_API_KEY='your-api-key'
Execute the host:
$ mcphost
It should print something like this:
$ mcphost
2025/04/28 21:44:32 INFO Model loaded provider=anthropic model=claude-3-5-sonnet-latest
2025/04/28 21:44:32 INFO Initializing server... name=python-repl
2025/04/28 21:44:32 INFO Server connected name=python-repl
2025/04/28 21:44:32 INFO Tools loaded server=python-repl count=1
Enter your prompt (Type /help for commands, Ctrl+C to quit)
From here, you can enter prompts that involve web scraping to see the MCP server in action.
Enter your prompt (Type /help for commands, Ctrl+C to quit)
Show me the most starred repositories on github
Assistant:
I'll create a script to fetch the most starred repositories from GitHub using their API:
2025/04/28 22:14:16 INFO 🔧 Using tool name=python-repl__execute-python
2025/04/28 22:14:16 INFO Usage statistics input_tokens=1665 output_tokens=309 total_tokens=1974