Agent MCP
An AI agent wrapped in an MCP (Model Context Protocol) server. This project provides a flexible, production-ready wrapper around pydantic-ai agents that can be deployed as an MCP server with conversation context management.
Features
- Multi-Provider Support: Works with OpenAI, Anthropic, and Google/Gemini models
- Conversation Context: Built-in conversation history management via context IDs
- MCP Server: Exposed as an MCP server with FastMCP for easy integration
- Flexible Configuration: Environment-based configuration for different models and providers
- Production-Ready: Docker support with docker-compose for easy deployment
- Type-Safe: Built with pydantic and pydantic-ai for robust type safety
- Testing: Comprehensive test suite with pytest and pytest-asyncio
Project Structure
agent-mcp/
├── src/
│   ├── agent_mcp/            # Main agent implementation
│   │   ├── __init__.py
│   │   ├── __main__.py       # MCP server entry point
│   │   ├── agent.py          # AgentWrapper class
│   │   └── prompts.py        # System prompts
│   └── tests/                # Test suite
│       ├── conftest.py
│       └── integration/      # Integration tests
├── pyproject.toml            # Project dependencies
├── Taskfile.yml              # Task automation
├── Dockerfile                # Container definition
├── docker-compose.yml        # Docker compose setup
└── README.md
Quick Start
Prerequisites
- uv for dependency management
- Task (optional, for task automation)
- Docker and docker-compose (optional, for containerized deployment)
Installation
- Clone the repository:
git clone https://github.com/yourusername/agent-mcp.git
cd agent-mcp
- Install dependencies:
uv sync --dev --all-packages --group security
- Configure environment variables:
cp .env.example .env
# Edit .env with your configuration
Required environment variables:
- `MODEL_PROVIDER`: Provider name (`openai`, `anthropic`, `gemini`, or `google`)
- `MODEL_NAME`: Model identifier (e.g., `gpt-4o`, `claude-sonnet-4-0`, `gemini-2.0-flash-exp`)
- `API_KEY`: Your API key (optional if using provider-specific variables such as `OPENAI_API_KEY`)
Optional environment variables:
- `PORT`: Server port (default: `8100`)
- `DEBUG`: Enable debug logging (`true` or `false`)
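For reference, a minimal `.env` for an Anthropic-backed deployment might look like the following (all values are placeholders):

```shell
MODEL_PROVIDER=anthropic
MODEL_NAME=claude-sonnet-4-0
API_KEY=your-api-key        # or set ANTHROPIC_API_KEY instead
PORT=8100
DEBUG=false
```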
Running Locally
Run the MCP server:
task run
Or using uv directly:
uv run python -m src.agent_mcp
The server will start on http://127.0.0.1:8100 by default.
Running with Docker
Build and run with docker-compose:
task compose
Or manually:
docker-compose up --build
Usage
Using the Chat Tool
The MCP server exposes a chat tool that accepts text queries and returns responses from the AI agent.
HTTP Headers
Control conversation behavior with these optional headers:
- `X-Context-ID`: Conversation context ID to continue a previous conversation
- `X-Max-Turns`: Maximum number of agent turns (overrides the default)
Example:
curl -X POST http://localhost:8100/chat \
-H "Content-Type: application/json" \
-H "X-Context-ID: session-123" \
-H "X-Max-Turns: 5" \
-d '{"query": "What is the capital of France?"}'
Using the AgentWrapper Directly
import asyncio

from agent_mcp.agent import AgentWrapper

async def main():
    # Initialize the agent
    agent = AgentWrapper(
        model_provider="anthropic",
        model_name="claude-sonnet-4-0",
        system_prompt="You are a helpful AI assistant.",
        api_key="your-api-key",
    )

    # Run a query
    response = await agent.run("What is the capital of France?")
    print(response)

    # Continue the conversation with a context ID
    response = await agent.run("What about Germany?", context_id="session-123")
    print(response)

asyncio.run(main())
Development
Available Tasks
The project uses Task for automation. Available tasks:
- `task install` - Install dependencies
- `task format` - Format code with ruff
- `task lint` - Run linting checks
- `task typecheck` - Run type checking with ty
- `task test` - Run the test suite
- `task check` - Run all checks (lint, typecheck, test, security)
- `task run` - Run the MCP server locally
- `task compose` - Build and run with docker-compose
Running Tests
Run the full test suite:
task test
Or with pytest directly:
uv run pytest -v
For integration tests (requires API keys):
uv run pytest src/tests/integration/ -v
Code Quality
Format and fix linting issues:
task format
Run all quality checks:
task check
Architecture
AgentWrapper
The AgentWrapper class (`src/agent_mcp/agent.py`) provides:
- Multi-provider support: Automatic model and provider initialization for OpenAI, Anthropic, and Google
- Conversation management: In-memory caching of conversation history by context ID
- Extensibility: Support for custom tools, builtin tools, and MCP toolsets
- Type safety: Full type hints and pydantic validation
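The in-memory conversation cache can be pictured as a dictionary keyed by context ID. The sketch below is illustrative only (`ConversationCache` and its methods are hypothetical names, not the project's actual implementation):

```python
import uuid

class ConversationCache:
    """Hypothetical sketch of in-memory conversation history keyed by context ID."""

    def __init__(self):
        self._histories: dict[str, list[dict]] = {}

    def get(self, context_id=None):
        # A missing context ID starts a fresh conversation under a new ID.
        if context_id is None:
            context_id = str(uuid.uuid4())
        return context_id, self._histories.setdefault(context_id, [])

    def append(self, context_id, role, content):
        # Record one message in the history for this context.
        self._histories.setdefault(context_id, []).append(
            {"role": role, "content": content}
        )

cache = ConversationCache()
cid, history = cache.get("session-123")
cache.append(cid, "user", "What is the capital of France?")
```

A dict-based cache like this is process-local, which is why the Future Enhancements section mentions a Redis-backed cache for distributed deployments.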
MCP Server
The MCP server (`src/agent_mcp/__main__.py`) provides:
- FastMCP integration: HTTP-based MCP server using FastMCP
- Header-based configuration: Control conversation context and behavior via HTTP headers
- Environment-based setup: Flexible configuration via environment variables
- Production logging: Configurable logging with debug mode support
Configuration
Supported Model Providers
| Provider | Environment Variable | Example Models |
|---|---|---|
| OpenAI | OPENAI_API_KEY | gpt-4o, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-0, claude-3-5-sonnet-20241022 |
| Google/Gemini | GOOGLE_API_KEY | gemini-2.0-flash-exp, gemini-1.5-pro |
You can provide API keys either via the API_KEY environment variable or via provider-specific variables listed above.
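That fallback can be sketched as follows; `resolve_api_key` and the variable mapping are illustrative helpers inferred from the table above, not the project's actual code:

```python
import os

# Provider-specific environment variables, per the table above.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def resolve_api_key(provider, env=None):
    """Prefer the generic API_KEY, then fall back to the provider-specific variable."""
    env = os.environ if env is None else env
    key = env.get("API_KEY") or env.get(PROVIDER_ENV_VARS.get(provider, ""), "")
    if not key:
        raise ValueError(f"No API key configured for provider {provider!r}")
    return key
```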
Custom System Prompts
Modify `src/agent_mcp/prompts.py` to customize the agent's behavior:
GENERAL_PURPOSE_PROMPT = """Your custom system prompt here..."""
Future Enhancements
- Redis-based conversation cache for distributed deployments
- Support for custom tools and MCP servers via configuration
- Multiple system prompts selectable at runtime
- Streaming response support
- Authentication and rate limiting
- Metrics and observability
Contributing
Contributions are welcome! Please ensure:
- Code passes all checks: `task check`
- Tests are added for new features
- Type hints are used throughout
- Documentation is updated
License
This project is licensed under the Apache License 2.0 - see the license file for details.
Credits
Built with:
- pydantic-ai - Type-safe AI agent framework
- FastMCP - Fast Model Context Protocol server
- uv - Modern Python package manager
- Task - Task automation tool