ShawnKyzer/strands-mcp
Strands Agents MCP Server for Amazon Q
This project provides an MCP (Model Context Protocol) server that scrapes the Strands Agents documentation and indexes it in Elasticsearch, making it searchable for Amazon Q Developer.
Architecture
- Documentation Scraper: Python script that crawls the Strands Agents documentation (v1.1.x)
- Elasticsearch Index: Stores scraped documentation with full-text search capabilities
- MCP Server: Provides Amazon Q with access to the indexed documentation
- Docker Compose: Orchestrates all services
Components
- scraper/ - Documentation scraping logic
- mcp_server/ - MCP server implementation
- elasticsearch/ - Elasticsearch configuration
- docker-compose.yml - Service orchestration
Quick Start
Option 1: FastMCP Web Server (Recommended)
This project now supports web-accessible MCP via FastMCP:
# Install UV if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh
# Start Elasticsearch
docker-compose up -d elasticsearch
# Run scraper to populate data
uv run scraper
# Start web-accessible MCP server
uv run fastmcp-server
Server Access:
- MCP Endpoint: http://localhost:8000/mcp/
- For AI Clients: Connect any MCP-compatible client
- For Developers: Use the FastMCP Client library
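For example, here is a minimal sketch of connecting from Python with the FastMCP Client library. The tool name in the commented-out call is only an assumption; run list_tools() to see what this server actually exposes.
# Minimal FastMCP client sketch (assumes the fastmcp package is installed;
# the tool name "search_docs" below is an assumption, not a confirmed API).
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://localhost:8000/mcp/") as client:
        tools = await client.list_tools()
        print("Available tools:", [t.name for t in tools])
        # Example call once you know the real tool name and arguments:
        # result = await client.call_tool("search_docs", {"query": "agent tools"})
        # print(result)

asyncio.run(main())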
UV Commands:
uv sync # Install/update dependencies
uv run scraper # Run scraper directly
uv run fastmcp-server # Run web MCP server
uv run mcp-server # Run local MCP server (stdin/stdout)
uv add <package> # Add new dependency
uv remove <package> # Remove dependency
Option 2: Using Pre-indexed Data (Docker)
The docker/ directory contains a Docker Compose setup with pre-indexed Strands documentation:
# Navigate to the docker directory
cd docker/
# Start all services (Elasticsearch, Kibana, and data restoration)
docker-compose up -d
# Wait for services to start (about 30-60 seconds)
# Check status
docker-compose ps
# Elasticsearch will be available on port 9200
# Kibana GUI will be available on port 5601
This setup includes:
- Elasticsearch with pre-indexed Strands documentation
- Kibana for data visualization and exploration
- Automatic data restoration from es-data.tar.gz
Option 3: Traditional Setup (Root Directory)
For a fresh setup without pre-indexed data using traditional Python:
# Start Elasticsearch and Kibana (from root directory)
docker-compose up -d
# Then run the scraper to index documentation
python scraper/main.py
Running the Scraper Locally
Using pip
# Install dependencies
pip install -r requirements.txt
# Install Playwright browsers
playwright install chromium
# Run the scraper to index documentation
python scraper/main.py
Using uv
# Install dependencies
uv sync
# Install Playwright browsers
uv run playwright install chromium
# Run the scraper to index documentation
uv run python scraper/main.py
Running the MCP Server Locally
# Run the MCP server
python mcp_server/main.py
# The MCP server will be available on port 8000
Configuration
- Elasticsearch index: strands-agents-docs
- MCP server port: 8000
- Kibana GUI port: 5601
- Documentation source: https://strandsagents.com/latest/documentation/docs/
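As a quick sanity check of this configuration, you can query the index from Python. This is a sketch assuming the elasticsearch client package is installed and the scraper has already run:
# Verify that the strands-agents-docs index exists and contains documents.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
if es.indices.exists(index="strands-agents-docs"):
    count = es.count(index="strands-agents-docs")["count"]
    print(f"strands-agents-docs contains {count} documents")
else:
    print("Index not found - run the scraper first")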
Viewing Data with Kibana
Kibana provides a web-based GUI for exploring and visualizing your Elasticsearch data:
- Access Kibana: Open http://localhost:5601 in your browser (no login required)
- Create Index Pattern:
  - Go to Stack Management → Index Patterns
  - Create a new index pattern with strands-agents-docs
  - Select @timestamp as the time field if available
- Explore Data:
  - Use Discover to browse and search through scraped documentation
  - Use Dashboard to create visualizations of your data
  - Use Dev Tools to run Elasticsearch queries directly
Quick Data Exploration
- Discover Tab: View all indexed documents with full-text search
- Search: Use the search bar to find specific documentation content
- Filters: Apply filters to narrow down results by fields
- Time Range: Adjust time range to see when documents were indexed
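The same queries can be run outside Kibana. Below is a sketch of a full-text search using the Elasticsearch Python client; the field names ("content", "title") are assumptions about the scraped document schema, so adjust them to match what you see in Discover.
# Full-text search against the index, similar to the Kibana search bar.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="strands-agents-docs",
    query={"match": {"content": "agent tools"}},  # "content" is an assumed field name
    size=5,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))  # "title" is an assumed field name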
Usage with Amazon Q
Configure Amazon Q to use this MCP server by adding the server endpoint to your MCP configuration. See AMAZON_Q_INTEGRATION.md for detailed instructions.
Usage with Windsurf
Integrate with the Windsurf IDE for an enhanced development experience with the Strands Agents documentation. See WINDSURF_INTEGRATION.md for setup instructions.
Quick Windsurf Setup
# Copy the configuration file
cp windsurf-mcp-config.json ~/.windsurf/mcp-servers.json
# Restart Windsurf to load the MCP server
Development
Using pip
# Install dependencies
pip install -r requirements.txt
# Run scraper manually
ELASTICSEARCH_URL=http://localhost:9200 python scraper/main.py
# Run MCP server manually
ELASTICSEARCH_URL=http://localhost:9200 python mcp_server/main.py
# Run standalone (Python + Docker Elasticsearch)
python run_standalone.py
# Test the setup
python test_setup.py
Using uv
# Install dependencies
uv sync
# Run scraper manually
ELASTICSEARCH_URL=http://localhost:9200 uv run python scraper/main.py
# Run MCP server manually
ELASTICSEARCH_URL=http://localhost:9200 uv run python mcp_server/main.py
# Run standalone (Python + Docker Elasticsearch)
uv run python run_standalone.py
# Test the setup
uv run python test_setup.py
Integration Options
- Amazon Q Developer - See AMAZON_Q_INTEGRATION.md
- Windsurf IDE - See WINDSURF_INTEGRATION.md
- Custom MCP Client - Use the MCP server directly via the stdio protocol
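For a custom client, the sketch below uses the official MCP Python SDK to talk to the local server over stdio. It assumes the mcp package is installed and launches the stdio entry point documented above (uv run mcp-server); if you are running the web server instead, use the FastMCP client shown earlier.
# Minimal stdio MCP client sketch using the MCP Python SDK.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the local stdio server as a subprocess and connect over stdin/stdout.
    params = StdioServerParameters(command="uv", args=["run", "mcp-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

asyncio.run(main())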