AI Agent
An agent built with Streamlit, LangChain, and MCP tools that can query GitHub.
GitHub API Setup
To use the GitHub search functionality, you need to set up a GitHub API token:
- Create a GitHub Personal Access Token:
  - Go to GitHub Settings > Developer settings > Personal access tokens
  - Click "Generate new token (classic)"
  - Select scopes: public_repo (for public repositories) or repo (for private repositories)
  - Copy the generated token
- Configure the token:
  - Create a .env file in the project root
  - Add your token: GITHUB_TOKEN=your_actual_token_here
  - The .env file is already in .gitignore for security
Docker Setup
To run all components using Docker Compose:
# Build and start all services
docker-compose up --build
# Run in background
docker-compose up -d --build
# Stop all services
docker-compose down
Services
- Ollama (port 11434): LLM service (on macOS, run a local install instead, since Ollama in Docker is slow and cannot use the Mac GPU)
- MCP Server (port 3001): Search service (an MCP tool for the AI agent)
- API Server (port 3002): FastAPI backend (to host the AI agent)
- Streamlit App (port 8501): Web interface (largely vibe-coded as a throwaway prototype for testing)
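The four services above could be wired together in a docker-compose.yml along these lines (a sketch only — the service names, build contexts, and image tag are assumptions, not the project's actual file):

```yaml
services:
  ollama:
    image: ollama/ollama        # official image; on macOS prefer a local install
    ports:
      - "11434:11434"
  mcp-server:
    build: ./mcp_server         # assumed build context
    ports:
      - "3001:3001"
  api-server:
    build: ./api_server         # assumed build context
    ports:
      - "3002:3002"
    depends_on:
      - mcp-server
      - ollama
  streamlit-app:
    build: ./streamlit_app      # assumed build context
    ports:
      - "8501:8501"
    depends_on:
      - api-server
```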
Access Points
- Web UI: http://localhost:8501
- API: http://localhost:3002
- Ollama: http://localhost:11434
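After docker-compose up, a quick way to verify the endpoints are reachable is a plain TCP connect check. A small helper sketch (not part of the project):

```python
import socket

def service_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check each access point listed above
for name, port in [("Streamlit", 8501), ("API", 3002), ("Ollama", 11434)]:
    print(f"{name}: {'up' if service_up('localhost', port) else 'down'}")
```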
First Run
You need to install Ollama locally:
# For Mac
brew install ollama
# For Linux
curl -fsSL https://ollama.com/install.sh | sh
# For Windows
# Download and install from: https://ollama.com/download/windows
You'll also need to pull a model in Ollama:
# qwen:4b is small enough and has reasoning capabilities
ollama pull qwen:4b
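Once the model is pulled, you can sanity-check it against Ollama's REST API (http://localhost:11434/api/generate, which accepts model, prompt, and stream fields). This sketch only builds the request; sending it requires Ollama to be running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "qwen:4b") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With Ollama running:
# json.load(urllib.request.urlopen(build_generate_request("hi")))["response"]
```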