GitHub Docs MCP Server
A Model Context Protocol (MCP) server that provides AI-powered Q&A capabilities for GitHub repositories using OpenAI GPT-4 or GPT-3.5-turbo.
🤖 OpenAI Integration
This server integrates with OpenAI's GPT models to provide intelligent, context-aware answers to questions about GitHub repositories. The AI analyzes repository documentation, code, and issues to generate comprehensive responses.
Features
- AI-Powered Responses: Uses GPT-4 or GPT-3.5-turbo to generate intelligent answers
- Context-Aware: Analyzes repository docs, code snippets, and issues
- Markdown Formatting: Returns well-formatted answers with proper structure
- Source Links: Includes links back to original GitHub files and issues
- Fallback Support: Gracefully falls back to template responses if OpenAI API fails
- Configurable: Support for different OpenAI models and parameters
Setup
1. Get an OpenAI API Key
   - Visit the OpenAI API platform
   - Create a new API key
   - Ensure you have credits available

2. Configure Environment Variables (a loading sketch follows these steps)

   # Copy the example environment file
   cp .env.example .env

   # Edit .env and add your keys
   OPENAI_API_KEY=your_openai_api_key_here
   GITHUB_TOKEN=your_github_token_here

   # Optional: Configure model preferences
   OPENAI_MODEL=gpt-4  # or gpt-3.5-turbo
   OPENAI_MAX_TOKENS=1500
   OPENAI_TEMPERATURE=0.3

3. Install Dependencies

   pip install -r requirements.txt

4. Validate Installation

   python validate_openai.py
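For reference, a minimal sketch of how these variables might be read at startup, assuming the python-dotenv package (the server's actual loading code may differ):

# Illustrative only: not this repo's actual startup code.
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the working directory

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]      # required
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]          # required
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4")  # optional, with defaults
OPENAI_MAX_TOKENS = int(os.getenv("OPENAI_MAX_TOKENS", "1500"))
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0.3"))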
🚀 Usage
Start the Server
python -m src.github_docs_mcp.main
The server will start on http://localhost:8000
Check Service Status
curl http://localhost:8000/status
This endpoint shows the status of all services including OpenAI integration:
{
"server": {
"name": "github-docs-qa",
"version": "1.0.0",
"status": "running"
},
"services": {
"openai_service": {
"status": "healthy",
"model": "gpt-4",
"features": {
"ai_answers": true,
"fallback_answers": true
}
}
}
}
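The same check from Python, assuming the requests library and the field names shown above:

# Hedged sketch: probe /status and report whether AI answers are active.
import requests

status = requests.get("http://localhost:8000/status", timeout=5).json()
openai_svc = status["services"]["openai_service"]

if openai_svc["status"] == "healthy":
    print(f"AI answers enabled, model: {openai_svc['model']}")
else:
    print("OpenAI integration degraded; template fallbacks will be served")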
Ask Questions
Send POST requests to /ask with repository questions:
curl -X POST http://localhost:8000/ask \
-H "Content-Type: application/json" \
-d '{
"repository": "microsoft/vscode",
"question": "How do I create a VS Code extension?",
"include_code": true,
"include_issues": false,
"max_results": 5
}'
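The equivalent request from Python (assuming the requests library):

# Post a question to /ask and print the Markdown answer.
import requests

payload = {
    "repository": "microsoft/vscode",
    "question": "How do I create a VS Code extension?",
    "include_code": True,
    "include_issues": False,
    "max_results": 5,
}
resp = requests.post("http://localhost:8000/ask", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["answer"])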
Response Format
The AI generates responses in Markdown format with:
- Structured answers with headers and sections
- Code examples with proper syntax highlighting
- Source links to original GitHub files
- Contextual information based on repository content
Example response:
{
"question": "How do I create a VS Code extension?",
"answer": "# Creating a VS Code Extension\n\nTo create a VS Code extension, you'll need to...\n\n## Getting Started\n\n1. Install the required tools\n2. Generate the extension scaffold\n3. Configure your extension\n\n## ๐ Sources\n\n1. **Documentation:** [extension-authoring.md](https://github.com/microsoft/vscode/blob/main/docs/extension-authoring.md)\n2. **Code:** [example-extension.js](https://github.com/microsoft/vscode/blob/main/examples/example-extension.js)",
"repository": "microsoft/vscode",
"sources": [...],
"confidence": 0.85,
"processing_time_ms": 2500
}
🔄 Fallback Behavior
When the OpenAI API is unavailable or a call fails, the server automatically falls back to template-based responses:
- No API Key: Uses structured templates with source content
- API Errors: Gracefully handles rate limits and other errors
- Network Issues: Continues serving requests with fallback responses
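A minimal sketch of this try-then-fall-back pattern, assuming the openai v1 SDK; the function and the template helper are hypothetical stand-ins, not this server's actual code:

from openai import OpenAI, OpenAIError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_question(question: str, context: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Answer using the repository context."},
                {"role": "user", "content": f"{context}\n\n{question}"},
            ],
            max_tokens=1500,
            temperature=0.3,
        )
        return response.choices[0].message.content
    except OpenAIError:
        # Rate limits, auth problems, network failures: serve a template.
        return render_template_answer(question, context)  # hypothetical helper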
🧪 Testing
Test the OpenAI integration:
# Run validation checks
python validate_openai.py
# Test integration with live server
python test_openai_integration.py
# Run full test suite
python test_runner.py
📊 Monitoring
Monitor OpenAI usage through:
- Service Status: The /status endpoint shows AI service health
- Server Logs: Detailed logging of AI interactions
- Response Analysis: Check for AI vs. fallback responses
⚙️ Configuration
OpenAI Models
Supported models:
- gpt-4 (recommended, higher quality)
- gpt-3.5-turbo (faster, lower cost)
Parameters
- Max Tokens: Control response length (default: 1500)
- Temperature: Control creativity (default: 0.3 for factual responses)
- Timeout: API call timeout (configured automatically)
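How these three parameters might map onto an openai v1 SDK call (the defaults mirror those documented above; the 30-second timeout value is an assumption):

from openai import OpenAI

client = OpenAI(timeout=30.0)  # per-request timeout, in seconds

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[{"role": "user", "content": "Summarize the README."}],
    max_tokens=1500,        # Max Tokens: caps response length
    temperature=0.3,        # Temperature: low for factual answers
)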
Rate Limiting
The server respects OpenAI rate limits and handles:
- Rate limit errors: Automatic fallback to templates
- Token limits: Smart content truncation
- Cost management: Configurable token limits
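One possible shape of the content truncation mentioned above, as a character-budget sketch (illustrative; the server's actual strategy is not documented here):

def truncate_context(docs: list[str], max_chars: int = 12_000) -> str:
    """Greedily pack whole documents until the budget is spent."""
    parts, used = [], 0
    for doc in docs:
        if used + len(doc) > max_chars:
            remaining = max_chars - used
            if remaining > 200:  # only keep a fragment if it is still useful
                parts.append(doc[:remaining])
            break
        parts.append(doc)
        used += len(doc)
    return "\n\n---\n\n".join(parts)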
🔧 Troubleshooting
Common Issues
1. "OpenAI service unavailable"
   - Check the API key in the .env file
   - Verify your OpenAI account has credits
   - Test the API key with OpenAI directly

2. Slow responses
   - Try gpt-3.5-turbo for faster responses
   - Reduce max_results in requests
   - Check network connectivity

3. Rate limit errors
   - Upgrade your OpenAI plan for higher limits
   - Implement request queuing if needed (see the sketch after this list)
   - Monitor usage in the OpenAI dashboard
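For the request-queuing suggestion above, a minimal client-side throttle could cap how many questions are in flight at once (assuming the requests library; names are illustrative):

from concurrent.futures import ThreadPoolExecutor

import requests

def ask(payload: dict) -> dict:
    resp = requests.post("http://localhost:8000/ask", json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

def ask_all(payloads: list[dict], max_in_flight: int = 2) -> list[dict]:
    # At most max_in_flight concurrent requests hit the server (and OpenAI).
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(ask, payloads))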
Debug Mode
Enable detailed logging:
DEBUG=true LOG_LEVEL=debug python -m src.github_docs_mcp.main
🛡️ Security
- API Keys: Never commit API keys to version control
- Environment Variables: Store sensitive data in .env files
- Request Validation: All inputs are validated and sanitized (see the sketch after this list)
- Error Handling: Sensitive information is not exposed in error messages
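As one illustration of request validation, a pydantic-v2-style model for the /ask payload could look like this (field names follow the documented request; this is not necessarily the server's actual model):

from pydantic import BaseModel, Field

class AskRequest(BaseModel):
    repository: str = Field(pattern=r"^[\w.-]+/[\w.-]+$")  # owner/repo only
    question: str = Field(min_length=1, max_length=2000)
    include_code: bool = True
    include_issues: bool = False
    max_results: int = Field(default=5, ge=1, le=20)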
📈 Performance
Typical response times:
- GPT-4: 2-5 seconds
- GPT-3.5-turbo: 1-3 seconds
- Fallback: < 1 second
Optimization tips:
- Use gpt-3.5-turbo for speed-critical applications
- Limit max_results to reduce context size
- Cache frequently asked questions (see the sketch after this list)
- Monitor token usage to optimize costs
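A sketch of the caching tip on the client side, using functools.lru_cache (illustrative; a production cache would also expire entries):

from functools import lru_cache

import requests

@lru_cache(maxsize=256)
def cached_ask(repository: str, question: str) -> str:
    # Identical (repository, question) pairs are answered from memory.
    resp = requests.post(
        "http://localhost:8000/ask",
        json={"repository": repository, "question": question},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["answer"]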
🤝 Contributing
When contributing to OpenAI features:
- Test with both GPT-4 and GPT-3.5-turbo
- Ensure fallback behavior works correctly
- Add appropriate error handling
- Update documentation and examples
- Test with various repository types and question formats
📄 License
MIT License - see the LICENSE file for details.