MCP Server - Command Line Program Executor
MCP (Model Context Protocol) Server that executes command-line programs based on README file analysis, with OpenAI integration for intelligent configuration suggestions and log analysis.
🚀 Quick Start
Prerequisites
- Python 3.8+
- OpenAI API key (optional, for AI features)
- Linux or macOS (Windows support coming soon)
Installation
- Clone or extract the project:
  cd mcp_server_project
- Install dependencies:
  pip install -r requirements.txt
- Install the package in development mode:
  pip install -e .
Configuration
- Copy the example config:
  cp config.yaml.example config.yaml
- Edit config.yaml and add your OpenAI API key:
  openai:
    api_key: "your-openai-api-key-here"
    model: "gpt-4"
🔥 How to Run
Method 1: Command Line Interface
# Run the MCP server
python -m mcp_server.main
# Or use the CLI directly
mcp-server --help
Method 2: Python Module
# Run from the project directory
python -m mcp_server.cli
Method 3: Direct Script Execution
# Run the main server
python mcp_server/main.py
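The mcp-server command from Method 1 becomes available because pip install -e . registers a console entry point. A minimal sketch of how such an entry point is typically declared with setuptools, assuming mcp_server.cli exposes a main() function (both details are assumptions, not taken from the project):

# setup.py (illustrative sketch, not the project's actual file)
from setuptools import setup, find_packages

setup(
    name="mcp-server",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # assumes mcp_server/cli.py defines main()
            "mcp-server = mcp_server.cli:main",
        ],
    },
)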
📋 Usage Examples
Basic Usage
from mcp_server import MCPServer
# Initialize the server
server = MCPServer()
# Execute a command with README analysis
result = server.execute_command(
    prompt="Run git status and explain the output",
    readme_path="./README.md"
)
print(result)
Advanced Features
# Get AI-powered flag recommendations
flags = server.recommend_flags(
    command="docker run",
    context="development environment"
)
# Analyze logs with AI
analysis = server.analyze_logs(
    log_content="Error: permission denied",
    command="docker build"
)
🧪 Testing
Run the test suite:
python test_mcp_server.py
Or run specific tests:
python -m pytest test_mcp_server.py -v
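The contents of test_mcp_server.py are not reproduced here; a minimal pytest-style sketch against the MCPServer API used above (the asserted result shape is an assumption):

# illustrative test, not copied from test_mcp_server.py
from mcp_server import MCPServer

def test_execute_command_returns_a_result():
    server = MCPServer()
    result = server.execute_command(
        prompt="Run git status and explain the output",
        readme_path="./README.md",
    )
    # Adjust to the real return type; a non-None result is assumed here
    assert result is not None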
🛠️ Development
Project Structure
mcp_server/
├── __init__.py # Package initialization
├── main.py # Main server entry point
├── cli.py # Command line interface
├── config.py # Configuration management
├── command_executor.py # Command execution logic
├── readme_parser.py # README file analysis
├── openai_module.py # OpenAI API integration
├── log_analyzer.py # Log analysis with AI
├── flag_recommender.py # AI-powered flag suggestions
├── context_manager.py # Execution context management
├── security.py # Security and validation
├── models.py # Data models
└── api_gateway.py # API gateway (future)
Key Features
- 🤖 AI Integration: OpenAI-powered command suggestions and log analysis
- 📖 README Parsing: Automatically understands program usage from README files (see the sketch after this list)
- 🔍 Smart Log Analysis: AI detects success/failure patterns in command output
- 🚨 Security: Built-in command validation and sandboxing
- ⚡ Fast Execution: Efficient command execution with context management
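As referenced in the README Parsing item above, here is a rough idea of the kind of extraction readme_parser.py might perform. The regex and return shape are illustrative assumptions, not the module's actual API:

import re
from pathlib import Path

FENCE = "`" * 3  # a Markdown code fence

def extract_commands(readme_path):
    # Illustrative only: pull shell lines out of fenced bash/sh blocks in a README
    text = Path(readme_path).read_text(encoding="utf-8")
    pattern = re.compile(FENCE + r"(?:bash|sh|shell)?\n(.*?)" + FENCE, re.DOTALL)
    commands = []
    for block in pattern.findall(text):
        commands.extend(
            line.strip()
            for line in block.splitlines()
            if line.strip() and not line.strip().startswith("#")
        )
    return commands

print(extract_commands("./README.md"))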
🔧 Configuration Options
config.yaml Structure
# OpenAI Configuration
openai:
  api_key: "your-api-key"
  model: "gpt-4"
  max_tokens: 1000
  temperature: 0.7

# Security Settings
security:
  allowed_commands: ["git", "docker", "npm", "python"]
  blocked_commands: ["rm", "sudo", "chmod"]
  sandbox_mode: true

# Logging
logging:
  level: "INFO"
  file: "mcp_server.log"

# Database
database:
  url: "sqlite:///mcp_server.db"
🤝 API Reference
Core Methods
- execute_command(prompt, readme_path) - Execute command with README context
- recommend_flags(command, context) - Get AI flag recommendations
- analyze_logs(log_content, command) - Analyze command output
- parse_readme(file_path) - Extract command info from README
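Putting the four methods together, a typical flow might look like the sketch below; the return values and their types are assumptions:

from mcp_server import MCPServer

server = MCPServer()

# Understand the target program from its README
info = server.parse_readme("./README.md")

# Ask for recommended flags before running
flags = server.recommend_flags(command="git status", context="CI pipeline")

# Execute, then analyze the output
result = server.execute_command(prompt="Run git status", readme_path="./README.md")
analysis = server.analyze_logs(log_content=str(result), command="git status")
print(analysis)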
🐛 Troubleshooting
Common Issues
- ImportError: Make sure you installed with pip install -e .
- OpenAI API Error: Check your API key in config.yaml
- Permission Denied: Ensure proper file permissions for execution
- Database Error: Check if mcp_server.db exists and is writable
Debug Mode
# Run with debug logging
DEBUG=1 python -m mcp_server.main
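Internally, a flag like DEBUG=1 usually just raises the logging level; a minimal sketch of how that check might be wired (an assumption about the implementation, not confirmed by the project):

import logging
import os

# Raise verbosity when DEBUG=1 is set in the environment
level = logging.DEBUG if os.environ.get("DEBUG") == "1" else logging.INFO
logging.basicConfig(level=level, filename="mcp_server.log")
logging.getLogger("mcp_server").debug("Debug logging enabled")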
📈 Performance Tips
- Use SQLite database for caching README analysis results
- Enable OpenAI response caching for repeated queries
- Use async execution for multiple commands
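For the last tip, a minimal sketch of fanning out several blocking execute_command calls with asyncio, assuming the call is thread-safe (which the project does not state):

import asyncio

from mcp_server import MCPServer

server = MCPServer()

async def run_all(prompts):
    loop = asyncio.get_running_loop()
    # execute_command is assumed to be blocking, so each call is offloaded to a worker thread
    tasks = [
        loop.run_in_executor(
            None,
            lambda p=p: server.execute_command(prompt=p, readme_path="./README.md"),
        )
        for p in prompts
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_all(["Run git status", "Run git log -1"]))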
🔐 Security Notes
- Commands are validated before execution (see the validation sketch at the end of this section)
- Sensitive commands are blocked by default
- All executions are logged for audit
- README files are parsed safely without code execution
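The validation mentioned in the first note could look roughly like this; a minimal sketch built around the allowed_commands / blocked_commands lists from config.yaml (the real logic in security.py may differ):

def is_command_allowed(command, allowed, blocked):
    # Illustrative check: compare the executable name against the configured lists
    executable = command.strip().split()[0]
    if executable in blocked:
        return False
    return executable in allowed

# Lists taken from the config.yaml example above
allowed = ["git", "docker", "npm", "python"]
blocked = ["rm", "sudo", "chmod"]
print(is_command_allowed("git status", allowed, blocked))       # True
print(is_command_allowed("sudo rm -rf /tmp", allowed, blocked)) # False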
📚 Documentation
- PRD: mcp_server_prd.md - Product Requirements Document
- Architecture: mcp_server_system_design.md - System Design
- Diagrams: *.mermaid files - UML diagrams
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
📄 License
MIT License - see LICENSE file for details
Happy coding! 🎉
For more examples and advanced usage, check the example_usage.py file.