dilip8700/ai-coding-mcp-server
🤖 MCP Coding Server
A production-ready Model Context Protocol (MCP) server that provides comprehensive tools for AI models to interact with your system.
🚀 What is MCP?
MCP (Model Context Protocol) is a standard protocol that allows AI models to call functions on your system. This server provides tools for:
- File Operations: Read, write, search files
- System Commands: Execute terminal commands safely
- Web Scraping: Extract data from websites
- Code Analysis: Analyze and format code
- Git Operations: Version control commands
- Database Operations: SQL queries and commands
- AI Integration: Generate and analyze content
📋 Features
🔧 Core Tools
File Operations
- `file_read` - Read file contents
- `file_write` - Write content to files
- `file_search` - Search for files by pattern
- `file_list` - List directory contents
- `file_search_content` - Search text within files
- `file_info` - Get file information
System Operations
- `system_info` - Get system information
- `system_command` - Execute terminal commands
- `system_package` - Install Python packages
Web Operations
- `web_scrape` - Scrape webpage content
- `web_api` - Make HTTP API calls
Code Operations
- `code_analyze` - Analyze code for issues
- `code_format` - Format code
Git Operations
- `git_status` - Check git status
- `git_commit` - Commit changes
- `git_push` - Push to remote
Database Operations
- `db_query` - Execute SQL queries
- `db_execute` - Execute SQL commands
AI Operations
- `ai_generate` - Generate code/text
- `ai_analyze` - Analyze content
🛠️ Installation
Prerequisites
- Python 3.8 or higher
- pip (Python package manager)
Quick Start
1. Clone or download the server files

   ```shell
   # Make sure you have all the files in your directory
   ls -la
   ```

2. Install dependencies

   ```shell
   pip install -r requirements.txt
   ```

3. Set up environment variables (optional)

   ```shell
   export OPENAI_API_KEY="your-openai-api-key"
   export ANTHROPIC_API_KEY="your-anthropic-api-key"
   export GITHUB_TOKEN="your-github-token"
   ```

4. Run the server

   ```shell
   python server.py
   ```
🔧 Configuration
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key for AI tools | None |
| `ANTHROPIC_API_KEY` | Anthropic API key for AI tools | None |
| `GITHUB_TOKEN` | GitHub token for git operations | None |
| `MCP_SERVER_HOST` | Server host | `localhost` |
| `MCP_SERVER_PORT` | Server port | `8000` |
| `MCP_BASE_PATH` | Base directory for file operations | Current directory |
| `MCP_WORKING_DIR` | Working directory for commands | Current directory |
| `MCP_DATA_DIR` | Data directory | `./data` |
| `MCP_RATE_LIMIT` | Requests allowed per minute | `60` |
| `MCP_MAX_FILE_SIZE` | Max file size in MB | `100` |
| `MCP_LOG_LEVEL` | Logging level | `INFO` |
| `MCP_LOG_FILE` | Log file path | `mcp_server.log` |
| `MCP_METRICS_ENABLED` | Enable metrics collection | `true` |
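As a rough sketch, `config.py` might read these variables like so (names mirror the table; the actual implementation may differ):

```python
import os

def load_config() -> dict:
    """Read server settings from the environment, using the documented defaults."""
    return {
        "host": os.environ.get("MCP_SERVER_HOST", "localhost"),
        "port": int(os.environ.get("MCP_SERVER_PORT", "8000")),
        "base_path": os.environ.get("MCP_BASE_PATH", os.getcwd()),
        "rate_limit": int(os.environ.get("MCP_RATE_LIMIT", "60")),
        "max_file_size_mb": int(os.environ.get("MCP_MAX_FILE_SIZE", "100")),
        "log_level": os.environ.get("MCP_LOG_LEVEL", "INFO"),
        "metrics_enabled": os.environ.get("MCP_METRICS_ENABLED", "true").lower() == "true",
    }
```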
Security Features
- Rate Limiting: Prevents abuse with configurable limits
- Command Validation: Blocks dangerous system commands
- Path Validation: Ensures files are within allowed directories
- File Size Limits: Prevents large file operations
- Extension Filtering: Only allows safe file types
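The path-validation protection above can be sketched in a few lines (an illustrative check, not the server's exact code):

```python
from pathlib import Path

def is_path_allowed(requested: str, base_dir: str) -> bool:
    """Return True only if `requested` resolves to a location inside `base_dir`.

    Resolving before comparing defeats `../` traversal and absolute-path escapes.
    """
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    return target == base or base in target.parents
```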
📖 Usage
Starting the Server
```shell
# Basic start
python server.py

# With custom configuration
MCP_BASE_PATH=/path/to/workspace python server.py

# With logging
MCP_LOG_LEVEL=DEBUG python server.py
```
Connecting from AI Models
The server communicates via the MCP protocol. AI models can connect and call tools like this:
An example tool call from an AI model:

```json
{
  "method": "tools/call",
  "params": {
    "name": "file_read",
    "arguments": {
      "path": "main.py"
    }
  }
}
```
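For illustration, a request body like the one above can be built and serialized from Python (a sketch; a real client would send it over the server's MCP transport rather than just printing it):

```python
import json

def build_tool_call(name: str, arguments: dict) -> str:
    """Serialize a tools/call request body like the example above."""
    return json.dumps(
        {"method": "tools/call", "params": {"name": name, "arguments": arguments}},
        indent=2,
    )
```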
Tool Examples
Read a File
```json
{
  "name": "file_read",
  "arguments": {
    "path": "src/main.py",
    "encoding": "utf-8"
  }
}
```
Write a File
```json
{
  "name": "file_write",
  "arguments": {
    "path": "new_file.py",
    "content": "print('Hello, World!')"
  }
}
```
Execute Command
```json
{
  "name": "system_command",
  "arguments": {
    "command": "ls -la",
    "timeout": 30
  }
}
```
Search Files
```json
{
  "name": "file_search",
  "arguments": {
    "pattern": "*.py",
    "recursive": true
  }
}
```
Scrape Website
```json
{
  "name": "web_scrape",
  "arguments": {
    "url": "https://example.com"
  }
}
```
🏗️ Architecture
Project Structure
```
mcp_coding_server/
├── server.py            # Main MCP server
├── config.py            # Configuration management
├── requirements.txt     # Python dependencies
├── README.md            # This file
├── tools/               # Tool implementations
│   ├── __init__.py
│   ├── file_tools.py    # File operations
│   ├── system_tools.py  # System commands
│   ├── web_tools.py     # Web scraping
│   ├── code_tools.py    # Code analysis
│   ├── git_tools.py     # Git operations
│   ├── database_tools.py # Database operations
│   └── ai_tools.py      # AI integration
└── utils/               # Utilities
    ├── __init__.py
    ├── logger.py        # Logging setup
    ├── security.py      # Security manager
    └── metrics.py       # Metrics collection
```
Core Components
- MCPServer: Main server class that handles MCP protocol
- Tool Classes: Individual tool implementations
- SecurityManager: Handles security and rate limiting
- MetricsCollector: Collects usage metrics
- Config: Manages configuration and environment variables
🔒 Security
Built-in Protections
- Command Blocking: Dangerous commands are automatically blocked
- Path Validation: All file operations are restricted to allowed directories
- Rate Limiting: Prevents abuse with configurable limits
- Input Sanitization: All inputs are sanitized
- Error Handling: Comprehensive error handling and logging
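Rate limiting of the kind described here is commonly implemented with a sliding window; a minimal sketch (the actual SecurityManager may differ):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            return False
        self.calls.append(now)
        return True
```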
Blocked Commands
- `rm -rf /` - Dangerous file deletion
- `format c:` - Disk formatting
- `sudo` - Privilege escalation
- `chmod 777` - Dangerous permissions
- And many more...
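A command blocklist like this boils down to a substring check before execution (a sketch; the patterns below are only the examples listed above, and the server's real blocklist is larger):

```python
# Illustrative subset of dangerous patterns; the actual blocklist is larger.
BLOCKED_PATTERNS = ("rm -rf /", "format c:", "sudo", "chmod 777")

def is_command_blocked(command: str) -> bool:
    """Reject a command if it contains any known-dangerous pattern."""
    lowered = command.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)
```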
📊 Monitoring
Metrics Collection
The server automatically collects metrics including:
- Request counts
- Response times
- Error rates
- Tool usage statistics
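A collector for these metrics can be sketched as a few per-tool counters (illustrative; the real MetricsCollector may track more):

```python
from collections import defaultdict

class MetricsCollector:
    """Track per-tool call counts, errors, and total latency."""

    def __init__(self):
        self.counts = defaultdict(int)
        self.errors = defaultdict(int)
        self.total_seconds = defaultdict(float)

    def record(self, tool: str, seconds: float, ok: bool = True):
        self.counts[tool] += 1
        self.total_seconds[tool] += seconds
        if not ok:
            self.errors[tool] += 1

    def summary(self, tool: str) -> dict:
        n = self.counts[tool]
        return {
            "calls": n,
            "error_rate": self.errors[tool] / n if n else 0.0,
            "avg_seconds": self.total_seconds[tool] / n if n else 0.0,
        }
```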
Logging
Comprehensive logging to both console and file:
- Request/response logging
- Error tracking
- Security events
- Performance metrics
🚀 Production Deployment
Docker (Recommended)
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "server.py"]
```
Environment Setup
```shell
# Production environment variables
export MCP_LOG_LEVEL=WARNING
export MCP_METRICS_ENABLED=true
export MCP_RATE_LIMIT=30
export MCP_MAX_FILE_SIZE=50
```
🔧 Development
Adding New Tools
1. Create a new tool class in `tools/`
2. Implement the `get_tools()` method
3. Implement the `handle_tool_call()` method
4. Register the tool in `server.py`
Example Tool

```python
from typing import Any, Dict, List

from mcp.types import Tool

class MyTool:
    def get_tools(self) -> List[Tool]:
        return [
            Tool(
                name="my_tool",
                description="My custom tool",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "param": {"type": "string"}
                    },
                },
            )
        ]

    async def handle_tool_call(self, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
        if tool_name == "my_tool":
            return await self._my_tool_function(arguments)
        raise ValueError(f"Unknown tool: {tool_name}")
```
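Registration in `server.py` can then be as simple as keeping a list of tool objects and routing each call to the one that owns the name. A self-contained sketch (the `EchoTool` stand-in and the `get_tool_names()` helper are hypothetical, and the real server's dispatch may differ):

```python
import asyncio
from typing import Any, Dict

class EchoTool:
    """A stand-in tool used only to illustrate dispatch; not part of the server."""

    def get_tool_names(self):
        return ["echo"]

    async def handle_tool_call(self, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
        return {"echoed": arguments}

async def dispatch(tools, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    """Route a tools/call request to whichever registered tool owns the name."""
    for tool in tools:
        if tool_name in tool.get_tool_names():
            return await tool.handle_tool_call(tool_name, arguments)
    raise ValueError(f"Unknown tool: {tool_name}")

result = asyncio.run(dispatch([EchoTool()], "echo", {"msg": "hi"}))
```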
🐛 Troubleshooting
Common Issues
1. Import Errors: Make sure all dependencies are installed

   ```shell
   pip install -r requirements.txt
   ```

2. Permission Errors: Check file and directory permissions

   ```shell
   chmod +x server.py
   ```

3. Port Already in Use: Change the port in configuration

   ```shell
   export MCP_SERVER_PORT=8001
   ```

4. API Key Issues: Verify your API keys are set correctly

   ```shell
   echo $OPENAI_API_KEY
   ```
Debug Mode
```shell
MCP_LOG_LEVEL=DEBUG python server.py
```
📝 License
This project is licensed under the MIT License.
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
📞 Support
For issues and questions:
- Check the troubleshooting section
- Review the logs in `mcp_server.log`
- Enable debug logging
- Create an issue with detailed information
Happy coding with your AI assistant! 🚀