Codedelite MCP Server

A Model Context Protocol (MCP) server that enhances AI coding assistance by wrapping user queries with comprehensive coding standards and automatically logging request-response pairs for analysis and training data collection.

🚀 Features

  • Standards Enhancement: Automatically wraps user queries with your custom coding standards
  • Request-Response Logging: Captures complete conversation pairs for analysis
  • Session Tracking: Links requests and responses with unique session IDs
  • Metadata Collection: Tracks response metadata (model, tokens, timing, etc.)
  • Automatic Client Instructions: Provides clear instructions for AI clients to log responses back

📋 Table of Contents

  • Installation
  • Configuration
  • Usage
  • Architecture
  • Logging System
  • Example Logs
  • API Reference
  • Development
  • Benefits
  • Future Enhancements
  • License
  • Support

🛠 Installation

Prerequisites

  • Python 3.11+
  • VS Code with MCP extension
  • Virtual environment (recommended)

Setup

  1. Clone or download the server files

    # Ensure you have the server.py file in your project directory
    
  2. Create and activate virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install dependencies

    pip install fastmcp pydantic
    
  4. Test the server

    python server.py
    

⚙️ Configuration

VS Code MCP Configuration

Add the following to your VS Code mcp.json configuration file:

{
  "servers": {
    "codedelite": {
      "type": "stdio",
      "command": "/Users/kanavkahol/work/codeMCP/venv/bin/python",
      "args": ["-u", "/Users/kanavkahol/work/codeMCP/server.py"],
      "env": {
        "CODEDELITE_LOG_PATH": "/Users/kanavkahol/work/codeMCP/.codedelite/codedelite.log.jsonl"
      }
    }
  }
}

Important: Update the paths to match your actual installation directory.

Environment Variables

  • CODEDELITE_LOG_PATH: Path to the log file (default: ./codedelite.log.jsonl)
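
A minimal sketch of how that default would typically be resolved (the actual lookup in server.py is not reproduced in this README):

import os
from pathlib import Path

# Fall back to ./codedelite.log.jsonl when CODEDELITE_LOG_PATH is not set.
log_path = Path(os.environ.get("CODEDELITE_LOG_PATH", "./codedelite.log.jsonl"))
log_path.parent.mkdir(parents=True, exist_ok=True)  # create the log directory if needed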

🎯 Usage

Basic Workflow

  1. User Query: User asks a coding question in VS Code
  2. Standards Enhancement: Server wraps the query with your coding standards
  3. AI Response: AI generates a response following the enhanced prompt
  4. Automatic Logging: Client automatically logs the response back to the server
  5. Data Collection: Complete request-response pairs are stored for analysis

Example Interaction

User Query: "what is pytest. give me a getting started guide."

Server Response: Enhanced prompt with standards + instructions to log response

AI Response: Comprehensive pytest guide

Automatic Logging: Response is logged back with metadata

🏗 Architecture

Core Components

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   VS Code       │    │  Codedelite      │    │   AI Client     │
│   (User)        │    │  MCP Server      │    │   (Copilot)     │
└─────────────────┘    └──────────────────┘    └─────────────────┘
         │                       │                       │
         │ 1. User Query         │                       │
         ├──────────────────────►│                       │
         │                       │                       │
         │                       │ 2. Enhanced Prompt    │
         │                       ├──────────────────────►│
         │                       │                       │
         │                       │ 3. AI Response        │
         │                       │◄──────────────────────┤
         │                       │                       │
         │                       │ 4. Log Response       │
         │                       │◄──────────────────────┤
         │                       │                       │
         │ 5. Final Response     │                       │
         │◄──────────────────────┤                       │

Tools Available

  1. generate: Enhances user queries with coding standards
  2. log_response: Captures AI responses for analysis
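
The server source is not reproduced in this README, but here is a minimal sketch of how two such tools could be declared with FastMCP. The tool names and log fields follow the examples later in this document; the helper _append_log, the placeholder HOUSE_BLOCK, and the simplified return values are illustrative rather than the actual server.py implementation.

import json
import os
import uuid
from datetime import datetime, timezone
from pathlib import Path

from fastmcp import FastMCP

mcp = FastMCP("codedelite")
LOG_PATH = Path(os.environ.get("CODEDELITE_LOG_PATH", "./codedelite.log.jsonl"))

# Placeholder for the house standards; the real block lives in server.py (see Customization).
HOUSE_BLOCK = "Please adhere to all the following standards and behaviors: ..."


def _append_log(entry: dict) -> None:
    # One JSON object per line (JSONL), appended to the configured log file.
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


@mcp.tool()
def generate(query: str) -> dict:
    """Wrap a user query with the house standards and log the request."""
    session_id = str(uuid.uuid4())
    augmented_prompt = f"{HOUSE_BLOCK}\n\nUser request:\n{query}"
    _append_log({
        "ts": datetime.now(timezone.utc).isoformat(),
        "version": "1.0.0",
        "tool": "codedelite.generate",
        "type": "request",
        "query": query,
        "session_id": session_id,
        "augmented_prompt_len": len(augmented_prompt),
    })
    # The real server also returns a suggested_copilot_message and a meta block (see API Reference).
    return {"augmented_prompt": augmented_prompt, "session_id": session_id}


@mcp.tool()
def log_response(original_query: str, response: str, session_id: str,
                 response_metadata: dict | None = None) -> dict:
    """Log the AI response so it can be paired with its request via session_id."""
    _append_log({
        "ts": datetime.now(timezone.utc).isoformat(),
        "version": "1.0.0",
        "tool": "codedelite.log_response",
        "type": "response",
        "original_query": original_query,
        "response": response,
        "session_id": session_id,
        "response_metadata": response_metadata or {},
        "response_length": len(response),
    })
    return {"success": True, "message": "response logged"}


if __name__ == "__main__":
    mcp.run()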

📊 Logging System

Log Structure

The server writes structured JSON Lines logs (one JSON object per line) with the following fields:

Request Logs
{
  "ts": "2025-09-04T17:47:11.302955+00:00",
  "version": "1.0.0",
  "tool": "codedelite.generate",
  "type": "request",
  "query": "user query here",
  "session_id": "unique-uuid",
  "augmented_prompt_len": 1054
}

Response Logs
{
  "ts": "2025-09-04T17:53:54.605861+00:00",
  "version": "1.0.0",
  "tool": "codedelite.log_response",
  "type": "response",
  "original_query": "original user query",
  "response": "AI generated response",
  "session_id": "matching-uuid",
  "response_metadata": {
    "model": "claude-3-5-sonnet",
    "tokens_used": 8500,
    "response_time_ms": 15000
  },
  "response_length": 6579
}

Log Fields Explained

Field                 Type    Description
ts                    string  ISO timestamp of the event
version               string  Server version
tool                  string  Tool that generated the log entry
type                  string  "request" or "response"
query                 string  Original user query (original_query in response logs)
session_id            string  UUID linking request-response pairs
augmented_prompt_len  number  Length of the enhanced prompt (request logs only)
response              string  AI-generated response (response logs only)
response_metadata     object  Additional response information
response_length       number  Character count of the response
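
Because request and response entries share a session_id, pairing them for analysis takes only a few lines. A minimal sketch, assuming the log path configured earlier (.codedelite/codedelite.log.jsonl):

import json
from collections import defaultdict

pairs = defaultdict(dict)
with open(".codedelite/codedelite.log.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        pairs[entry["session_id"]][entry["type"]] = entry  # keyed by "request" / "response"

for session_id, pair in pairs.items():
    if "request" in pair and "response" in pair:
        print(session_id, pair["request"]["query"], pair["response"]["response_length"])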

📝 Example Logs

Here's a real example from the log file showing a complete request-response pair:

Request Log Entry

{
  "ts": "2025-09-04T17:47:11.302955+00:00",
  "version": "1.0.0",
  "tool": "codedelite.generate",
  "type": "request",
  "query": "what is pytest. give me a getting started guide.",
  "session_id": "e7953783-9425-43a2-b538-e6a41a81898c",
  "augmented_prompt_len": 1054
}

Response Log Entry

{
  "ts": "2025-09-04T17:53:54.605861+00:00",
  "version": "1.0.0",
  "tool": "codedelite.log_response",
  "type": "response",
  "original_query": "what is pytest. give me a getting started guide.",
  "response": "# Pytest Getting Started Guide\n\n## Plan\nI've provided a comprehensive pytest guide covering:\n1. **What pytest is** and its key advantages\n2. **Installation and setup** process\n3. **Core testing concepts** with practical examples\n4. **Advanced features** like fixtures and parametrization\n5. **Best practices** and testing patterns\n\n## What is Pytest?\n\n**Pytest** is a mature, feature-rich Python testing framework that makes it easy to write simple and scalable test cases. It's designed to support simple unit testing as well as complex functional testing for applications and libraries.\n\n### Key Advantages:\n- **Simple syntax**: Write tests with plain `assert` statements\n- **Auto-discovery**: Automatically finds and runs test files\n- **Rich plugin ecosystem**: Extensive third-party plugins\n- **Detailed failure reporting**: Clear, informative test failure messages\n- **Fixtures**: Powerful dependency injection system\n- **Parametrization**: Run tests with multiple input sets\n\n## Complete Implementation\n\nI've created a comprehensive pytest demonstration project with:\n\n### Core Files Created:\n1. **`pytest_demo/calculator.py`** - A feature-rich calculator class with proper error handling and documentation\n2. **`pytest_demo/test_calculator.py`** - Comprehensive test suite demonstrating all pytest features\n3. **`pytest_demo/test_advanced.py`** - Advanced testing patterns including mocking, fixtures, and performance testing\n4. **`pytest_demo/conftest.py`** - Shared fixtures and pytest configuration\n5. **`pyproject.toml`** - Project configuration with pytest settings\n6. **`PYTEST_GUIDE.md`** - Complete documentation and tutorial\n7. **`run_tests.py`** - Test runner script with different execution options\n\n### Key Features Demonstrated:\n\n#### 1. **Basic Testing Patterns**\n```python\ndef test_add_positive_numbers(calculator):\n    result = calculator.add(5, 3)\n    assert result == 8\n```\n\n#### 2. **Exception Testing**\n```python\ndef test_divide_by_zero_raises_value_error(calculator):\n    with pytest.raises(ValueError, match=\"Cannot divide by zero\"):\n        calculator.divide(10, 0)\n```\n\n#### 3. **Parametrized Tests**\n```python\n@pytest.mark.parametrize(\"a, b, expected\", [\n    (1, 2, 3),\n    (0, 0, 0),\n    (-1, 1, 0),\n])\ndef test_add_parametrized(calculator, a, b, expected):\n    result = calculator.add(a, b)\n    assert result == expected\n```\n\n#### 4. **Fixtures for Dependency Injection**\n```python\n@pytest.fixture\ndef calculator():\n    return Calculator()\n\n@pytest.fixture(scope=\"session\")\ndef temp_dir():\n    with tempfile.TemporaryDirectory() as tmp_dir:\n        yield Path(tmp_dir)\n```\n\n#### 5. 
**Test Markers and Categories**\n```python\n@pytest.mark.slow\ndef test_large_factorial():\n    result = factorial(20)\n    assert result == 2432902008176640000\n\n@pytest.mark.integration\ndef test_complex_calculation_chain(calculator):\n    # Test multiple operations together\n```\n\n### Live Demo Results:\n\nI executed the tests to show pytest in action:\n\n**✅ 56 tests passed** with detailed output showing:\n- Test discovery and execution\n- Clear pass/fail indicators\n- Coverage analysis (97% for calculator.py)\n- Parametrized test execution\n- Error reporting for intentional failures\n\n### Command Examples:\n\n```bash\n# Run all tests\npytest\n\n# Run with verbose output\npytest -v\n\n# Run specific test file\npytest test_calculator.py\n\n# Run tests matching pattern\npytest -k \"add\"\n\n# Run with coverage\npytest --cov=src --cov-report=html\n\n# Skip slow tests\npytest -m \"not slow\"\n\n# Run only unit tests\npytest -m \"unit\"\n```\n\n## Design Choices & Trade-offs\n\n1. **Comprehensive Calculator Example**: Chose a calculator as it's familiar yet complex enough to demonstrate all pytest features\n2. **Multiple Test Categories**: Organized tests into logical groups (basic, exceptions, parametrized, etc.)\n3. **Fixture Scoping**: Demonstrated different fixture scopes for optimal performance\n4. **Configuration**: Used pyproject.toml for modern Python project standards\n5. **Error Handling**: Included comprehensive exception testing\n\n## Unit Tests Included\n\nThe project includes extensive unit tests covering:\n- ✅ **56 test functions** across multiple test classes\n- ✅ **Basic operations** (add, subtract, multiply, divide)\n- ✅ **Edge cases** (infinity, NaN, very large numbers)\n- ✅ **Exception handling** with proper error messages\n- ✅ **Parametrized tests** with multiple input sets\n- ✅ **Fixture usage** for setup and teardown\n- ✅ **Mocking examples** for external dependencies\n- ✅ **History tracking** and state management\n- ✅ **Performance considerations** for slow operations\n\n### Running Instructions:\n```bash\n# Install dependencies\npip install pytest pytest-cov pytest-mock\n\n# Run basic tests\npython run_tests.py --basic\n\n# Run all tests with coverage\npython run_tests.py --all\n\n# Generate HTML reports\npython run_tests.py --reports\n```\n\n## Alternative Implementations\n\n### Alternative 1: Simpler Structure\n**Pros**: Easier to understand for beginners\n**Cons**: Less comprehensive, fewer advanced features\n\n```python\n# Simple approach\ndef test_addition():\n    assert add(2, 3) == 5\n\ndef test_division():\n    assert divide(10, 2) == 5\n```\n\n### Alternative 2: Class-based Testing (unittest style)\n**Pros**: Familiar to developers from other frameworks\n**Cons**: More verbose, less pythonic\n\n```python\nclass CalculatorTest(unittest.TestCase):\n    def setUp(self):\n        self.calc = Calculator()\n    \n    def test_addition(self):\n        self.assertEqual(self.calc.add(2, 3), 5)\n```\n\n### Alternative 3: Property-based Testing (Hypothesis)\n**Pros**: Tests more edge cases automatically\n**Cons**: Requires additional dependency, more complex\n\n```python\nfrom hypothesis import given, strategies as st\n\n@given(st.integers(), st.integers())\ndef test_addition_commutative(a, b):\n    assert add(a, b) == add(b, a)\n```\n\n## Next Steps\n\n1. **Explore Plugins**: pytest-django, pytest-asyncio, pytest-benchmark\n2. **CI/CD Integration**: Set up automated testing in GitHub Actions\n3. 
**Property-based Testing**: Add Hypothesis for more thorough testing\n4. **Performance Testing**: Use pytest-benchmark for performance regression testing\n5. **Test Coverage Goals**: Aim for >90% coverage with meaningful tests\n\n## Assumptions Made\n\n- Python 3.8+ environment\n- Familiarity with basic Python concepts\n- Interest in comprehensive testing practices\n- VS Code as development environment\n- Modern project structure preferences\n\nThe implementation demonstrates pytest's power while maintaining readability and educational value. Each test serves as both validation and documentation of expected behavior.",
  "session_id": "e7953783-9425-43a2-b538-e6a41a81898c",
  "response_metadata": {
    "model": "claude-3-5-sonnet",
    "tokens_used": 8500,
    "response_time_ms": 15000,
    "features_demonstrated": ["pytest_basics", "fixtures", "parametrization", "mocking", "coverage", "markers", "configuration"]
  },
  "response_length": 6579
}

Analysis of This Example

This log entry shows:

  • Request: User asked for a pytest getting started guide
  • Response: Comprehensive 6,579-character guide covering pytest basics to advanced features
  • Metadata:
    • Model: Claude 3.5 Sonnet
    • Tokens used: 8,500
    • Response time: 15 seconds
    • Features demonstrated: Multiple pytest concepts
  • Session Linking: Both entries share the same session ID for easy pairing

🔧 API Reference

Tools

generate

Enhances user queries with coding standards and provides instructions for response logging.

Input:

{
  "query": "string - The user's coding question"
}

Output:

{
  "augmented_prompt": "string - Enhanced prompt with standards",
  "suggested_copilot_message": "string - Complete message for AI client",
  "meta": {
    "version": "string",
    "server": "string", 
    "timestamp": "number",
    "session_id": "string - UUID for linking responses"
  }
}

log_response

Logs AI responses to create request-response pairs.

Input:

{
  "original_query": "string - Original user query",
  "response": "string - AI generated response",
  "session_id": "string - UUID from generate tool",
  "response_metadata": {
    "model": "string - AI model used",
    "tokens_used": "number - Token count",
    "response_time_ms": "number - Response time"
  }
}

Output:

{
  "success": "boolean - Whether logging succeeded",
  "message": "string - Status message",
  "meta": {
    "version": "string",
    "server": "string",
    "timestamp": "number",
    "log_entry_id": "string - Timestamp of log entry"
  }
}
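
For reference, a hedged sketch of exercising both tools from Python with the FastMCP client. The exact result objects returned by call_tool vary across FastMCP versions, so the results are simply printed; the session_id placeholder must be replaced with the value returned by generate.

import asyncio

from fastmcp import Client

QUERY = "what is pytest. give me a getting started guide."

async def main():
    # A path ending in .py makes the client spawn the server over stdio.
    async with Client("server.py") as client:
        generated = await client.call_tool("generate", {"query": QUERY})
        print(generated)  # contains the augmented prompt and the session_id

        ack = await client.call_tool("log_response", {
            "original_query": QUERY,
            "response": "...AI generated response...",
            "session_id": "<session_id returned by generate>",
            "response_metadata": {"model": "claude-3-5-sonnet", "tokens_used": 8500, "response_time_ms": 15000},
        })
        print(ack)

asyncio.run(main())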

🛠 Development

Project Structure

codeMCP/
├── server.py              # Main MCP server
├── venv/                  # Virtual environment
├── .codedelite/           # Log directory
│   └── codedelite.log.jsonl
└── README.md              # This file

Key Components

  1. FastMCP Server: Handles MCP protocol communication
  2. Pydantic Models: Type-safe data validation
  3. Logging System: Structured JSON logging
  4. Session Management: UUID-based request-response linking
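
As an illustration of the Pydantic piece, a model for the log_response payload might look like the sketch below; the field names mirror the API Reference, but the actual model definitions in server.py may differ.

from pydantic import BaseModel

class LogResponseInput(BaseModel):
    # Illustrative input model for the log_response tool.
    original_query: str
    response: str
    session_id: str
    response_metadata: dict | None = None

# Validation raises a clear error if a required field is missing or mistyped.
payload = LogResponseInput(
    original_query="what is pytest. give me a getting started guide.",
    response="...AI generated response...",
    session_id="e7953783-9425-43a2-b538-e6a41a81898c",
)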

Customization

Modifying Coding Standards

Edit the HOUSE_BLOCK constant in server.py:

HOUSE_BLOCK = """\
Please adhere to all the following standards and behaviors:

1) Code Quality & Linting
   - Follow idiomatic patterns and established style guides
   - Include docstrings/comments for complex logic
   - Keep functions cohesive; avoid excessive side effects

2) Unit Tests
   - Provide unit tests for all non-trivial functions
   - Use realistic fixtures/mocks; cover edge cases
   - Show clear instructions for running tests

# Add your custom standards here...
"""

Adding Response Metadata

The response_metadata field can include any custom data:

{
  "model": "gpt-4",
  "tokens_used": 1500,
  "response_time_ms": 2500,
  "custom_field": "custom_value",
  "user_feedback": "helpful",
  "code_quality_score": 9.5
}

Troubleshooting

Common Issues

  1. Import Errors: Ensure the virtual environment is activated
  2. Permission Errors: Check permissions on the log file path
  3. MCP Connection: Verify the paths in your VS Code MCP configuration

Debug Mode

Enable debug logging by adding the following to server.py:

import logging
logging.basicConfig(level=logging.DEBUG)

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

📈 Benefits

For Developers

  • Consistent Code Quality: Automatic application of coding standards
  • Learning Tool: See how standards improve AI responses
  • Training Data: Collect high-quality conversation pairs

For Teams

  • Standardization: Ensure all AI-assisted code follows team standards
  • Analytics: Track AI response quality and effectiveness
  • Improvement: Use logged data to refine prompts and standards

For Organizations

  • Compliance: Ensure AI-generated code meets organizational standards
  • Audit Trail: Complete record of AI interactions
  • Optimization: Data-driven improvement of AI coding assistance

🔮 Future Enhancements

  • Response Quality Scoring: Automatic quality assessment
  • Custom Standards per Project: Project-specific coding standards
  • Analytics Dashboard: Web interface for log analysis
  • Integration APIs: Connect with other development tools
  • Machine Learning: Use logged data to improve standards

📄 License

This project is open source. Feel free to modify and distribute according to your needs.

🤝 Support

For issues, questions, or contributions, please open an issue in the project repository.


Happy Coding with Enhanced AI Assistance! 🚀