MCP Test Rig v1
A comprehensive Model Context Protocol (MCP) test rig with server/client capabilities, AI model switching, and educational examples. This project demonstrates how to build MCP servers and clients using the latest SDK, integrate AI models, and create tools that can be discovered and used by MCP clients.
📋 Table of Contents
- 🚀 Features
- 📚 What is MCP?
- 🛠️ Prerequisites
- 🚀 Quick Start
- 🔧 Available Commands
- 🛠️ Available Tools
- 📚 Educational Resources
- 🧪 Testing
- 🏗️ Project Structure
- 🔍 How It Works
- 🎯 Learning Objectives
- 🚧 Development
- 🛠️ Extending the Test Rig
- 🔒 Security Considerations
- 🚨 Troubleshooting Common Issues
- 🤝 Contributing
- 📄 License
- 📁 Project Files
- 🙏 Acknowledgments
- 📞 Support
🚀 Features
- MCP Server: Full-featured server with tools, resources, and prompts
- AI Integration: Support for OpenAI, Google Gemini, and Anthropic Claude
- Interactive Client: Command-line interface for exploring MCP capabilities
- Educational Tools: Text summarization and weather information tools
- Comprehensive Logging: Detailed logging for learning and debugging
- Type Safety: Full TypeScript implementation with strict typing
- Testing: Jest-based test suite following TDD principles
📚 What is MCP?
The Model Context Protocol (MCP) is a standard for connecting AI models with external data sources and tools. It enables:
- Tool Integration: AI models can use external tools and APIs
- Resource Access: Models can retrieve and work with various data sources
- Prompt Management: Structured prompts for consistent AI interactions
- Standardization: Common interface for different AI applications
Key Components
- Tools: Executable functions that AI models can call
- Resources: Data sources that models can access
- Prompts: Templates for AI interactions
- Transport: Communication layer between clients and servers
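To make these components concrete, here is a simplified TypeScript sketch of the shapes involved. These are teaching types, not the official MCP SDK definitions (which are richer); the transport layer itself carries JSON-RPC messages and is omitted.

```typescript
// Illustrative (non-SDK) shapes for the four MCP components.
interface Tool {
  name: string;
  description: string;
  execute(input: Record<string, unknown>): Promise<{ content: string }>;
}

interface Resource {
  uri: string;      // identifier the client uses to fetch the data
  mimeType: string; // e.g. "text/markdown"
  content: string;
}

interface Prompt {
  name: string;
  template: string;   // may contain placeholders such as {{task}}
  arguments: string[];
}

// A trivial tool conforming to the Tool shape above
const echoTool: Tool = {
  name: 'echo',
  description: 'Returns its input unchanged',
  execute: async (input) => ({ content: String(input['text'] ?? '') }),
};
```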
🛠️ Prerequisites
- Node.js 18+
- npm or yarn
- API keys for at least one AI service (OpenAI, Gemini, or Claude)
- Weather API key (optional)
🚀 Quick Start
1. Clone and Install
```bash
git clone <repository-url>
cd mcp-test-rig-v1
npm install
```
2. Configure Environment
Copy the example environment file and add your API keys:
```bash
cp env.example .env
```
Edit .env with your API keys:
```bash
# AI Model API Keys - Set at least one to enable AI capabilities
OPENAI_API_KEY=your_openai_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
CLAUDE_API_KEY=your_claude_api_key_here

# Weather API Key (for weather tool) - Get from https://openweathermap.org/api
WEATHER_API_KEY=your_weather_api_key_here

# Optional: Set default model preference
DEFAULT_MODEL=openai
```
3. Build the Project
```bash
npm run build
```
4. Start the Server
```bash
npm run server:start
```
5. Start the Client (in another terminal)
```bash
npm run client:start
```
🔧 Available Commands
Client Commands
- `help` - Show help information
- `tools` - List available tools
- `resources` - List available resources
- `prompts` - List available prompts
- `ai-status` - Show AI provider status
- `ai-switch <provider>` - Switch AI provider
- `execute <tool>` - Execute a tool
- `status` - Show client status
- `clear` - Clear the screen
- `exit` - Exit the client
AI Model Management
The client automatically detects available AI providers from your environment variables:
- OpenAI: Set `OPENAI_API_KEY` for GPT models
- Google Gemini: Set `GEMINI_API_KEY` for Gemini models
- Anthropic Claude: Set `CLAUDE_API_KEY` for Claude models
If only one provider is configured, it's automatically selected. If multiple are available, use `ai-switch <provider>` to choose.
🛠️ Available Tools
1. Text Summarizer (`summarize`)
AI-powered text summarization with configurable styles:
```typescript
// Example usage
await client.executeTool('summarize', {
  text: 'Long text to summarize...',
  maxLength: 200,
  style: 'concise', // 'concise', 'detailed', or 'bullet_points'
  language: 'English',
});
```
Features:
- Multiple summary styles (concise, detailed, bullet points)
- Configurable length limits
- Multi-language support
- Comprehensive logging
2. Weather Tool (`getWeather`)
Get current weather information for any location:
```typescript
// Example usage
await client.executeTool('getWeather', {
  location: 'New York, NY',
  units: 'metric', // 'metric' or 'imperial'
  includeForecast: true,
});
```
Features:
- Current weather conditions
- Optional 3-day forecast
- Metric/imperial units
- Location caching for performance
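Location caching as described above usually means keeping recent lookups in memory with a time-to-live. The sketch below is illustrative only (the `LocationCache` name and structure are assumptions, not the rig's actual code), but it shows the core idea: normalize the key, store an expiry timestamp, and evict stale entries on read.

```typescript
// Sketch of a simple TTL cache such as a weather tool might use to avoid
// repeated lookups for the same location. Illustrative, not the rig's code.
// The clock is injectable so expiry behavior can be tested deterministically.
class LocationCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const k = key.toLowerCase();
    const entry = this.store.get(k);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      // Entry is stale: evict it and report a miss
      this.store.delete(k);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key.toLowerCase(), { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

Keying on the lowercased location string means `"New York, NY"` and `"new york, ny"` share one cache entry.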
📚 Educational Resources
The server provides educational resources about MCP development:
- MCP Overview: Introduction to the Model Context Protocol
- Tool Development Guide: How to create MCP tools
- AI Integration Guide: Integrating AI models with MCP
Understanding MCP Resources
Resources are data elements that MCP servers expose to clients, providing context for language model interactions. They can include:
- Documentation: Guides, tutorials, and reference materials
- Configuration Data: Settings, parameters, and system information
- Reference Materials: Code examples, best practices, and standards
- Knowledge Bases: Structured information for AI models to reference
File Format Flexibility: Resources can be stored in any format that contains natural language content:
- `.md` (Markdown) - Human-readable with formatting
- `.json` - Structured data with metadata
- `.txt` - Plain text content
- `.html` - Rich formatted content
- Any other format containing natural language
Since these are consumed by NLP models, the format is flexible as long as the content is accessible and processable by the server.
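A format-agnostic resource loader can be very small: read every file in a directory and record its name, extension, and content. The sketch below is an assumption about how such loading might look (the `loadResources` helper is hypothetical, not the rig's actual implementation):

```typescript
// Hypothetical sketch of extension-agnostic resource loading.
// Each file in the directory becomes one resource entry.
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';

interface LoadedResource {
  name: string;    // filename without extension
  format: string;  // extension, e.g. ".md"
  content: string;
}

function loadResources(dir: string): LoadedResource[] {
  return fs
    .readdirSync(dir)
    .filter((f) => fs.statSync(path.join(dir, f)).isFile())
    .map((f) => ({
      name: path.parse(f).name,
      format: path.extname(f) || '.txt',
      content: fs.readFileSync(path.join(dir, f), 'utf8'),
    }));
}
```

Because the loader only cares that a file exists and is readable, dropping a new `.md`, `.json`, or `.txt` file into the directory is enough to expose it.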
Understanding MCP Prompts
Prompts are predefined templates that guide interactions with language models. They provide:
- Structured Instructions: Consistent frameworks for AI interactions
- Dynamic Arguments: Parameterized templates for different use cases
- Context Integration: Ability to include resource content and tool outputs
- Workflow Facilitation: Step-by-step guidance for complex tasks
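The "dynamic arguments" point above typically boils down to placeholder substitution. This is a minimal sketch of `{{name}}`-style interpolation, assumed for illustration (the rig's actual prompt handling may differ):

```typescript
// Minimal placeholder substitution for {{name}}-style prompt templates.
// Unknown placeholders are left intact so missing arguments are visible.
function renderPrompt(template: string, args: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in args ? args[key] : match
  );
}
```

For example, `renderPrompt('Summarize {{text}}.', { text: 'the report' })` fills the `text` slot while leaving any unsupplied placeholder untouched.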
File Format Flexibility: Like resources, prompts can be stored in any natural language format:
- `.md` (Markdown) - Structured templates with formatting
- `.json` - Parameterized prompt structures
- `.txt` - Simple text templates
- `.yaml` - Configuration-driven prompts
- Any format containing natural language instructions
Key Benefits:
- Consistency: Ensures uniform AI interactions across applications
- Reusability: Templates can be shared and modified
- Maintainability: Easy to update without code changes
- Educational: Provides examples for developers learning MCP
Current Resources and Prompts
The server automatically loads resources and prompts from the `src/resources/` and `src/prompts/` directories:
Available Resources:
- `mcp-overview.md` - Comprehensive MCP introduction
- `weather-api-guide.md` - Weather API integration guide
- `ai-model-comparison.md` - AI model selection guide
Available Prompts:
- `text-summarization.md` - Text summarization template
- `weather-analysis.md` - Weather data interpretation template
- `ai-model-selection.md` - AI model selection guidance template
All content is dynamically loaded and exposed through MCP tools, making it easy to extend and customize without modifying server code.
🧪 Testing
Run the test suite:
```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run linting
npm run lint

# Fix linting issues
npm run lint:fix
```
Test Coverage
The project includes comprehensive tests for:
- AI Model Manager functionality
- Tool execution and validation
- MCP server/client communication
- Error handling and edge cases
🏗️ Project Structure
```
mcp-test-rig-v1/
├── src/
│   ├── ai/                  # AI model management
│   │   └── AIModelManager.ts
│   ├── client/              # MCP client implementation
│   │   └── MCPClient.ts
│   ├── server/              # MCP server implementation
│   │   └── MCPServer.ts
│   ├── tools/               # MCP tools
│   │   ├── TextSummarizer.ts
│   │   └── WeatherTool.ts
│   ├── types/               # TypeScript type definitions
│   │   └── index.ts
│   ├── client.ts            # Main client entry point
│   └── server.ts            # Main server entry point
├── __tests__/               # Test files
├── dist/                    # Compiled JavaScript
├── package.json             # Project dependencies
├── tsconfig.json            # TypeScript configuration
├── jest.config.js           # Jest testing configuration
└── .eslintrc.js             # ESLint configuration
```
📁 Path Aliases
This project uses TypeScript path aliases for clean, maintainable imports:
- `@ai/*` → `src/ai/*` (AI components)
- `@client/*` → `src/client/*` (Client components)
- `@server/*` → `src/server/*` (Server components)
- `@tools/*` → `src/tools/*` (MCP tools)
- `@types` → `src/types` (Type definitions)
- `@prompts/*` → `src/prompts/*` (Prompt templates)
- `@resources/*` → `src/resources/*` (Educational resources)
Example: instead of `../../src/ai/AIModelManager`, use `@ai/AIModelManager`.
See the path alias usage guide in the development documentation for complete details.
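Aliases like these are conventionally declared in `tsconfig.json` via `compilerOptions.paths`. The excerpt below is a sketch of how this project's table would map onto that setting (assuming `baseUrl: "."`); at runtime a loader such as tsx or `tsconfig-paths` is needed to resolve them:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@ai/*": ["src/ai/*"],
      "@client/*": ["src/client/*"],
      "@server/*": ["src/server/*"],
      "@tools/*": ["src/tools/*"],
      "@types": ["src/types"],
      "@prompts/*": ["src/prompts/*"],
      "@resources/*": ["src/resources/*"]
    }
  }
}
```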
🔍 How It Works
1. Server Initialization
The MCP server starts up and:
- Initializes the AI model manager
- Registers available tools
- Sets up educational resources and prompts
- Establishes stdio transport for client communication
2. Client Connection
The client connects to the server and:
- Discovers available capabilities
- Provides interactive command interface
- Manages AI model switching
- Executes tools and displays results
3. Tool Execution
When a tool is executed:
- Client sends tool call to server
- Server validates input parameters
- Tool executes with AI integration if needed
- Results are formatted and returned to client
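The validate-execute-format pipeline above can be sketched in a few lines. This example uses a hand-rolled type check instead of the Zod schemas the rig actually uses, purely to stay dependency-free; the `dispatch` and `validateInput` names are illustrative:

```typescript
// Sketch of the server-side tool execution path:
// 1. validate input parameters, 2. execute, 3. format the result.
type Schema = Record<string, 'string' | 'number' | 'boolean'>;

interface ToolResult {
  content: string;
  isError: boolean;
}

function validateInput(schema: Schema, input: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, expected] of Object.entries(schema)) {
    if (typeof input[key] !== expected) {
      errors.push(`${key}: expected ${expected}, got ${typeof input[key]}`);
    }
  }
  return errors;
}

function dispatch(
  schema: Schema,
  input: Record<string, unknown>,
  run: (input: Record<string, unknown>) => string,
): ToolResult {
  const errors = validateInput(schema, input);
  if (errors.length > 0) {
    // Invalid calls never reach the tool body
    return { content: `Invalid input: ${errors.join('; ')}`, isError: true };
  }
  return { content: run(input), isError: false };
}
```

Keeping validation in front of execution means tool authors can assume well-typed input, which is the same guarantee the Zod schemas provide in the real server.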
4. AI Model Integration
The AI model manager:
- Automatically detects available providers
- Provides unified interface for different AI services
- Handles provider switching
- Manages API calls and error handling
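Provider auto-detection is essentially a filter over environment variables. The sketch below shows one way that logic might look; the `detectProviders` and `pickDefault` helpers are assumptions for illustration, not the rig's actual `AIModelManager` internals:

```typescript
// Sketch of provider auto-detection from environment variables.
type Provider = 'openai' | 'gemini' | 'claude';

const ENV_KEYS: Record<Provider, string> = {
  openai: 'OPENAI_API_KEY',
  gemini: 'GEMINI_API_KEY',
  claude: 'CLAUDE_API_KEY',
};

function detectProviders(env: Record<string, string | undefined>): Provider[] {
  return (Object.keys(ENV_KEYS) as Provider[]).filter((p) => Boolean(env[ENV_KEYS[p]]));
}

// A single configured provider is selected automatically; with several,
// DEFAULT_MODEL wins if it names an available provider.
function pickDefault(env: Record<string, string | undefined>): Provider | undefined {
  const available = detectProviders(env);
  if (available.length === 1) return available[0];
  const preferred = env['DEFAULT_MODEL'] as Provider | undefined;
  return preferred && available.includes(preferred) ? preferred : available[0];
}
```

Taking the environment as a parameter (rather than reading `process.env` directly) keeps the selection logic trivially testable.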
🎯 Learning Objectives
This project demonstrates:
- MCP Protocol Implementation: How to build MCP servers and clients
- Tool Development: Creating and registering MCP tools
- AI Integration: Managing multiple AI providers
- Error Handling: Comprehensive error handling and logging
- Type Safety: TypeScript best practices for MCP development
- Testing: TDD approach with comprehensive test coverage
🚧 Development
Adding New Tools
1. Create a new tool class implementing the `MCPTool` interface
2. Add the tool to the server's tool initialization
3. Write tests for the new tool
4. Update documentation
Adding New AI Providers
1. Extend the `AIModelManager` class
2. Add provider-specific API calls
3. Update environment variable handling
4. Add tests for the new provider
Building and Running
```bash
# Development mode
npm run server:dev    # Start server with tsx
npm run client:dev    # Start client with tsx

# Production mode
npm run build         # Compile TypeScript
npm run server:start  # Start compiled server
npm run client:start  # Start compiled client

# Both server and client
npm run dev
```
🛠️ Extending the Test Rig
This MCP test rig is designed to be easily extensible. Here's a quick guide to add your own tools and capabilities:
Adding New Tools
1. Create Tool Class: Create a new file in `src/tools/` implementing the `MCPTool` interface:

```typescript
import { MCPTool, ToolResult } from '@types';
import { z } from 'zod';

export class MyCustomTool implements MCPTool {
  name = 'myCustomTool';
  description = 'Description of what your tool does';

  // Define input schema using Zod
  inputSchema = z.object({
    parameter1: z.string().describe('First parameter description'),
    parameter2: z.number().optional().describe('Optional second parameter'),
  });

  async execute(input: z.infer<typeof this.inputSchema>): Promise<ToolResult> {
    // Your tool logic here
    const result = await this.performCustomOperation(input);
    return {
      content: `Tool executed successfully: ${result}`,
      isError: false,
    };
  }

  private async performCustomOperation(input: { parameter1: string }): Promise<string> {
    // Implement your custom logic
    return `Processed: ${input.parameter1}`;
  }
}
```
2. Register Tool: Add your tool to the server in `src/server/MCPServer.ts`:

```typescript
// In the initializeTools() method
this.tools.set('myCustomTool', new MyCustomTool());
```
3. Add Tests: Create tests in `__tests__/tools/MyCustomTool.test.ts`:

```typescript
import { MyCustomTool } from '@tools/MyCustomTool';

describe('MyCustomTool', () => {
  let tool: MyCustomTool;

  beforeEach(() => {
    tool = new MyCustomTool();
  });

  test('should execute successfully', async () => {
    const result = await tool.execute({ parameter1: 'test' });
    expect(result.isError).toBe(false);
    expect(result.content).toContain('Processed: test');
  });
});
```
Adding New AI Providers
1. Extend AIModelManager: Add your provider to `src/ai/AIModelManager.ts`:

```typescript
// Add to the AIModelProvider union type
export type AIModelProvider = 'openai' | 'gemini' | 'claude' | 'myCustomProvider';

// Add provider configuration
private async initializeMyCustomProvider(): Promise<void> {
  const apiKey = process.env.MY_CUSTOM_API_KEY;
  if (apiKey) {
    this.addProvider({
      provider: 'myCustomProvider',
      apiKey,
      model: 'my-custom-model',
    });
    this.logger.info('✅ My Custom Provider configured');
  }
}
```
2. Implement Provider Logic: Add the actual API calls in the `generateText` method.
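Inside such a `generateText` method, the provider-specific part is mostly the endpoint and auth wiring. The sketch below shows one way to factor that out; it is an illustration only, and while the endpoints and headers reflect each provider's public HTTP API at the time of writing, you should verify them against current provider documentation:

```typescript
// Sketch of per-provider request wiring for a generateText-style method.
type Provider = 'openai' | 'gemini' | 'claude';

interface RequestConfig {
  url: string;
  headers: Record<string, string>;
  model: string;
}

function requestConfig(provider: Provider, apiKey: string, model: string): RequestConfig {
  switch (provider) {
    case 'openai':
      return {
        url: 'https://api.openai.com/v1/chat/completions',
        headers: { Authorization: `Bearer ${apiKey}` },
        model,
      };
    case 'claude':
      return {
        url: 'https://api.anthropic.com/v1/messages',
        headers: { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01' },
        model,
      };
    case 'gemini':
      // Gemini passes the key as a query parameter rather than a header
      return {
        url: `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
        headers: {},
        model,
      };
  }
}
```

Centralizing this per-provider wiring keeps the rest of `generateText` (request body, retries, response parsing) provider-agnostic.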
Adding New Resources
1. Create Resource File: Add a new `.md` file to `src/resources/`:

```markdown
# My Custom Resource

## About This Resource

**File Format**: This resource is stored as a Markdown (`.md`) file.

**Purpose**: Description of what this resource provides.

**Usage**: How AI models can use this resource.

## Content

Your educational content here...
```

2. The server automatically loads it - no code changes needed!
Adding New Prompts
1. Create Prompt File: Add a new `.md` file to `src/prompts/`:

```markdown
# My Custom Prompt Template

## Purpose

Description of what this prompt template does.

## Template

You are a helpful assistant. Please help with: {{task}}

Context: {{context}}
Requirements: {{requirements}}

## Arguments

- `task`: The main task to accomplish
- `context`: Background information
- `requirements`: Specific requirements
```

2. The server automatically loads it - no code changes needed!
Best Practices
- Follow Existing Patterns: Use the same structure as existing tools
- Add Comprehensive Tests: Ensure your new functionality is well-tested
- Update Documentation: Add examples to the README
- Use TypeScript: Leverage the type system for better code quality
- Handle Errors Gracefully: Implement proper error handling and logging
- Use Path Aliases: Import using `@tools/`, `@types`, etc.
Example: Calculator Tool
Here's a complete example of adding a simple calculator tool:
```typescript
// src/tools/CalculatorTool.ts
import { MCPTool, ToolResult } from '@types';
import { z } from 'zod';

export class CalculatorTool implements MCPTool {
  name = 'calculate';
  description = 'Perform basic mathematical calculations';

  inputSchema = z.object({
    expression: z.string().describe('Mathematical expression to evaluate'),
    precision: z.number().min(0).max(10).optional().default(2),
  });

  async execute(input: z.infer<typeof this.inputSchema>): Promise<ToolResult> {
    try {
      const result = this.evaluateExpression(input.expression);
      const roundedResult = Number(result.toFixed(input.precision));
      return {
        content: `${input.expression} = ${roundedResult}`,
        isError: false,
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: `Error evaluating expression: ${message}`,
        isError: true,
      };
    }
  }

  private evaluateExpression(expression: string): number {
    // eval is used for simplicity in this example only; it executes arbitrary
    // code, so use a safe parser such as mathjs in production
    return eval(expression);
  }
}
```
This extensible architecture makes it easy to experiment with MCP concepts and build your own AI-powered tools!
🔒 Security Considerations
- API keys are loaded from environment variables
- Input validation using Zod schemas
- Error messages don't expose sensitive information
- Rate limiting for external API calls
- Secure transport using stdio
🚨 Troubleshooting Common Issues
Rate Limiting (HTTP 429 "Too Many Requests")
If you encounter `AxiosError: Request failed with status code 429`, this means you've hit OpenAI's rate limits. The MCP Test Rig includes automatic handling:
What Happens Automatically:
- ✅ Retry Logic: Automatically retries up to 3 times with exponential backoff
- ✅ Rate Limiting: Enforces minimum 1-second intervals between requests
- ✅ Smart Delays: Respects OpenAI's `Retry-After` headers when provided
- ✅ Graceful Degradation: Provides clear error messages after retries fail
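The retry behavior described above follows a standard pattern: double the delay on each failed attempt, but let a server-supplied `Retry-After` hint override the computed backoff. This sketch is illustrative (the helper names are assumptions, not the rig's actual code):

```typescript
// Sketch of exponential backoff with an optional Retry-After override.
function backoffDelay(attempt: number, baseMs = 1000, retryAfterMs?: number): number {
  // Honor the server's hint when present; otherwise 1s, 2s, 4s, 8s, ...
  return retryAfterMs ?? baseMs * 2 ** attempt;
}

async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3, baseMs = 1000): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error; // retries exhausted: surface the error
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt, baseMs)));
    }
  }
}
```

With `maxRetries = 3` and a 1-second base, a request that keeps failing waits 1s, 2s, then 4s before the final error is reported.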
Manual Solutions:
- Wait and Retry: Wait a few minutes before trying again
- Check Your Quota: Verify your OpenAI account usage and limits
- Reduce Request Frequency: Don't send multiple requests simultaneously
- Upgrade Plan: Consider upgrading to a higher tier for increased limits
Environment Configuration:
```bash
# Adjust rate limiting (in milliseconds)
MIN_REQUEST_INTERVAL=2000  # 2 seconds between requests

# OpenAI specific settings
OPENAI_MODEL=gpt-5-mini
OPENAI_MAX_TOKENS=1000
OPENAI_TEMPERATURE=0.7
```
Authentication Errors (HTTP 401)
- Verify your API key is correct and active
- Check that your OpenAI account has sufficient credits
- Ensure the API key has the necessary permissions
Server Errors (HTTP 5xx)
- These are automatically retried with exponential backoff
- If persistent, check OpenAI's status page for service issues
🤝 Contributing
We welcome contributions from the community! This project is open source and follows best practices for collaborative development.
Quick Start
- Fork the repository
- Create a feature branch (
git checkout -b feature/amazing-feature) - Make your changes
- Add tests for new functionality
- Ensure all tests pass (
npm test) - Submit a pull request
Contribution Guidelines
- Code Quality: Follow TypeScript best practices and existing code style
- Testing: Write tests for new functionality and ensure all tests pass
- Documentation: Update README and add inline comments where helpful
- Commits: Use conventional commit messages (e.g., `feat: add new tool`)
- Pull Requests: Provide clear descriptions and reference related issues
Development Setup
```bash
# Fork and clone
git clone https://github.com/your-username/mcp-test-rig-v1.git
cd mcp-test-rig-v1

# Install dependencies
npm install

# Run tests
npm test

# Run linting
npm run lint

# Build project
npm run build
```
For detailed contribution guidelines, see `CONTRIBUTING.md`.
Community Standards
This project follows a Code of Conduct. By participating, you agree to abide by its terms.
📄 License
This project is licensed under the MIT License - see the `LICENSE` file for details.
The MIT License is a permissive open source license that allows others to:
- Use the code commercially
- Modify the code
- Distribute the code
- Use it privately
- Sublicense it
The only requirement is that the original license and copyright notice are included in any copy of the software/source.
📁 Project Files
Core Documentation
- `README.md` - Project overview and setup guide
- `LICENSE` - MIT License terms
- `CHANGELOG.md` - Version history and release notes
- `CONTRIBUTING.md` - Contribution guidelines
- `CODE_OF_CONDUCT.md` - Community standards
- `SECURITY.md` - Security policy and vulnerability reporting
Configuration Files
- `package.json` - Dependencies and scripts
- `tsconfig.json` - TypeScript configuration
- `jest.config.js` - Testing configuration
- `.eslintrc.js` - Code quality rules
- `env.example` - Environment variables template
Development Documentation
- TypeScript path alias usage guide
🙏 Acknowledgments
- Model Context Protocol for the MCP specification
- MCP SDK for the official implementation
- The MCP community for feedback and contributions
📞 Support
For questions or issues:
- Check the MCP documentation
- Review the test files for usage examples
- Check the comprehensive logging output
- Open an issue in the repository
Happy MCP Development! 🚀