MCP Snippets Server
An MCP (Model Context Protocol) server that provides semantic search capabilities for code snippets using RAG (Retrieval-Augmented Generation) and vector embeddings.
Overview
This server processes Markdown files containing code snippets, creates vector embeddings for semantic search, and provides an MCP tool for finding relevant code snippets based on topic queries.
Features
- Vector Store: Creates and manages a persistent vector store from Markdown documentation
- Semantic Search: Uses OpenAI-compatible embeddings to find relevant snippets
- MCP Integration: Exposes search functionality as an MCP tool
- Automatic Processing: Processes `.md` files on first run and stores embeddings
- Persistent Storage: Saves the vector store to JSON for quick subsequent startups
Architecture
The server consists of several key components:
- Vector Store (`rag/`): Handles embedding storage and similarity search (see the sketch after this list)
- Helpers (`helpers/`): File processing utilities
- Snippets (`snippets/`): Contains code snippet documentation in Markdown format
- MCP Server: Exposes the `search_snippet` tool via HTTP
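To make the similarity-search component concrete, here is a minimal, self-contained Go sketch of threshold-based cosine-similarity ranking over stored embeddings. The `Record` type and function names are illustrative, not the repository's actual types; the `limit` and `maxResults` parameters mirror the `LIMIT` and `MAX_RESULTS` settings described under Configuration.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Record is an illustrative stand-in for a persisted snippet embedding.
type Record struct {
	ID        string
	Text      string
	Embedding []float64
}

// cosine returns the cosine similarity between two vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search returns up to maxResults records whose similarity to the query
// embedding is at or above limit, best matches first.
func search(store []Record, query []float64, limit float64, maxResults int) []Record {
	var hits []Record
	for _, r := range store {
		if cosine(r.Embedding, query) >= limit {
			hits = append(hits, r)
		}
	}
	sort.Slice(hits, func(i, j int) bool {
		return cosine(hits[i].Embedding, query) > cosine(hits[j].Embedding, query)
	})
	if len(hits) > maxResults {
		hits = hits[:maxResults]
	}
	return hits
}

func main() {
	store := []Record{
		{ID: "1", Text: "REST API in Go", Embedding: []float64{1, 0}},
		{ID: "2", Text: "CLI flags in Go", Embedding: []float64{0, 1}},
	}
	for _, r := range search(store, []float64{0.9, 0.1}, 0.6, 2) {
		fmt.Println(r.ID, r.Text)
	}
}
```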
Configuration
Set the following environment variables (or use a `.env` file):

- `MODEL_RUNNER_BASE_URL`: OpenAI-compatible API endpoint (default: `http://localhost:12434/engines/llama.cpp/v1/`)
- `EMBEDDING_MODEL`: Embedding model name (default: `ai/mxbai-embed-large:latest`)
- `JSON_STORE_FILE_PATH`: Vector store file path (default: `rag-memory-store.json`)
- `MCP_HTTP_PORT`: HTTP server port (default: `9090`)
- `LIMIT`: Similarity threshold (default: `0.6`)
- `MAX_RESULTS`: Maximum search results (default: `2`)
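For example, a `.env` file that simply restates the defaults above would look like the following. The `DELIMITER` line is optional; it is the snippet separator the Docker Compose example later in this document also sets.

```env
MODEL_RUNNER_BASE_URL=http://localhost:12434/engines/llama.cpp/v1/
EMBEDDING_MODEL=ai/mxbai-embed-large:latest
JSON_STORE_FILE_PATH=rag-memory-store.json
MCP_HTTP_PORT=9090
LIMIT=0.6
MAX_RESULTS=2
DELIMITER=----------
```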
Usage
Starting the Server
```bash
go run main.go
```
The server will:
- Load the existing vector store or create a new one from the `.md` files (sketched below)
- Start the HTTP server on the configured port
- Expose the MCP endpoint at `/mcp`
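A minimal sketch of the load-or-create step, assuming the store is a JSON array of embedding records; the actual schema and function names in this repository may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Record is an illustrative stand-in for one persisted embedding entry;
// the real schema in rag-memory-store.json may differ.
type Record struct {
	ID        string    `json:"id"`
	Text      string    `json:"text"`
	Embedding []float64 `json:"embedding"`
}

// loadOrCreate sketches the startup behaviour: reuse the JSON store when
// it exists, otherwise return an empty store so the caller can process
// the .md files, embed them, and persist the result.
func loadOrCreate(path string) ([]Record, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil, nil // first run: embeddings will be created and saved
	}
	if err != nil {
		return nil, err
	}
	var records []Record
	if err := json.Unmarshal(data, &records); err != nil {
		return nil, fmt.Errorf("parsing %s: %w", path, err)
	}
	return records, nil
}

func main() {
	records, err := loadOrCreate("rag-memory-store.json")
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d records\n", len(records))
}
```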
MCP Tool
The server provides one MCP tool:
- `search_snippet`: Find code snippets related to a topic
  - Parameter: `topic` (string) - Search query or question
Example Tool Call
```json
{
  "method": "tools/call",
  "params": {
    "name": "search_snippet",
    "arguments": {
      "topic": "how to create a REST API in Go"
    }
  }
}
```
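The payload above shows only the tool-call body. If you want to poke the endpoint directly, the request must be wrapped in JSON-RPC 2.0 and posted to `/mcp`; a raw call could look like the sketch below. Note that MCP HTTP transports generally require an `initialize` handshake (and possibly a session header) first, so in practice you would use an MCP client rather than bare `curl`.

```bash
curl -X POST http://localhost:9090/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "search_snippet",
      "arguments": { "topic": "how to create a REST API in Go" }
    }
  }'
```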
Development
File Structure
- `main.go`: Main server implementation
- `rag/`: Vector store and similarity search logic
- `helpers/`: File processing utilities
- `snippets/`: Code snippet documentation
- `store/`: Persistent vector store data
Adding New Snippets
- Add Markdown files to the `snippets/` directory
- Use `----------` as a delimiter between different snippets (see the example below)
- Restart the server to reprocess and update embeddings
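For instance, a hypothetical `snippets/go-basics.md` could hold two snippets separated by the delimiter. Only the `----------` separator is documented; the headings and fenced blocks inside each snippet are an assumption about the snippet format.

````markdown
# How to print a line in Go

```go
fmt.Println("Hello, world")
```

----------

# How to read a file in Go

```go
data, err := os.ReadFile("notes.txt")
```
````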
Dependencies
- `github.com/mark3labs/mcp-go`: MCP server implementation
- `github.com/openai/openai-go/v2`: OpenAI API client for embeddings
- `github.com/joho/godotenv`: Environment variable management
- `github.com/google/uuid`: UUID generation
Docker Support
The project includes Docker configuration for easy deployment.
Use the Docker Image
Image: https://hub.docker.com/repository/docker/k33g/mcp-snippets-server/tags
- In a directory, create a `snippets` folder and add your `.md` files with code snippets.
- Create a `compose.yml` file with the following content:
```yaml
services:
  mcp-snippets-server:
    image: k33g/mcp-snippets-server:0.0.5
    ports:
      - 9090:6060
    environment:
      - MCP_HTTP_PORT=6060
      - LIMIT=0.6
      - MAX_RESULTS=2
      - JSON_STORE_FILE_PATH=store/rag-memory-store.json
      - DELIMITER=----------
    volumes:
      - ./snippets:/app/snippets
      - ./store:/app/store
    models:
      mxbai-embed:
        endpoint_var: MODEL_RUNNER_BASE_URL
        model_var: EMBEDDING_MODEL

models:
  mxbai-embed:
    model: ai/mxbai-embed-large:latest
```
Start the server with:
```bash
docker compose up
```
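Once the container is up, the MCP endpoint is reachable at `http://localhost:9090/mcp` (host port 9090 maps to the container's 6060), and the embeddings persist in `./store/rag-memory-store.json` across restarts. To register the server in an MCP client that supports HTTP transports, a configuration entry might look like the following; the exact keys vary by client, so treat this as a sketch.

```json
{
  "mcpServers": {
    "snippets": {
      "type": "http",
      "url": "http://localhost:9090/mcp"
    }
  }
}
```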