# Cerebras MCP Server

A high-performance Model Context Protocol (MCP) server that integrates the Cerebras Cloud SDK with intelligent knowledge base management and tool orchestration capabilities.
## Features

- **Cerebras Integration**: Seamless integration with the Cerebras Cloud SDK for advanced AI model interactions
- **Dynamic Knowledge Base**: Intelligent retrieval and management of structured knowledge repositories
- **Tool Orchestration**: Advanced tool calling and function execution with parallel processing
- **High Performance**: Built on FastMCP for optimal server performance and low latency
- **Server-Sent Events**: Real-time communication using the SSE transport protocol
- **Secure & Scalable**: Production-ready architecture with comprehensive error handling
- **Structured Data**: JSON-based knowledge management with robust parsing and validation
## Architecture

```mermaid
graph TD
    A[Client Application] -->|SSE Transport| B[MCP Server]
    B --> C[Knowledge Base Engine]
    B --> D[Cerebras Cloud SDK]
    C --> E[JSON Data Store]
    D --> F[AI Model Processing]
    F --> G[Tool Execution]
    G --> H[Response Generation]
```
## Installation

### Prerequisites

- Python 3.13 or higher
- Cerebras API key
- uv package manager (recommended)

### Quick Start
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/cerebras-mcp-server-github.git
   cd cerebras-mcp-server-github
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env and add your CEREBRAS_API_KEY
   ```

4. Run the server:

   ```bash
   python -m llm_client_server.server
   ```

5. Test the client:

   ```bash
   python -m llm_client_server.client
   ```
## Configuration

### Environment Variables

| Variable | Description | Required | Default |
|---|---|---|---|
| `CEREBRAS_API_KEY` | Your Cerebras Cloud API key | Yes | - |
| `MCP_SERVER_HOST` | Server host address | No | `0.0.0.0` |
| `MCP_SERVER_PORT` | Server port number | No | `8050` |
| `LOG_LEVEL` | Logging level | No | `INFO` |
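For reference, a minimal `.env` could look like the following (the key value is a placeholder):

```bash
# .env (example values only; replace the key with your own)
CEREBRAS_API_KEY=your-cerebras-api-key
MCP_SERVER_HOST=0.0.0.0
MCP_SERVER_PORT=8050
LOG_LEVEL=INFO
```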
### Knowledge Base Configuration

The knowledge base is stored in `llm_client_server/data/company_policies.json` and follows this structure:

```json
[
  {
    "question": "Your question here",
    "answer": "Detailed answer with context"
  }
]
```
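As a rough sketch of how such a file could be loaded and validated (the `load_knowledge_base` helper below is illustrative, not part of the package):

```python
import json
from pathlib import Path

KB_PATH = Path("llm_client_server/data/company_policies.json")

def load_knowledge_base(path: Path = KB_PATH) -> list[dict[str, str]]:
    """Load the knowledge base and check that every entry has the expected keys."""
    entries = json.loads(path.read_text(encoding="utf-8"))
    for entry in entries:
        if not {"question", "answer"} <= entry.keys():
            raise ValueError(f"Malformed knowledge base entry: {entry!r}")
    return entries
```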
## Usage

### Server Operations

Start the MCP server with a custom configuration:

```python
from llm_client_server.server import mcp

# Server runs on http://localhost:8050/sse by default
if __name__ == "__main__":
    mcp.run(transport="sse")
```
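To quickly check that the server is up, you can probe the SSE endpoint from a terminal (assuming the default host and port):

```bash
# -N disables buffering so the SSE stream is printed as it arrives; Ctrl+C to stop
curl -N http://localhost:8050/sse
```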
### Client Integration

Connect to and interact with the server programmatically:

```python
import asyncio

from llm_client_server.client import MCPCerebrasClient

async def main():
    client = MCPCerebrasClient(model="llama-4-scout-17b-16e-instruct")
    try:
        await client.connect_to_server()
        response = await client.process_query("What is the remote work policy?")
        print(f"AI Response: {response}")
    finally:
        await client.cleanup()

asyncio.run(main())
```
### Available Tools

The server exposes the following tools via MCP:

- `get_knowledge_base(query: str)`: Retrieve structured information from the knowledge base
- Additional tools can be added by decorating functions with `@mcp.tool()` (see the sketch below)
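As a minimal sketch, a new tool could be registered like this (the tool name and body are hypothetical; only the `@mcp.tool()` decorator and the `mcp` import come from the project):

```python
from llm_client_server.server import mcp

@mcp.tool()
def get_office_locations() -> list[str]:
    """Hypothetical tool: return a static list of office locations."""
    return ["Sunnyvale", "Toronto", "Bangalore"]
```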
## Development

### Setting up the Development Environment

```bash
# Clone and install in development mode
git clone https://github.com/your-username/cerebras-mcp-server-github.git
cd cerebras-mcp-server-github
uv sync --all-extras

# Install pre-commit hooks
pre-commit install

# Run tests
pytest

# Run with hot reload during development
uvicorn llm_client_server.server:app --reload --host 0.0.0.0 --port 8050
```
### Code Quality

This project maintains high code quality standards:

- **Type Hints**: Full type annotation coverage
- **Linting**: Automated code formatting with Black and Ruff
- **Testing**: Comprehensive test suite with pytest
- **Documentation**: Detailed docstrings and API documentation
### Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Performance

- **Latency**: Sub-100 ms response times for knowledge base queries
- **Throughput**: Handles 1000+ concurrent connections
- **Memory**: Optimized for a minimal memory footprint
- **Scalability**: Horizontal scaling support with load balancing
## Security

- Environment-based API key management (see the sketch below)
- Input validation and sanitization
- Secure JSON parsing with error boundaries
- Rate limiting and abuse prevention
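A minimal sketch of the environment-based key handling mentioned above (illustrative only; the actual server may load its configuration differently):

```python
import os

# Read the API key from the environment and fail fast if it is missing.
api_key = os.environ.get("CEREBRAS_API_KEY")
if not api_key:
    raise RuntimeError("CEREBRAS_API_KEY is not set; copy .env.example to .env and add your key")
```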
## License

This project is licensed under the MIT License; see the `LICENSE` file for details.
## Support

- Documentation:
- Issues: GitHub Issues
- Discussions: GitHub Discussions
## Acknowledgments

- Cerebras Systems for the powerful Cloud SDK
- FastMCP for the excellent MCP framework
- The open-source community for continuous inspiration

Built with ❤️ for the AI community

⭐ Star this repository • 🐛 Report Bug • 💡 Request Feature