🌿 Wyndle - Your Slack Conversation Assistant
"I must be helpful! It is very important that I be helpful!"
Transform your Slack data into an intelligent personal assistant using AI and high-performance DuckDB storage.
✨ Features
🧠 AI-Powered Intelligence
- Smart Summaries: "Nothing urgent from Emil! He wishes you a good holiday" instead of raw message dumps
- Priority Management: Automatically identifies what needs your attention with urgency levels
- Relationship Analysis: Understand conversation dynamics and outcomes across all contexts
⚡ High-Performance Data Pipeline
- DuckDB Storage: Columnar database for sub-millisecond query performance
- Continuous Sync: Background loader with intelligent rate limiting that respects Slack's 45 calls/minute budget (see the sketch after this list)
- Human-Readable: All data uses real names instead of cryptic IDs
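Staying under a per-minute budget like that only takes a sliding-window throttle. The sketch below is purely illustrative; the class name and defaults are assumptions, not Wyndle's actual loader code:

```python
import time
from collections import deque


class SlackRateLimiter:
    """Illustrative sliding-window throttle for a per-minute API budget."""

    def __init__(self, max_calls: int = 45, window_seconds: float = 60.0) -> None:
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self.call_times: deque[float] = deque()

    def wait_for_slot(self) -> None:
        """Block until another API call fits inside the window, then record it."""
        now = time.monotonic()
        # Forget calls that have aged out of the window.
        while self.call_times and now - self.call_times[0] >= self.window_seconds:
            self.call_times.popleft()
        if len(self.call_times) >= self.max_calls:
            # Sleep until the oldest recorded call leaves the window.
            time.sleep(self.window_seconds - (now - self.call_times[0]))
        self.call_times.append(time.monotonic())
```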
🔗 MCP Integration
- Model Context Protocol: Works seamlessly with Raycast, OpenAI, and other AI tools
- Natural Queries: Ask "What needs my attention?" and get actionable insights
- Configurable: Smart bot filtering and customizable response styles
🚀 Quick Start
Prerequisites
- Python 3.10+
- UV package manager (recommended) or pip
- Slack workspace with bot token
Installation
# Clone the repository
git clone https://github.com/yourusername/wyndle.git
cd wyndle
# Install dependencies
uv sync
# Configure your setup
cp config/config.example.yaml config/config.yaml
cp .env.example .env
# Edit your configuration files
# - Add your Slack API tokens to .env
# - Configure channels to monitor in config/config.yaml
⚠️ Important: Make sure to set your SLACK_USER_KEY and OPENAI_API_KEY in the .env file before running the data loader. The database schema will be created automatically on first run.
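If you want to sanity-check the environment before the first load, something like the following works. It assumes only the two variable names above plus the python-dotenv package (which may not be a project dependency, so treat it as a standalone check):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current working directory

missing = [key for key in ("SLACK_USER_KEY", "OPENAI_API_KEY") if not os.getenv(key)]
if missing:
    raise SystemExit(f"Missing required variables in .env: {', '.join(missing)}")
print("Environment looks good - ready to run the data loader.")
```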
Configuration
- Slack Bot Setup: Create a Slack app with these scopes:
  channels:history, groups:history, im:history, mpim:history, users:read, channels:read
- Environment Variables: Add to .env:
  SLACK_USER_KEY=xoxp-your-user-token-here
- Channel Selection: Edit config/config.yaml:
  slack:
    channels:
      - general
      - engineering
      - data-team
    ignored_bots:
      - slackbot
      - github
      - jira
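For reference, a config laid out like the example above can be read back with PyYAML; this is just a sketch that assumes the key names shown:

```python
import yaml  # PyYAML

with open("config/config.yaml") as fh:
    config = yaml.safe_load(fh)

channels = config["slack"]["channels"]               # ["general", "engineering", "data-team"]
ignored_bots = set(config["slack"]["ignored_bots"])  # {"slackbot", "github", "jira"}
print(f"Monitoring {len(channels)} channels, ignoring {len(ignored_bots)} bots")
```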
Usage
# One-time data load
uv run wyndle-pipeline --dataloader
# Start continuous background sync daemon
uv run wyndle-loader start --workers 5
# Check daemon status
uv run wyndle-loader status
# Stop daemon
uv run wyndle-loader stop
# Restart daemon with different worker count
uv run wyndle-loader restart --workers 3
# Launch MCP server for AI integration
uv run wyndle-server
# View database statistics
uv run wyndle-pipeline --database-stats
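Because the store is a plain DuckDB file, you can also inspect it directly with the duckdb Python package. The path and table name below are assumptions for illustration; adjust them to whatever your pipeline actually writes:

```python
import duckdb

# Database path is an assumption - point this at your configured DuckDB file.
con = duckdb.connect("data/wyndle.duckdb", read_only=True)

# List the tables the pipeline has created so far.
print(con.execute("SHOW TABLES").fetchall())

# Row count for a hypothetical messages table.
print(con.execute("SELECT count(*) FROM messages").fetchone())
```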
🎯 Raycast Integration
Setup Instructions
- Install MCP Extension in Raycast
  - Open Raycast
  - Go to Extensions → Store
  - Search for "MCP" and install the MCP extension
- Install MCP Server
  - Complete the installation steps above first
  - Make sure you have your .env file configured with API keys
- Add Wyndle to Raycast
  - Open Raycast MCP settings
  - Click "Add MCP Server"
  - Command: <your-path-to-repo>/scripts/run-wyndle.sh
  - Name: Wyndle
  - Save the configuration

Usage
Once configured, you can interact with Wyndle directly in Raycast:
@Wyndle please look at my squad channel, make a list of action points for me and add them as to-dos with @todoist
@Wyndle anything to follow up on today?
@Wyndle summarize my latest interaction with Emil B
@Wyndle show me what's happening in the data team channel this week
Wyndle will provide intelligent, context-aware responses and can integrate with other Raycast extensions like Todoist for task management.
💬 AI Integration Examples
With Raycast
"What needs my attention?"
→ 🔥 **Urgent**: Sarah needs feedback on design mockups
→ 📅 **This Week**: Confirm availability for bi-weekly data meetings
"Summarize my interaction with John"
→ ✅ **Project complete** - John thanked you for the pipeline work and signed off
With OpenAI/Claude
The MCP server provides these intelligent tools:
- content_get_user_interactions() - Relationship summaries
- productivity_list_followups() - Action items and priorities
- content_get_channel_activity() - Team/project intelligence
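These tools are exposed through FastMCP. The sketch below shows the general registration shape, assuming the standalone fastmcp package, with a placeholder body rather than Wyndle's real query logic:

```python
from fastmcp import FastMCP

mcp = FastMCP("Wyndle")


@mcp.tool()
def productivity_list_followups() -> list[dict]:
    """Return open action items, most urgent first (placeholder implementation)."""
    # A real implementation would query the DuckDB store for unanswered
    # mentions, requests, and threads still waiting on a reply.
    return [
        {"channel": "general", "item": "Reply to Sarah about the design mockups", "urgency": "high"},
    ]


if __name__ == "__main__":
    mcp.run()
```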
🏗️ Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Slack API │───▶│ DuckDB │───▶│ MCP Server │
│ │ │ (Columnar) │ │ (FastMCP) │
│ • Rate Limited │ │ • Human Names │ │ • AI Assistant │
│ • Continuous │ │ • Sub-ms Query │ │ • Smart Filter │
│ • Smart Sync │ │ • 70% Smaller │ │ • Raycast Ready │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Key Components
- src/ingest/ - Continuous data pipeline with rate limiting
- src/data/ - DuckDB storage layer with name resolution
- src/server.py - Personal assistant MCP server
- src/cli.py - Command-line interface
- src/slack_client/ - Slack API integration
- src/analysis/ - Writing style and organizational analysis
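The name-resolution idea in src/data/ amounts to joining message rows keyed by Slack user IDs against a user table holding display names. The query below illustrates that with an assumed schema; the table and column names are invented for the example:

```python
import duckdb

con = duckdb.connect("data/wyndle.duckdb", read_only=True)

# Assumed schema: messages(user_id, text, ts) and users(user_id, real_name).
rows = con.execute(
    """
    SELECT u.real_name, m.text
    FROM messages AS m
    JOIN users AS u USING (user_id)
    ORDER BY m.ts DESC
    LIMIT 5
    """
).fetchall()

for name, text in rows:
    print(f"{name}: {text}")
```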
🧪 Development
Running Tests
# Run minimal test suite
uv run pytest tests/
# Type checking
uv run mypy src/
# Code quality
uv run ruff check .
uv run ruff format .
🤝 Contributing
We welcome contributions! Please see our contributing guidelines for details.
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes with tests
- Run the test suite (uv run pytest)
- Submit a pull request
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- Built with FastMCP for Model Context Protocol integration
- Powered by DuckDB for high-performance analytics
- Inspired by the need for intelligent Slack data management
📊 Performance
- Query Speed: Sub-millisecond response times with DuckDB columnar storage
- Memory Usage: ~300MB constant footprint with efficient compression
- Storage: 70% smaller than equivalent SQLite databases
- Rate Limiting: Respects Slack's API limits with intelligent backoff
Made with ❤️ for productive teams everywhere