mrfelixwong/spotify_llm

Spotify LLM Agent with MCP Compliance

A fully MCP (Model Context Protocol) compliant music assistant that combines OpenAI's language models with Spotify's API through a proper client-server architecture.

Features

  • Full MCP Compliance: Implements the MCP specification over JSON-RPC 2.0
  • Natural Language Interface: Converts natural-language requests into Spotify API calls
  • Multiple Tools: Search artists, get top tracks, access playlists, and more
  • Backward Compatibility: The legacy endpoint still works for existing clients
  • Error Handling: Proper error responses and graceful failure handling

MCP Tools Available

| Tool | Description | Parameters |
|------|-------------|------------|
| search_artist | Search for an artist by name | artist (string) |
| get_artist_top_tracks | Get top tracks for an artist | artist_id (string) |
| search_artist_and_get_top_tracks | Search for an artist and get their top tracks | artist (string) |
| get_user_top_tracks | Get the user's personal top tracks | limit (integer, optional) |
| get_playlist_tracks | Get tracks from a playlist | playlist_id (string) |

Architecture

MCP Server (mcp_server.py)

  • FastAPI-based server implementing MCP specification
  • JSON-RPC 2.0 protocol compliance
  • Tool definitions with proper schemas
  • Error handling with standard MCP error codes
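
The server's core dispatch can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's actual mcp_server.py: the FastAPI routing layer is omitted, and the registry contents and handler wiring are hypothetical.

```python
from typing import Any, Dict

# Stand-in tool registry (the real server defines MCP_TOOLS with full schemas)
MCP_TOOLS = [{
    "name": "search_artist",
    "description": "Search for an artist by name",
    "inputSchema": {
        "type": "object",
        "properties": {"artist": {"type": "string"}},
        "required": ["artist"],
    },
}]

def handle_rpc(request: Dict[str, Any]) -> Dict[str, Any]:
    """Dispatch one JSON-RPC 2.0 request to the matching method handler."""
    methods = {
        "initialize": lambda params: {"serverInfo": {"name": "spotify-mcp"}},
        "tools/list": lambda params: {"tools": MCP_TOOLS},
    }
    handler = methods.get(request.get("method", ""))
    if handler is None:
        # Standard JSON-RPC 2.0 error code for an unknown method
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "result": handler(request.get("params", {}))}
```

In the real server, a single FastAPI POST route would decode the request body and hand it to a dispatcher of this shape.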

MCP Client (spotify_llm_agent.py)

  • OpenAI integration for natural language processing
  • MCP client class for proper protocol communication
  • Fallback support for legacy endpoint
  • Rich output formatting for user-friendly display

Quick Start

1. Setup Environment

# Install dependencies
pip install openai spotipy fastapi uvicorn requests python-dotenv

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys

2. Start the MCP Server

uvicorn mcp_server:app --reload

The server will start on http://localhost:8000.

3. Run the Client

python spotify_llm_agent.py

4. Test the MCP Implementation

python test_mcp.py

Usage Examples

Natural Language Commands

> Find top tracks by Taylor Swift
> What are my most listened songs?
> Search for artist Drake
> Show me tracks in playlist 37i9dQZF1DXcBWIGoYBM5M

Direct MCP Tool Calls

# Assuming MCPClient is exported by spotify_llm_agent.py
from spotify_llm_agent import MCPClient

# Initialize MCP connection
mcp_client = MCPClient()
mcp_client.initialize()

# List available tools
tools = mcp_client.list_tools()

# Call a tool
result = mcp_client.call_tool("search_artist", {"artist": "Taylor Swift"})
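
A client along these lines can be sketched as below. This is an illustrative reimplementation, not the actual class in spotify_llm_agent.py; the transport function is injected here so the JSON-RPC framing is visible (the real client presumably wraps requests.post against the server URL).

```python
from typing import Any, Callable, Dict, List

class MCPClient:
    """Minimal JSON-RPC 2.0 client sketch covering the MCP methods used here."""

    def __init__(self, transport: Callable[[Dict[str, Any]], Dict[str, Any]]):
        # transport: posts a JSON-RPC payload and returns the decoded response
        self._transport = transport
        self._next_id = 0

    def _request(self, method: str, params: Dict[str, Any]) -> Dict[str, Any]:
        self._next_id += 1
        payload = {"jsonrpc": "2.0", "id": self._next_id,
                   "method": method, "params": params}
        response = self._transport(payload)
        if "error" in response:
            raise RuntimeError(response["error"]["message"])
        return response["result"]

    def initialize(self) -> Dict[str, Any]:
        return self._request("initialize", {})

    def list_tools(self) -> List[Dict[str, Any]]:
        return self._request("tools/list", {})["tools"]

    def call_tool(self, name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
        return self._request("tools/call", {"name": name, "arguments": arguments})
```

Raising on the "error" member keeps protocol failures distinct from empty results, which is what lets the agent fall back to the legacy endpoint.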

MCP Protocol Implementation

JSON-RPC 2.0 Messages

Initialize Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {}
}

Tool Call Request:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_artist",
    "arguments": {"artist": "Taylor Swift"}
  }
}

Tool List Response:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "tools": [
      {
        "name": "search_artist",
        "description": "Search for an artist by name",
        "inputSchema": {...}
      }
    ]
  }
}
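
Errors come back as standard JSON-RPC 2.0 error objects rather than results. For example, a request for an unknown method might produce (illustrative values):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "error": {
    "code": -32601,
    "message": "Method not found"
  }
}
```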

Backward Compatibility

The server maintains backward compatibility with the old /mcp/invoke endpoint:

import requests

# Legacy format still works
response = requests.post("http://localhost:8000/mcp/invoke", json={
    "action": "search_artist",
    "params": {"artist": "Taylor Swift"}
})

Testing

Run the test script to verify MCP compliance:

python test_mcp.py

This will test:

  • ✅ MCP initialization
  • ✅ Tool discovery
  • ✅ Tool execution
  • ✅ Error handling
  • ✅ Legacy endpoint compatibility

MCP vs Previous Implementation

| Feature | Previous | MCP Compliant |
|---------|----------|---------------|
| Protocol | Custom HTTP | JSON-RPC 2.0 |
| Tool Discovery | None | tools/list method |
| Error Codes | Custom | Standard MCP codes |
| Tool Schemas | None | JSON Schema definitions |
| Client Integration | Simple HTTP | Full MCP client |
| Extensibility | Limited | High |

Development

Adding New Tools

  1. Define the tool schema in MCP_TOOLS
  2. Create a handler function in the server
  3. Register the handler in TOOL_HANDLERS
  4. Update the LLM prompt in the client

Example: Adding a new tool

# In mcp_server.py
from typing import Any, Dict

MCP_TOOLS.append({
    "name": "search_tracks",
    "description": "Search for tracks by name",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"}
        },
        "required": ["query"]
    }
})

def handle_search_tracks(arguments: Dict[str, Any]) -> Dict[str, Any]:
    # Implementation here
    pass

TOOL_HANDLERS["search_tracks"] = handle_search_tracks
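
One possible body for the stub handler, assuming an authenticated spotipy.Spotify client. The sp parameter and the result shape are assumptions for illustration, not the project's actual code; sp is passed in explicitly here so the logic can be exercised without network access, whereas the real server would likely use a module-level client.

```python
from typing import Any, Dict

def handle_search_tracks(arguments: Dict[str, Any], sp: Any = None) -> Dict[str, Any]:
    """Search Spotify for tracks matching arguments["query"].

    sp is assumed to be an authenticated spotipy.Spotify instance
    (e.g. built from the SPOTIFY_* environment variables).
    """
    results = sp.search(q=arguments["query"], type="track", limit=10)
    return {
        "tracks": [
            {"id": t["id"],
             "name": t["name"],
             "artist": t["artists"][0]["name"]}
            for t in results["tracks"]["items"]
        ]
    }
```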

Environment Variables

Required environment variables in .env:

SPOTIFY_CLIENT_ID=your_spotify_client_id
SPOTIFY_CLIENT_SECRET=your_spotify_client_secret
SPOTIFY_REDIRECT_URI=http://localhost:8888/callback
OPENAI_API_KEY=your_openai_api_key

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure MCP compliance
  5. Submit a pull request

License

MIT License - see LICENSE file for details.