SolitaProject-MCP-Server
An MCP (Model Context Protocol) server that provides tools for AI assistants to interact with the backend of SolitaProject. Built to experiment with creating MCP servers and to enable semi-automated QA testing of funding application evaluation systems using real-world data.
What This Does
This server exposes three tools that an AI assistant/agent (like Claude) can use:
| Tool | Purpose |
|---|---|
| `get_company_by_business_id` | Look up a company name by its Finnish Business ID (Y-tunnus). Not strictly needed for the current workflow(s) |
| `autocomplete_company_name` | Search for companies by partial name; returns matching companies with their business IDs. Essential for turning company names found in press releases or other sources into official Business IDs |
| `assess_project_idea_of_consortium` | Submit a project proposal for evaluation; returns detailed scoring and feedback that the agent can then analyze |
Why This Exists
Testing complex scoring systems like funding application evaluators is hard:
- Manual test data is slow - writing realistic JSON test cases takes a long time and is tedious
- Invented data misses patterns - Made-up scenarios don't reflect real-world complexity
- Coverage is limited - A human tester brings their own blind spots
This MCP server enables a different approach:
- Take a real Business Finland press release (announcing actual funded projects)
- Let the AI extract structured data (company, funding amount, project description)
- Use the tools to look up real business IDs and financials
- Generate consortium variations with real partner companies
- Hit the assessment API and validate responses
The result: realistic test scenarios generated in minutes, not hours.
Example Workflow
User: "Analyze this press release: https://www.businessfinland.fi/en/whats-new/news/press-releases/2025/business-finland-grants-funding-to-iceye/"
AI Assistant:
- Fetches and parses the press release
- Calls autocomplete_company_name("iceye") → gets business ID
- Looks up financial data from public sources
- Calls assess_project_idea_of_consortium with structured data
- Returns the evaluation results and analysis, which you can then use however you want
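The business IDs extracted in step 2 can be sanity-checked locally before any API call. Below is a minimal validator, a sketch based on the publicly documented Y-tunnus check-digit scheme (weights 7, 9, 10, 5, 8, 4, 2, summed mod 11); the function name is illustrative, not part of this server:

```typescript
// Validate a Finnish Business ID (Y-tunnus) such as "1234567-1".
// Check-digit scheme: multiply the seven digits by weights
// 7, 9, 10, 5, 8, 4, 2, sum them, and take mod 11.
// Remainder 0 -> check digit 0; remainder 1 -> invalid; otherwise 11 - remainder.
function isValidBusinessId(id: string): boolean {
  const match = /^(\d{7})-(\d)$/.exec(id);
  if (!match) return false; // wrong overall format
  const digits = match[1].split("").map(Number);
  const weights = [7, 9, 10, 5, 8, 4, 2];
  const sum = digits.reduce((acc, d, i) => acc + d * weights[i], 0);
  const remainder = sum % 11;
  if (remainder === 1) return false; // such IDs are never issued
  const check = remainder === 0 ? 0 : 11 - remainder;
  return check === Number(match[2]);
}
```

Filtering out malformed IDs this early keeps bad inputs from reaching `assess_project_idea_of_consortium` at all.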
Setup
Prerequisites
- Node.js 22+ (TypeScript runs via Node's native type-stripping support; no transpile step required)
- The project assessment backend running on localhost:3000. You can get it from https://github.com/simoalanne/SolitaProject. Alternatively, set API_BASE_URL to https://solitaproject.onrender.com/ in the .env file to target the deployed instance
Installation
npm i
Usage
The server communicates via stdio, designed to be connected to an MCP-compatible client.
Configuration in Claude Desktop / VS Code
Follow each client's up-to-date instructions for connecting an MCP server. Manual setup is the only officially supported method.
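For reference, a stdio MCP server is typically registered in Claude Desktop's claude_desktop_config.json roughly as below. The server name, file path, and entry file here are placeholders, not taken from this repo; only the API_BASE_URL variable comes from the setup notes above:

```json
{
  "mcpServers": {
    "solitaproject": {
      "command": "node",
      "args": ["/absolute/path/to/SolitaProject-MCP-Server/<entry-file>"],
      "env": {
        "API_BASE_URL": "http://localhost:3000"
      }
    }
  }
}
```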
Tested personally using VS Code + Claude Opus 4.5
Tools Schema
```ts
{
  generalDescription: string,          // 20-400 chars, project overview
  consortium: [
    {
      businessId: string,              // Finnish Y-tunnus format (1234567-8)
      budget: number,                  // Company's total project budget (€)
      requestedFunding: number,        // Requested from Business Finland (€)
      isStartupOrRDDriven: boolean,    // Affects financial risk evaluation
      projectRoleDescription: string,  // 20-200 chars, company's role
      displayName?: string,            // Optional, for readable output
      financialData?: {                // Optional, last 5 years
        revenues: number[],
        profits: number[]
      }
    }
  ]
}
```
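For local sanity checks before a tool call, the schema can be mirrored in TypeScript together with a guard for the documented length limits. This is a sketch: the interface names and the `validatePayload` helper are illustrative, not part of the server's API.

```typescript
// TypeScript mirror of the assessment payload schema, plus a guard
// that enforces the documented character limits before submission.
// Names here are illustrative, not the server's own API.
interface ConsortiumMember {
  businessId: string;              // Finnish Y-tunnus, e.g. "1234567-8"
  budget: number;                  // company's total project budget (EUR)
  requestedFunding: number;        // requested from Business Finland (EUR)
  isStartupOrRDDriven: boolean;    // affects financial risk evaluation
  projectRoleDescription: string;  // 20-200 chars
  displayName?: string;            // optional, for readable output
  financialData?: { revenues: number[]; profits: number[] }; // last 5 years
}

interface AssessmentPayload {
  generalDescription: string;      // 20-400 chars
  consortium: ConsortiumMember[];
}

// Returns a list of validation errors; an empty list means the
// payload respects the documented length constraints.
function validatePayload(payload: AssessmentPayload): string[] {
  const errors: string[] = [];
  const desc = payload.generalDescription;
  if (desc.length < 20 || desc.length > 400) {
    errors.push("generalDescription must be 20-400 characters");
  }
  for (const member of payload.consortium) {
    const role = member.projectRoleDescription;
    if (role.length < 20 || role.length > 200) {
      errors.push(`projectRoleDescription out of range for ${member.businessId}`);
    }
  }
  return errors;
}
```

Running such a check on the agent side gives clearer feedback than waiting for the backend to reject an out-of-range field.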
Use Cases
- QA Testing: Generate diverse test scenarios from real funding announcements
- Consistency Checking: Verify similar inputs produce similar scores
- Edge Case Discovery: Test unusual consortium compositions
- Rule Validation: Confirm scoring rules fire correctly
Limitations
- Financial data needs to be web-scraped from public sources such as Kauppalehti
- Testing consortiums is harder and may require inventing data beyond what's available in a press release or similar source
- Requires either cloning and running a separate repo locally, or, if targeting the deployed app, waiting through cold starts since it runs on a free tier