robopol/mcp-reasoning-booster
The Reasoning Booster MCP Server is a domain-agnostic pipeline designed to enhance reasoning tasks by iteratively generating, scoring, and applying candidate micro-steps.
Reasoning Booster MCP (concise, production-ready)
Reasoning Booster is an MCP server that implements a universal "reasoning booster" pipeline: it iteratively generates candidate micro‑steps (via LLM sampling — MCP or direct HTTP; if no keys are present, it uses diversified heuristics), scores them with a verifier, applies the best step, and finally returns a concise summary. The booster is an idea generator; the arbiter (your AI/agent) composes the final answer from the produced steps.
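A minimal sketch of that loop in TypeScript, for orientation only; the helper names below are hypothetical stand-ins, not the server's actual internals:

```typescript
// Illustrative sketch of the booster loop; helpers are hypothetical stand-ins.
type Step = { text: string; rationale: string; how_to_verify: string };

declare function generateCandidates(task: string, history: Step[]): Promise<Step[]>; // LLM / MCP / heuristics
declare function scoreCandidate(candidate: Step, history: Step[]): number;           // verifier score
declare function summarize(history: Step[]): string;                                 // concise summary

async function boosterLoop(task: string, iterations: number): Promise<string> {
  const history: Step[] = [];
  for (let i = 0; i < iterations; i++) {
    // 1) Generate candidate micro-steps.
    const candidates = await generateCandidates(task, history);
    // 2) Score them (brevity/concreteness, information gain, novelty, consistency).
    const scored = candidates
      .map((c) => ({ step: c, score: scoreCandidate(c, history) }))
      .sort((a, b) => b.score - a.score);
    // 3) Apply the best step.
    if (scored.length > 0) history.push(scored[0].step);
  }
  // 4) Return a concise summary; the arbiter (your agent) composes the final answer.
  return summarize(history);
}
```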
What it provides:
- Micro‑steps (≤ 200 chars) with `rationale` and `how_to_verify`; optional `verification.outcomes` (VoI/IG) (see the shape sketch after this list).
- Scoring: brevity/concreteness, information gain (entropy), novelty vs. history, consistency; a short VoI‑aware beam.
- Diagnostics: `provider`, `lastModel`, `rawSamples` — audit of whether LLM/MCP/heuristics were used.
- Robust parsing: JSON extraction, `<think>…</think>` stripping, heuristic fallback when raw output is weak.
- Session tools (`start`, `step`, `multi-step`, `summarize`, `solve`); the primary result is always in `content[0].text` (JSON).
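The shape of one candidate micro-step, as described above, is roughly the following; the exact field set is an assumption based on this README, not a published schema:

```typescript
// Approximate shape of one candidate micro-step (assumed from this README).
interface CandidateStep {
  text: string;            // the micro-step itself (≤ 200 chars)
  rationale: string;       // why this step helps
  how_to_verify: string;   // a concrete check the arbiter can run
  verification?: {
    outcomes?: string[];   // optional VoI/IG outcome branches
  };
}
```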
Install
cd mcp-reasoning-booster
npm install && npm run build
MCP client configuration (mcp.json)
Add the server under your MCP-enabled client config. The exact file location depends on your client (see its docs). A portable shape:
{
"mcpServers": {
"reasoning-booster": {
"command": "node",
"args": ["./dist/index.js"],
"cwd": "./mcp-reasoning-booster",
"transport": "stdio"
}
}
}
Notes:
- Prefer an absolute or workspace-relative `cwd` (the example assumes the repo root contains `mcp-reasoning-booster/`).
- API keys are read from `mcp-reasoning-booster/secrets.local.txt` or `secrets.txt` (preferred). You do not need to place keys in `mcp.json`.
- Optional fallback: the environment variables `OPENAI_API_KEY`, `OPENAI_MODEL`, `CEREBRAS_API_KEY`, `CEREBRAS_MODEL`, `OPENAI_BASE_URL`, and `CEREBRAS_BASE_URL` are supported if present.
Tooling contract (parse-first)
- Primary output is always JSON in `content[0].text`.
- Tools and inputs:
  - `solve`: `{ task, iterations?, config?, seedHints?, outputPath?, outputFormat? }` → returns JSON `{ sessionId, summary, steps, hints, config, diagnostics, arbiterPicks, lastRawResponse }` (always includes `arbiterPicks` and `lastRawResponse`)
  - `start`: `{ task, config?, seedHints? }` → `{ sessionId, state, config }`
  - `step`: `{ sessionId, overrideNumCandidates?, addHints? }` → `{ chosen, candidates, state }`
  - `multi-step`: `{ sessionId, iterations, overrideNumCandidates?, addHints? }` → `{ state }`
  - `get-state`: `{ sessionId }` → full session object (JSON)
  - `summarize`: `{ sessionId }` → `content[0].text` is JSON `{ sessionId, summary }`, `content[1].text` is human‑readable text
  - `solve-text`: `{ task, iterations?, config?, seedHints?, outputPath? }` → primary output is a plain-text summary
Client rule: always JSON.parse the first text content; additional text blocks are only for convenience.
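A hedged client-side sketch of this parse-first contract, using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the client name, dist path, and arguments below are placeholders to adapt to your setup:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function runSolve(task: string) {
  // Spawn the server over stdio (path is a placeholder).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["./mcp-reasoning-booster/dist/index.js"],
  });
  const client = new Client({ name: "booster-client", version: "1.0.0" });
  await client.connect(transport);

  const result = await client.callTool({
    name: "solve",
    arguments: { task, iterations: 8, config: { useSampling: true, numCandidates: 5 } },
  });

  // Parse-first rule: the primary payload is JSON in content[0].text.
  const first = (result.content as Array<{ type: string; text?: string }>)[0];
  const payload = JSON.parse(first?.text ?? "{}");
  console.log(payload.summary);

  await client.close();
}

runSolve("your task").catch(console.error);
```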
Quickstart (one-shot)
Call `solve` once and parse the JSON from `content[0].text`:
{ "name": "solve", "arguments": { "task": "your task", "iterations": 8, "config": { "useSampling": true, "numCandidates": 5 } } }
If you pass `outputPath`, the same payload is also written to disk; otherwise no files are written.
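For example, the same call also writing the payload to a file (the path below is illustrative):

{ "name": "solve", "arguments": { "task": "your task", "iterations": 8, "outputPath": "./booster-result.json" } }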
Plain text (no JSON parsing) via `solve-text`:
{ "name": "solve-text", "arguments": { "task": "your task", "iterations": 8, "config": { "useSampling": true, "numCandidates": 5 } } }
Sampling priority
- Direct HTTP (Cerebras/OpenAI) if API keys are present
- MCP sampling (client exposes `sampling`)
- Heuristic fallback (no LLM)
Notes:
- Cerebras requires BOTH `CEREBRAS_API_KEY` and `CEREBRAS_MODEL`; there is no hardcoded model fallback.
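The priority above could be resolved roughly as in this sketch; it is illustrative only, and the helper is hypothetical rather than the server's code:

```typescript
// Illustrative resolution of the sampling priority (hypothetical helper).
type SamplingMode = "direct-http" | "mcp-sampling" | "heuristic";

function pickSamplingMode(env: NodeJS.ProcessEnv, clientExposesSampling: boolean): SamplingMode {
  const hasCerebras = Boolean(env.CEREBRAS_API_KEY && env.CEREBRAS_MODEL); // both required
  const hasOpenAI = Boolean(env.OPENAI_API_KEY);
  if (hasCerebras || hasOpenAI) return "direct-http"; // 1) direct HTTP when keys are present
  if (clientExposesSampling) return "mcp-sampling";   // 2) MCP sampling if the client exposes it
  return "heuristic";                                 // 3) diversified heuristics otherwise
}
```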
Recommended settings
- Strong models: `numCandidates: 7–9`, `samplingMaxTokens: 2000–4000`, `beamWidth: 2`, `beamDepth: 2`, `llmMaxCalls: 12–24`, `voiAlpha: 0.5–0.8` (example call after this list)
- Control budget: cap `llmMaxCalls`; if stagnating, increase `iterations` or `numCandidates`
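For example, a `solve` call using mid-range values from the recommendations above (the specific numbers are illustrative):

{ "name": "solve", "arguments": { "task": "your task", "iterations": 8, "config": { "useSampling": true, "numCandidates": 8, "samplingMaxTokens": 3000, "beamWidth": 2, "beamDepth": 2, "llmMaxCalls": 16, "voiAlpha": 0.6 } } }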
Demo
npx --yes tsx tests/demo_sampling.ts --task "Plan a 3-step experiment to test if X causes Y under constraint Z."
Secrets file format
Create mcp-reasoning-booster/secrets.local.txt:
CEREBRAS_API_KEY=...
CEREBRAS_MODEL=qwen-3-235b-a22b-thinking-2507
CEREBRAS_BASE_URL=https://api.cerebras.ai/v1
# Optional OpenAI
OPENAI_API_KEY=...
OPENAI_MODEL=gpt-4o-mini
OPENAI_BASE_URL=https://api.openai.com/v1
Run from shell (OS-specific)
- Windows PowerShell 5.x (production, real LLM):
Set-Location mcp-reasoning-booster; npm run build; npx --yes tsx tests\demo_sampling.ts --task "Your task here"
- PowerShell 7+ (pwsh) or cmd.exe (production, real LLM):
cd mcp-reasoning-booster && npm run build && npx --yes tsx tests\demo_sampling.ts --task "Your task here"
- Bash (macOS/Linux) (production, real LLM):
cd mcp-reasoning-booster && npm run build && npx tsx tests/demo_sampling.ts --task 'Your task here'
- Windows PowerShell 5.x (mock, offline test only):
Set-Location mcp-reasoning-booster; npm run build; npx --yes tsx tests\demo_sampling.ts --sampling=mock --task "Plan a tiny A/B test to choose CTA color with a measurable metric"
- PowerShell 7+ (pwsh) or cmd.exe (mock):
cd mcp-reasoning-booster && npm run build && npx --yes tsx tests\demo_sampling.ts --sampling=mock --task "Plan a tiny A/B test to choose CTA color with a measurable metric"
- Bash (macOS/Linux) (mock):
cd mcp-reasoning-booster && npm run build && npx tsx tests/demo_sampling.ts --sampling=mock --task 'Plan a tiny A/B test to choose CTA color with a measurable metric'
Usage tool (for AI clients)
- First call `usage` to get a copy/paste how‑to, an `mcp.json` template, OS shell commands, and examples.
- `usage` returns its PRIMARY payload as JSON in `content[0].text`.
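A minimal call (assuming `usage` takes no required arguments):

{ "name": "usage", "arguments": {} }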