DOCQNA MCP Server & CLI

DOCQNA packages a Model Context Protocol (MCP) server together with a modern prompt-toolkit CLI so research teams can move between unstructured document analysis, structured-data interrogation, and downstream content drafting in a single workflow.

Highlights

  • FastMCP server exposing the docs analyser, data interpreter, content writer, and research agent tools.
  • Modern TUI (poetry run docqna) with mode tabs, slash commands, streaming toggle, and nicely formatted output.
  • Unified source adapters translate incoming file/database specs so both structured and unstructured tools stay in sync.
  • Temp workspace (temp/) captures intermediate files and generated drafts for handoff between tools.

Requirements

  • Python 3.12+
  • Poetry 1.8+
  • Access keys for the LLM provider and any backing stores you plan to query (OpenAI, AWS S3, MySQL/Snowflake, etc.).

Setup

poetry env use 3.12           # optional but keeps versions aligned
poetry install
cp .env.example .env          # then fill in the secrets listed below

Populate .env with at least the following (a sample file follows this list):

  • OPENAI_API_KEY (or alternative provider keys such as GOOGLE_API_KEY, ANTHROPIC_API_KEY).
  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_BUCKET_NAME for unstructured summaries stored in S3.
  • DB_URI / DB_NAME (or tool-specific overrides) for structured workflows.
  • Optional flags:
    • DOCQNA_MODERN_TUI=false to fall back to the legacy prompt loop.
    • DOCQNA_ENABLE_TOOL_BINDING=true to register MCP tool bindings when clients connect.
    • MCP_SERVER_URL if the CLI should target a non-default server.
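
A minimal .env sketch using the variables above could look like the following; every value is a placeholder, and the exact DB_URI scheme depends on the backend you connect to:

# LLM provider (alternatively GOOGLE_API_KEY or ANTHROPIC_API_KEY)
OPENAI_API_KEY=sk-...

# S3 storage for unstructured summaries
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_BUCKET_NAME=my-docqna-bucket

# Structured-data backend (placeholder URI; match your own driver and host)
DB_URI=mysql://user:password@localhost:3306
DB_NAME=analytics

# Optional behaviour flags
DOCQNA_MODERN_TUI=true
DOCQNA_ENABLE_TOOL_BINDING=false
MCP_SERVER_URL=http://127.0.0.1:8000/mcp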

Run the stack

# 1. Start the MCP server (HTTP streaming transport on :8000 by default)
poetry run python -m src.server

# 2. In a second shell, launch the CLI front-end
poetry run docqna

The CLI expects the server at http://127.0.0.1:8000/mcp. Set MCP_SERVER_URL if you run the server behind a proxy or deploy it elsewhere.
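
For example, to point the CLI at a proxied or remote deployment, you can set the variable in .env or export it in the launching shell (the hostname below is a placeholder):

export MCP_SERVER_URL=https://mcp.example.internal/mcp
poetry run docqna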

CLI modes & commands

  • Tabs / slash commands: agent, docs, content, interpret map to the research agent, docs analyser, content writer, and data interpreter tools. Tabs can be cycled with Tab.
  • Streaming toggle: /stream (or the status chip in the footer) switches incremental updates on/off.
  • Workspace shortcuts: /list, /open, /delete operate on the temp/ folder so you can inspect generated artifacts quickly (see the example after this list).
  • Status banner: the DOCQNA header shows the active server, streaming state, and currently loaded MCP tools.
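
Assuming the mode names above double as slash commands, a short session might look like this (the text after # is an annotation, not part of the command):

/docs      # switch to the docs analyser tab
/stream    # toggle streaming output
/list      # show artifacts in temp/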

Repository layout

  • src/ – MCP tool implementations (docs analyser, data interpreter, content writer, researcher agent, shared adapters).
  • cli/ – Rich/prompt-toolkit front ends (modern_tui.py, legacy tui.py, entrypoint docqna.py).
  • temp/ – Runtime scratch space for tool outputs (safe to clear).
  • test_data/ – Example inputs for local experiments.
  • start_server.sh – Convenience launcher for the MCP server with sane defaults.

Development tips

  • Keep temp/ tidy during long sessions to avoid stale references.
  • The server runs with fastmcp.FastMCP(stateless_http=True); restart it after modifying tool code to pick up changes.
  • When integrating with other MCP clients, set DOCQNA_ENABLE_TOOL_BINDING=true so the server advertises binding metadata on connect.
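
For example, a quick edit-and-restart cycle with tool binding advertised might look like this; it assumes the flag is also honoured when set inline in the shell rather than only via .env:

# stop the running server (Ctrl+C), then relaunch with binding metadata enabled
DOCQNA_ENABLE_TOOL_BINDING=true poetry run python -m src.server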