WARNING: This project was vibe coded including this readme. Take the instructions with a grain of salt and do not be fooled by the overly optimistic documentation.
JS Translation Helps Proxy
A production-ready TypeScript MCP proxy for translation-helps with multiple interfaces, built for CloudFlare Workers.
📋 Table of Contents
- Overview
- Features
- Quick Start
- Interfaces
- Interface Comparison
- Documentation
- Testing
- Deployment
- Contributing
- License
🎯 Overview
This project provides a production-ready unified proxy service that bridges translation-helps APIs with multiple interface protocols. All 5 interfaces are fully implemented and tested, with 160 of 162 tests passing (98.8%).
Upstream Service
The upstream translation-helps-mcp service is fully MCP-compliant (as of v6.6.3), providing all tools via the standard MCP protocol at https://translation-helps-mcp.pages.dev/api/mcp. This proxy uses dynamic tool discovery to stay in sync with upstream changes automatically.
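For reference, the upstream tool list can be inspected directly with a JSON-RPC tools/list request. A minimal TypeScript sketch, assuming the endpoint accepts an unauthenticated POST without a prior initialize handshake (adjust if the upstream requires a session):
// Ask the upstream MCP server for its tool schemas (Node 20+ and CloudFlare Workers both provide fetch)
const response = await fetch('https://translation-helps-mcp.pages.dev/api/mcp', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json, text/event-stream', // required by the Streamable HTTP transport
  },
  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
});
const { result } = await response.json();
// Each tool entry carries a name, description, and JSON Schema for its inputs
for (const tool of result.tools) {
  console.log(tool.name);
}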
✨ Features
- 5 Complete Interfaces - Core API, MCP HTTP, stdio, OpenAI API, LLM Helper
- 162 Tests - 98.8% passing (160/162), comprehensive coverage
- CloudFlare Workers - Serverless deployment ready
- Type-Safe - Full TypeScript with strict mode
- Flexible Filtering - Tool filtering, parameter hiding, note filtering
- Production Ready - Error handling, logging, caching
- Well Documented - Complete docs for all interfaces
🏗️ Architecture
See the architecture documentation for detailed system design and component descriptions.
📊 Project Stats
- Lines of Code: ~5,000
- Test Pass Rate: 98.8% (160 of 162 tests passing)
- Interfaces: 5 complete interfaces
- Documentation: 8 comprehensive guides
- Examples: Multiple configuration examples
Project Structure
src/
├── core/ # Interface 1: Core API
├── mcp-server/ # Interface 2: HTTP MCP
├── stdio-server/ # Interface 3: stdio MCP
├── openai-api/ # Interface 4: OpenAI-compatible API
├── llm-helper/ # Interface 5: OpenAI-compatible TypeScript client
└── shared/ # Shared utilities
tests/
├── unit/
├── integration/
└── e2e/
dist/
├── cjs/ # CommonJS build (for require())
└── esm/ # ESM build (for import)
Getting Started
Prerequisites
- Node.js >= 20.17.0
- npm or yarn
Installation
npm install
Configuration
- Copy .env.example to .env
- Fill in your API keys and configuration values
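A minimal .env sketch; OPENAI_API_KEY (used by Interfaces 4 and 5) and LOG_LEVEL appear elsewhere in this README, but treat these names as assumptions and check .env.example for the authoritative list:
OPENAI_API_KEY=sk-your-openai-key
LOG_LEVEL=info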
Development
# Build the project (creates both CJS and ESM builds)
npm run build
# Build only CJS
npm run build:cjs
# Build only ESM
npm run build:esm
# Run in development mode (stdio server)
npm run dev
# Run HTTP server in development mode (Wrangler)
npm run dev:http
# Run HTTP server in development mode (Native Node.js with debugging)
npm run dev:node
# Run tests
npm run test
# Lint code
npm run lint
Deployment
# Deploy to CloudFlare Workers
npm run deploy
Usage
Interface 1: Core API (Direct TypeScript/JavaScript)
The core API provides direct programmatic access to translation helps tools. Supports both CommonJS and ESM for maximum compatibility.
ESM (import):
import { TranslationHelpsClient } from 'js-translation-helps-proxy';
const client = new TranslationHelpsClient({
enabledTools: ['fetch_scripture', 'fetch_translation_notes'],
filterBookChapterNotes: true,
});
// Call tools using the generic callTool method
const scripture = await client.callTool('fetch_scripture', {
reference: 'John 3:16',
});
CommonJS (require):
const { TranslationHelpsClient } = require('js-translation-helps-proxy');
const client = new TranslationHelpsClient({
enabledTools: ['fetch_scripture', 'fetch_translation_notes'],
filterBookChapterNotes: true,
});
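Tool calls work exactly as in the ESM example. A sketch fetching translation notes (the reference parameter mirrors the fetch_scripture example above; the authoritative parameter names come from the upstream tool schema):
// callTool is identical under CommonJS; use .then() where top-level await is unavailable
client
  .callTool('fetch_translation_notes', { reference: 'John 3:16' })
  .then((notes) => console.log(notes));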
Documentation: See the Core API documentation for the complete API reference.
Interface 2: HTTP MCP Server
Web-based MCP server using official Streamable HTTP transport, compatible with MCP Inspector and standard MCP clients.
Start Server:
# Development (Wrangler - CloudFlare Workers local runtime)
npm run dev:http
# Development (Native Node.js - better for debugging)
npm run dev:node
# Production (CloudFlare Workers)
npm run deploy
Endpoint:
/mcp - Official MCP Streamable HTTP endpoint (POST + GET + DELETE)
Example:
# Initialize session
curl -X POST http://localhost:8787/mcp \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {"name": "test-client", "version": "1.0.0"}
}
}' -i
# List tools (use Mcp-Session-Id from initialize response)
curl -X POST http://localhost:8787/mcp \
-H "Content-Type: application/json" \
-H "Mcp-Session-Id: <session-id>" \
-d '{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}'
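A tool can then be invoked with tools/call on the same session. A minimal TypeScript sketch (the fetch_scripture arguments mirror the Core API example above; the exact shape of the result payload is not shown here):
// Call a tool through the HTTP MCP endpoint, reusing the session from initialize
const sessionId = '<Mcp-Session-Id from the initialize response>';
const res = await fetch('http://localhost:8787/mcp', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json, text/event-stream',
    'Mcp-Session-Id': sessionId,
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 3,
    method: 'tools/call',
    params: { name: 'fetch_scripture', arguments: { reference: 'John 3:16' } },
  }),
});
// Depending on the server, the reply may arrive as JSON or as an SSE stream
console.log(await res.json());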
MCP Inspector:
# Test with MCP Inspector
npx @modelcontextprotocol/inspector
# Connect to: http://localhost:8787/mcp
Key Features:
- ✅ Official MCP Streamable HTTP transport
- ✅ Compatible with MCP Inspector
- ✅ Client-controlled filters (via configuration)
- ✅ Session management with SSE streaming
- ✅ CloudFlare Workers compatible
Documentation:
Interface 3: stdio MCP Interface (On-Demand Process)
On-demand process launched by MCP clients (Claude Desktop, Cline, etc.) - not a persistent server.
Key Advantages:
- ✅ No background processes - launched only when the client needs it
- ✅ Automatic lifecycle - terminates when the client disconnects
- ✅ Resource efficient - no idle processes consuming memory
- ✅ stdio transport - communicates via stdin/stdout with the parent MCP client
Unlike Interfaces 2 & 4 (persistent HTTP servers), this is a process that the MCP client spawns on-demand.
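To see the stdio transport in action outside an MCP client, the process can be spawned manually and sent newline-delimited JSON-RPC on stdin. A minimal Node.js sketch (normally Claude Desktop or Cline does this for you):
import { spawn } from 'node:child_process';

// Launch the proxy exactly as an MCP client would
const proc = spawn('npx', ['js-translation-helps-proxy'], {
  stdio: ['pipe', 'pipe', 'inherit'],
});
proc.stdout.on('data', (chunk) => process.stdout.write(chunk));

// MCP stdio transport: one JSON-RPC message per line on stdin
proc.stdin.write(
  JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'initialize',
    params: {
      protocolVersion: '2024-11-05',
      capabilities: {},
      clientInfo: { name: 'manual-test', version: '1.0.0' },
    },
  }) + '\n'
);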
Quick Start:
# Run from npm (recommended)
npx js-translation-helps-proxy --help
# Or directly from GitHub:
# npx github:JEdward7777/js-translation-helps-proxy --help
# List available tools
npx js-translation-helps-proxy --list-tools
# Launch the process (for manual testing - normally the MCP client launches it)
npx js-translation-helps-proxy
Note: In normal use, your MCP client (Claude Desktop, Cline) automatically launches this process when needed. You don't need to start or manage it manually.
Configuration Options:
# Enable specific tools only
npx js-translation-helps-proxy --enabled-tools "fetch_scripture,fetch_translation_notes"
# Hide parameters from tool schemas
npx js-translation-helps-proxy --hide-params "language,organization"
# Filter book/chapter notes
npx js-translation-helps-proxy --filter-book-chapter-notes
# Set log level
npx js-translation-helps-proxy --log-level debug
MCP Client Setup:
For Claude Desktop, add to your config file:
{
"mcpServers": {
"translation-helps": {
"command": "npx",
"args": ["js-translation-helps-proxy"]
}
}
}
Or to use the latest GitHub version:
{
"mcpServers": {
"translation-helps": {
"command": "npx",
"args": ["github:JEdward7777/js-translation-helps-proxy"]
}
}
}
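The configuration options shown above can also be baked into the client config by adding them to the args array, for example:
{
  "mcpServers": {
    "translation-helps": {
      "command": "npx",
      "args": [
        "js-translation-helps-proxy",
        "--enabled-tools", "fetch_scripture,fetch_translation_notes",
        "--filter-book-chapter-notes"
      ]
    }
  }
}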
Key Features:
- ✅ On-demand process - no persistent server running
- ✅ Client-controlled filters - configure per MCP client
- ✅ Works with Claude Desktop, Cline, etc. - any MCP client supporting stdio
- ✅ stdio transport - standard input/output communication
- ✅ Easy npx deployment - client launches via npx when needed
Documentation:
Interface 4: OpenAI-Compatible API
REST API that proxies to OpenAI with automatic Translation Helps tool injection and baked-in filters (see Interface 5 for TypeScript equivalent).
Start Server:
# Development (Wrangler - CloudFlare Workers local runtime)
npm run dev:http
# Development (Native Node.js - better for debugging)
npm run dev:node
# Production (CloudFlare Workers)
npm run deploy
Endpoints:
- POST /v1/chat/completions - Chat completions with tool execution
- GET /v1/models - List available OpenAI models (proxied)
- GET /v1/tools - List available tools
- GET /health - Health check
Example with OpenAI Client:
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8787/v1",
api_key="sk-YOUR-OPENAI-KEY" # Your actual OpenAI API key
)
response = client.chat.completions.create(
model="gpt-4o-mini", # Use any OpenAI model
messages=[
{"role": "user", "content": "Fetch scripture for John 3:16"}
]
)
print(response.choices[0].message.content)
Example with curl:
curl -X POST http://localhost:8787/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-YOUR-OPENAI-KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{"role": "user", "content": "Fetch John 3:16"}
]
}'
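The same endpoint also works from TypeScript with the official openai package by pointing baseURL at the proxy (sketch assumes the proxy is running locally as above and that the openai package is installed):
import OpenAI from 'openai';

// Point the official OpenAI client at the proxy; Translation Helps tools are injected server-side
const client = new OpenAI({
  baseURL: 'http://localhost:8787/v1',
  apiKey: process.env.OPENAI_API_KEY!, // your real OpenAI key, forwarded by the proxy
});

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Fetch scripture for John 3:16' }],
});
console.log(completion.choices[0].message.content);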
Key Features:
- ✅ Proxies to OpenAI: Uses real OpenAI models and API
- ✅ Automatic tool injection: Translation Helps tools added automatically
- ✅ Baked-in filters: language=en, organization=unfoldingWord
- ✅ Iterative tool execution: Handles tool calling loops
- ✅ Supports n > 1 and structured outputs
- ✅ CloudFlare Workers compatible
Documentation:
Interface 5: OpenAI-Compatible TypeScript Client
Drop-in replacement for OpenAI client as a TypeScript class with Translation Helps tools automatically integrated. Unlike Interface 4 (HTTP/REST API), this is a direct TypeScript client with no network serialization overhead. Both interfaces share the same OpenAI integration logic (see comparison table). Supports both CommonJS and ESM for maximum compatibility.
Quick Start (ESM):
import { LLMHelper } from 'js-translation-helps-proxy/llm-helper';
// Drop-in replacement for OpenAI client
const helper = new LLMHelper({
apiKey: process.env.OPENAI_API_KEY!,
});
// Use the same API as OpenAI
const response = await helper.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'What does John 3:16 say?' }],
n: 2 // Generate 2 completions
});
// Returns full OpenAI ChatCompletion response
console.log(response.choices[0].message.content);
console.log(response.choices[1].message.content); // When n > 1
Interchangeability with OpenAI:
import { LLMHelper } from 'js-translation-helps-proxy/llm-helper';
import OpenAI from 'openai';
// Can use either client with the same code!
const client: OpenAI | LLMHelper = useTranslationHelps
? new LLMHelper({ apiKey })
: new OpenAI({ apiKey });
// Same API works for both
const response = await client.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Hello!' }]
});
Quick Start (CommonJS):
const { LLMHelper } = require('js-translation-helps-proxy/llm-helper');
const helper = new LLMHelper({
apiKey: process.env.OPENAI_API_KEY,
});
Key Features:
- ✅ Drop-in OpenAI replacement: Implements the OpenAI.chat.completions.create() interface
- ✅ Full response compatibility: Returns complete OpenAI ChatCompletion objects
- ✅ Shares logic with Interface 4: Same OpenAI SDK integration
- ✅ Supports all OpenAI parameters: Including n > 1, temperature, response_format (see the sketch below)
- ✅ Fixes n > 1 bug: All choices preserved in the response
- ✅ Automatic tool execution: Translation Helps tools work automatically
- ✅ Baked-in filters: language=en, organization=unfoldingWord
- ✅ Type-safe: Full TypeScript support
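Because parameters pass through unchanged, structured output requests work the same as with the plain OpenAI client. A minimal JSON-mode sketch, reusing the helper instance from the Quick Start above (JSON mode requires mentioning JSON in the prompt; whether a given model supports response_format is up to OpenAI):
// JSON mode is forwarded to OpenAI as-is
const jsonResponse = await helper.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Return the text of John 3:16 as JSON with "reference" and "text" fields.' },
  ],
  response_format: { type: 'json_object' },
});
console.log(jsonResponse.choices[0].message.content);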
Documentation:
Interface Comparison
| Feature | Interface 1 (Core) | Interface 2 (MCP HTTP) | Interface 3 (stdio) | Interface 4 (OpenAI REST API) | Interface 5 (OpenAI TypeScript Client) |
|---|---|---|---|---|---|
| Transport | Direct API | HTTP | stdio | HTTP/REST | TypeScript API |
| Backend | Direct | Direct | Direct | Proxies to OpenAI | Proxies to OpenAI |
| Network | N/A | Required | N/A | Required | Not required |
| API Key | Not required | Not required | Not required | Required (OpenAI) | Required (OpenAI) |
| Models | N/A | N/A | N/A | Any OpenAI model | Any OpenAI model |
| Filters | Configurable | Client-controlled | Client-controlled | Baked-in | Baked-in |
| Use Case | TypeScript apps | Web services | Desktop apps | LLM integrations (HTTP) | LLM integrations (TypeScript) |
| Deployment | Library | CloudFlare Workers | On-demand process | CloudFlare Workers | Library |
| Tool Execution | Manual | Manual | Manual | Automatic | Automatic |
| Lifecycle | N/A | Persistent server | Launched on-demand | Persistent server | N/A |
Choose Interface 2 or 3 when you need client-controlled filters. Choose Interface 3 specifically when you want no background processes (it is launched on demand). Choose Interface 4 or 5 when you need OpenAI integration with automatic tool execution.
Quick Start Guide
For Desktop Apps (Claude Desktop, Cline)
Use Interface 3 (stdio):
# Run from npm (recommended)
npx js-translation-helps-proxy
# Or directly from GitHub for latest development version:
# npx github:JEdward7777/js-translation-helps-proxy
For Web Services / APIs
Use Interface 2 (MCP HTTP):
# Using Wrangler (CloudFlare Workers runtime)
npm run dev:http
# Access at http://localhost:8787/mcp/*
# Using Native Node.js (better for debugging)
npm run dev:node
# Access at http://localhost:8787/mcp/*
For LLM Integrations (OpenAI-compatible)
Use Interface 4 (OpenAI API):
# Using Wrangler (CloudFlare Workers runtime)
npm run dev:http
# Access at http://localhost:8787/v1/*
# Using Native Node.js (better for debugging)
npm run dev:node
# Access at http://localhost:8787/v1/*
For TypeScript/JavaScript Projects
Use Interface 1 (Core API) - supports both ESM and CommonJS:
ESM:
import { TranslationHelpsClient } from 'js-translation-helps-proxy';
CommonJS:
const { TranslationHelpsClient } = require('js-translation-helps-proxy');
For LLM Integration in TypeScript/JavaScript
Use Interface 5 (LLM Helper) - supports both ESM and CommonJS:
ESM:
import { LLMHelper } from 'js-translation-helps-proxy/llm-helper';
const helper = new LLMHelper({
apiKey: process.env.OPENAI_API_KEY!,
});
const response = await helper.chat.completions.create({
model: 'gpt-4o-mini',
messages: [{ role: 'user', content: 'Fetch John 3:16' }]
});
CommonJS:
const { LLMHelper } = require('js-translation-helps-proxy/llm-helper');
📚 Documentation
- Complete documentation hub
- System architecture
- Test coverage and strategy
- CloudFlare Workers deployment
- How to contribute
Interface Documentation
- Interface 2 documentation
- Interface 3 documentation
- Interface 4 documentation
- Interface 5 documentation
🧪 Testing
The project has comprehensive test coverage:
# Run all tests
npm test
# Run specific test suites
npm run test:unit # 65 unit tests
npm run test:integration # 80 integration tests
npm run test:e2e # 8 E2E tests
Test Results:
- ✅ 160 tests passing (98.8%)
- ⏭️ 2 tests skipped (require API keys)
See the testing documentation for detailed test results and strategy.
🚀 Deployment
CloudFlare Workers
# Build and deploy
npm run build
npm run deploy
See the deployment documentation for the complete deployment guide.
Local Development
# Start HTTP server (Wrangler - CloudFlare Workers runtime)
npm run dev:http
# Start HTTP server (Native Node.js - better for debugging)
npm run dev:node
# Start stdio server
npm run dev
VSCode Debugging
The project includes VSCode launch configurations for debugging:
- Debug HTTP Server - Native Node.js (Interfaces 2 & 4) - Debug MCP HTTP and OpenAI API servers
- Debug HTTP Server - Built (Interfaces 2 & 4) - Debug compiled HTTP servers
- Debug stdio Server (Interface 3) - Debug stdio MCP server (uses stdin/stdout, not HTTP)
- Debug Current Test File - Debug the currently open test file
To use:
- Open the file you want to debug
- Set breakpoints by clicking in the gutter
- Press F5 or go to Run > Start Debugging
- Select the appropriate debug configuration
Important:
- Interface 3 (stdio) is an on-demand process launched by the MCP client, communicating via stdin/stdout (NOT HTTP/REST)
- Interfaces 2 & 4 are persistent HTTP/REST servers accessible at http://localhost:8787
- Interface 3 has no background process - it's launched when needed and terminates when done
- The server starts with LOG_LEVEL=debug for detailed logging
🤝 Contributing
We welcome contributions! Please see the contributing guide for guidelines.
Quick Start for Contributors
- Fork and clone the repository
- Install dependencies: npm install
- Create a feature branch
- Make your changes with tests
- Run checks: npm run lint && npm test
- Submit a pull request
📄 License
MIT - See the license file for details.
🙏 Acknowledgments
- Translation Helps MCP - Fully MCP-compliant upstream server (v6.6.3+)
- Model Context Protocol - MCP specification
- CloudFlare Workers - Serverless platform
- All Contributors - Thank you!
📞 Support
- Documentation:
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Version: 0.2.0 | Last Updated: 2025-11-23 | Status: Production Ready ✅
Dynamic Tool Discovery
This proxy uses dynamic tool discovery from the upstream MCP server. Tool schemas are fetched at runtime, ensuring we're always in sync with the upstream service. No manual updates needed when upstream adds/removes tools!