agentic-workflow-mcp

Antarpreet/agentic-workflow-mcp

The Local Agentic Workflow MCP Server facilitates the execution of Agentic Workflows using a local LLM server via the Ollama CLI, optimized for use with VS Code.

Tools
  1. display_graph

    Generates a graph image from the workflow configuration.

  2. start_workflow

    Initiates the Agentic Workflow with a given prompt.

  3. embed_files

    Creates embeddings for files and stores them in the local vector database.

  4. visualize_embeddings

    Generates a visualization of the embeddings in the local vector database.

Local Agentic Workflow MCP Server

This MCP server allows you to run Agentic Workflows against a local LLM server via the Ollama CLI. It has been tested with VS Code.

For supported workflow example configurations, see the config_examples folder.

Prerequisites

- Python 3.8 or higher
- pip (for installing Python packages)
- Ollama CLI (for local LLMs)

Installation

  1. Clone the repository:

    git clone https://github.com/Antarpreet/agentic-workflow-mcp.git
    
  2. Start an LLM server using the ollama CLI. For example, to start the llama3.2:3b model, run:

    ollama run llama3.2:3b
    

    This will start the LLM server on http://localhost:11434 by default.

    If you are using tools in your workflow, please ensure the model you are using supports them: Models supporting tools

    If you will be using local vector embeddings in your workflow, you can also pull the nomic-embed-text model using the following command:

    ollama pull nomic-embed-text
    

    Other embedding models can be found here: Embedding Models.

  3. Install the required Python packages:

     pip install -r requirements.txt
    
  4. Add MCP Server to VS Code:

    • Open .vscode/mcp.json in your workspace folder. If it doesn't exist, create it.
    • Replace PATH_TO_YOUR_CONFIG in the WORKFLOW_CONFIG_PATH environment variable so that it points to the config.json file in your workspace folder. This lets you use different configurations for different projects.
    • The default config uses the workspaceFolder environment variable from VS Code to get the path of the workspace.
    • If you would like to use User Settings instead, replace the environment variable with the absolute path of your workspace folder. You can open the user settings.json file directly with the command Preferences: Open User Settings (JSON) in the Command Palette, then add the server config inside an mcp object: mcp: { "servers": ... }.
    // .vscode/mcp.json in your workspace folder
    {
        "servers": {
            "Agentic Workflow": {
                "type": "stdio",
                "command": "python",
                "args": [
                    "-m",
                    "uv",
                    "run",
                    "mcp",
                    "run",
                    // Linux/macOS; on Windows use "%USERPROFILE%\\agentic-workflow-mcp\\server.py" instead
                    "~/agentic-workflow-mcp/server.py"
                ],
                "env": {
                    "WORKSPACE_PATH": "${workspaceFolder}",
                    "WORKFLOW_CONFIG_PATH": "${workspaceFolder}/PATH_TO_YOUR_CONFIG/config.json"
                }
            }
        }
    }
    
  5. Add Config for the MCP Server as follows:

    • Use one of the example configurations from the config_examples folder in your config.json file as needed; the config settings are detailed further below. You can use the examples as a reference to create your own configuration.

    • Copy the server folder to the user folder. (C:\Users\<username>\agentic-workflow-mcp on Windows or ~/agentic-workflow-mcp on Linux/MacOS). This will make it easier to access the server files across different projects. You can do this by running the following commands in your terminal:

      Windows:

      xcopy /E /I agentic-workflow-mcp %USERPROFILE%\agentic-workflow-mcp
      

      Mac/Linux:

      rm -rf ~/agentic-workflow-mcp
      cp -r agentic-workflow-mcp ~/agentic-workflow-mcp
      

    Anytime you make any changes to these files, copy them to the user folder again and restart the MCP server in the .vscode/mcp.json file for the changes to take effect.

  6. Start the MCP server:

    • Click the Start button above the MCP server configuration in the .vscode/mcp.json file in your workspace folder.
    • This will start the MCP server; you can see the logs in the Output panel under MCP: Agentic Workflow by clicking either the Running or Error button above the MCP server configuration.
  7. Start using the MCP server:

    • Open GitHub Copilot in VS Code and switch to Agent mode.
    • You should see the Agentic Workflow MCP server and start_workflow tool in the Copilot tools panel.
    • You can now start using the MCP tools. Prompt example:
    // This will create a `graph.png` file in your workspace folder.
    // It's recommended to use this before running the workflow
    // to see the graph of the agents and their connections.
    Use MCP Tools to display the graph.
    // This will start the workflow.
    Use MCP Tools to start a workflow to YOUR_PROMPT_HERE.
    // This will create embeddings for the files passed in the prompt.
    Use MCP tool to embed files #file:Readme.md
    // This will create a 2D or 3D visualization of the embeddings
    // for the default collection from the config unless specified in the prompt.
    Use MCP tool to visualize embeddings.
    

Config Settings

Workflow

| Key | Type | Description | Required | Defaults |
|---|---|---|---|---|
| default_model | string | The default model to use for the LLM server. | true | llama3.2:3b |
| default_temperature | number | The default temperature to use for the LLM server. | false | 0.0 |
| recursion_limit | integer | The recursion limit for the LLM server. | false | 25 |
| embedding_model | string | The embedding model to use for the LLM server. | false | nomic-embed-text |
| collection_name | string | The name of the ChromaDB vector database collection to use for the LLM server. | false | langchain_chroma_collection |
| delete_missing_embeddings | boolean | Whether to delete the embeddings for files that are no longer present in the workspace. | false | true |
| vector_directory | string | The directory to store the vector database. | false | chroma_vector_db |
| rag_prompt_template | string | The prompt template for the RAG agent. Use single curly-braces for {context} and {input} injection. | false | `Answer the following question based only on the provided context: <context> {context} </context> Question: {input}` |
| state_schema | object | The schema for the workflow state. The default properties are always available; you can add your custom properties in the config file. If the custom properties are not defined in the schema, the workflow might not function as intended. The input property changes from agent to agent, allowing the output of one agent to be used as the input for another. The user_input is the initial user prompt. Every agent's output is also added to the state automatically, accessible anywhere in the workflow in the format "YOUR_AGENT_NAME"_output, e.g. RetrieverAgent_output for an agent named RetrieverAgent. | false | `{"type": "object", "properties": {"user_input": {"type": "string"}, "input": {"type": "string"}, "final_output": {"type": "string"}}, "required": ["input", "final_output"]}` |
| agents | object[] | The agents used in the workflow. | true | Agent |
| orchestrator | object | The orchestrator agent configuration. | false | Orchestrator |
| evaluator_optimizer | object | The evaluator configuration. | false | Evaluator |
| edges | object[] | The edges between the agents in the workflow. | false | Edge |
| parallel | object[] | The parallel agents configuration. | false | Parallel |
| branches | object[] | The branches in the workflow. | false | Branch |
| routers | object[] | The routers in the workflow. | false | Router |
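
Putting the keys above together, a minimal config.json for a two-agent linear workflow might look like the sketch below. The agent names and prompts are illustrative only, not taken from the repo; see the config_examples folder for supported configurations.

```json
{
    "default_model": "llama3.2:3b",
    "default_temperature": 0.0,
    "agents": [
        {
            "name": "WriterAgent",
            "prompt": "Write a short answer to: {{user_input}}",
            "prompt_state_vars": ["user_input"]
        },
        {
            "name": "ReviewerAgent",
            "prompt": "Review and improve this draft: {{WriterAgent_output}}",
            "prompt_state_vars": ["WriterAgent_output"]
        }
    ],
    "edges": [
        { "source": "__start__", "target": "WriterAgent" },
        { "source": "WriterAgent", "target": "ReviewerAgent" },
        { "source": "ReviewerAgent", "target": "__end__" }
    ]
}
```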

Agent

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| name | string | The name of the agent. | true | Orchestrator Agent |
| model_name | string | The model to use for the agent, if different from the default model. | false | llama3.2:3b |
| temperature | number | The temperature to use for the agent, if different from the default temperature. | false | 0.0 |
| prompt | string | The prompt to use for the agent. This takes precedence over prompt_file. State properties can be dynamically injected using double curly-braces: {{AgentName_output}}. | true | You are an agent that orchestrates the workflow. |
| prompt_file | string | Either the absolute path to the prompt file, or a path in the format agentic-workflow-mcp/YOUR_PROMPT_FILE_NAME if the prompt file is added to the agentic-workflow-mcp folder in this repo. | false | prompt.txt |
| prompt_state_vars | string[] | The state variables to use in the agent prompt. These are replaced with values from the workflow state and referenced in the prompt using {{var_name}}. | false | `["user_input", "input"]` |
| human_prompt | string | The prompt to use for the human input. If not provided, the default human prompt is used. | false | Follow the system prompt instructions and provide response |
| human_prompt_file | string | The prompt file to use for the human input. If not provided, the default human prompt file is used. | false | human_prompt.txt |
| human_prompt_state_vars | string[] | The state variables to use in the human prompt. These are replaced with values from the workflow state and referenced in the prompt using {{var_name}}. | false | `["agent_output"]` |
| output_decision_keys | string[] | The keys in the output that will be used in the workflow state. | false | `["decision_key"]` |
| output_format | object | The output format for the agent. | false | `{"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]}` |
| tools | string[] | The tools to use for the agent. | false | `["read_file"]` |
| tool_functions | object[] | The functions to use for the tools. | false | `{"read_file": Tool}` |
| tool_output_extract | object[] | The output extraction rules for the agent prompt, used to extract the output from the tool response and inject it into the prompt using {{var_name}}. | false | [Tool Output] |
| human_tool_output_extract | object[] | The output extraction rules for the human prompt, used to extract the output from the tool response and inject it into the human prompt using {{var_name}}. | false | [Tool Output] |
| embeddings_collection_name | string | The name of the ChromaDB vector database collection to use for the agent. | false | langchain_chroma_collection |
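
As a concrete illustration of the keys above, an agent that reads a file through a tool and returns structured output might be configured as in this sketch (the agent name, prompt, and variable names are hypothetical):

```json
{
    "name": "SummaryAgent",
    "model_name": "llama3.2:3b",
    "temperature": 0.0,
    "prompt": "Summarize the following file contents: {{file_contents}}",
    "tools": ["read_file"],
    "tool_output_extract": [
        { "var_name": "file_contents", "agent_name": "SummaryAgent", "tool_name": "read_file" }
    ],
    "output_format": {
        "type": "object",
        "properties": { "response": { "type": "string" } },
        "required": ["response"]
    }
}
```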

Tool

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| description | string | The description of the tool. | true | Reads the contents of a file and returns it as a string. |
| function_string | string | The function string to use for the tool. | true | `lambda filename, workspace_path=None: open(filename if workspace_path is None else f'{workspace_path}/{filename}', 'r', encoding='utf-8').read()` |
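
Since function_string holds a Python lambda as text, the server presumably materializes it with eval before calling it. A minimal sketch of that round trip, using a temporary workspace and an illustrative file name (not from the repo):

```python
import os
import tempfile

# A tool definition's function_string, as it would appear in config.json.
function_string = (
    "lambda filename, workspace_path=None: "
    "open(filename if workspace_path is None "
    "else f'{workspace_path}/{filename}', 'r', encoding='utf-8').read()"
)

# Turn the string into a callable, as the server presumably does.
read_file = eval(function_string)

# Exercise it against a throwaway workspace folder.
with tempfile.TemporaryDirectory() as workspace:
    with open(os.path.join(workspace, "notes.txt"), "w", encoding="utf-8") as f:
        f.write("hello tools")
    content = read_file("notes.txt", workspace_path=workspace)

print(content)  # hello tools
```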

Tool Output

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| var_name | string | The variable name to use in the prompt for the tool output. This will be replaced with the value from the tool response. | true | agent_output |
| agent_name | string | The name of the agent that will use the tool. | true | OrchestratorAgent |
| tool_name | string | The name of the tool that will be used in the agent. | true | read_file |
| response_index | integer | The index of the response in the tool response if it's a list of values; used to extract the output from the tool response. If not provided, the full response is used. Either this or response_key should exist; if both are provided, this one is ignored. | false | |
| response_key | string | The key in the tool response used to extract the output if it's a JSON object. If not provided, the full response is used. Either this or response_index should exist. | false | |

Orchestrator

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| name | string | The name of the orchestrator agent. | true | OrchestratorAgent |
| model_name | string | The model to use for the orchestrator agent. | false | llama3.2:3b |
| temperature | number | The temperature to use for the orchestrator agent. | false | 0.0 |
| aggregator | string | The name of the aggregator agent. | true | AggregatorAgent |
| next_agent | string | The next agent to go to after the orchestration. If not specified, defaults to `__end__`, representing the end of the workflow. | false | NextAgent |
| prompt | string | The prompt to use for the orchestrator agent. | true | You are an agent that orchestrates the workflow. |
| prompt_file | string | The prompt file to use for the orchestrator agent. | false | prompt.txt |
| output_decision_keys | string[] | The keys in the output that will be used in the workflow state. | false | `["decision_key"]` |
| output_format | object | The output format for the orchestrator agent. | false | `{"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]}` |
| tools | string[] | The tools to use for the orchestrator agent. | false | `["read_file"]` |
| tool_functions | object[] | The functions to use for the tools. | false | `{"read_file": TOOL}` |
| workers | string[] | The workers to use for the orchestrator agent. | true | `["Agent1", "Agent2"]` |
| supervise_workers | boolean | Whether to supervise the workers. | false | false |
| can_end_workflow | boolean | Whether the orchestrator can end the workflow. | false | false |
| completion_condition | string | The completion condition for the orchestrator agent. | true | `lambda state: state.get('final_output') is not None` |
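
For example, an orchestrator entry combining these keys might look like the sketch below. The worker and aggregator names are invented for illustration; the completion_condition is stored as a lambda string, as in the example column above.

```json
{
    "orchestrator": {
        "name": "OrchestratorAgent",
        "prompt": "Decide which worker should handle the next step.",
        "workers": ["ResearchAgent", "WriterAgent"],
        "aggregator": "AggregatorAgent",
        "completion_condition": "lambda state: state.get('final_output') is not None"
    }
}
```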

Evaluator

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| executor | string | The name of the executor agent. | true | ExecutorAgent |
| evaluator | string | The name of the evaluator agent. | true | EvaluatorAgent |
| optimizer | string | The name of the optimizer agent. | true | OptimizerAgent |
| next_agent | string | The next agent to go to after the evaluation. If not specified, defaults to `__end__`, representing the end of the workflow. | false | AggregatorAgent |
| quality_condition | string | The quality condition for the evaluator agent. | true | `lambda state: state.get('quality_score', 0) >= state.get('quality_threshold', 0.8)` |
| max_iterations | integer | The maximum number of iterations for the evaluator agent. | false | 5 |
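
Like completion_condition, quality_condition is a lambda stored as a string and presumably evaluated against the workflow state. A sketch of how such a condition string behaves once materialized (the state dicts here are made up):

```python
# A quality_condition as it would appear in config.json (as a string).
quality_condition = (
    "lambda state: state.get('quality_score', 0) "
    ">= state.get('quality_threshold', 0.8)"
)

# Materialize the condition, as the server presumably does.
is_good_enough = eval(quality_condition)

# Below the default 0.8 threshold: keep iterating (up to max_iterations).
print(is_good_enough({"quality_score": 0.5}))  # False
# At or above the threshold: hand off to next_agent.
print(is_good_enough({"quality_score": 0.9}))  # True
```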

Edge

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| source | string | The source agent. The value can also be `__start__`, representing the start of the workflow. | true | OrchestratorAgent |
| target | string | The target agent. The value can also be `__end__`, representing the end of the workflow. | true | AggregatorAgent |

Parallel

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| source | string | The source agent that will call the parallel agents. | true | SplitAgent |
| nodes | string[] | The parallel agents. | true | `["Agent1", "Agent2"]` |
| join | string | The agent that will join the responses from the parallel agents. | true | JoinAgent |

Branch

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| source | string | The source agent that will call the branch agents. | true | InputClassifierAgent |
| condition | string | The condition for the branch. | true | `lambda state: state.get('class')` |
| targets | object | The target agents for the branch. | true | `{"class1": "Agent1", "class2": "Agent2"}` |
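
A branch wires the condition's return value to a key in targets. A sketch (the class labels and agent names are invented for illustration):

```json
{
    "branches": [
        {
            "source": "InputClassifierAgent",
            "condition": "lambda state: state.get('class')",
            "targets": {
                "question": "AnswerAgent",
                "task": "PlannerAgent"
            }
        }
    ]
}
```

At runtime, if the classifier sets the state's class property to "question", control would flow to AnswerAgent.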

Router

| Key | Type | Description | Required | Example |
|---|---|---|---|---|
| source | string | The source agent that will call the router agents. | true | RouterAgent |
| router_function | string | The function to use for the router. | true | `lambda state: state.get('next_step')` |

Environment Variables

These are the environment variables that are used in the MCP server. You can set them in the .vscode/mcp.json file as shown above.

| Key | Type | Description | Required | Defaults |
|---|---|---|---|---|
| WORKSPACE_PATH | string | The workspace path to the files to read. | true | `${workspaceFolder}` |
| WORKFLOW_CONFIG_PATH | string | The path to the config file. | true | `${workspaceFolder}/PATH_TO_YOUR_CONFIG/config.json` |

MCP Tools

display_graph

Generates a graph image from the workflow configuration and saves it to a graph.png file in the workspace folder. This is useful for visualizing the workflow and understanding the connections between agents. It can also generate a mermaid diagram for the workflow.

start_workflow

This tool is used to start the Agentic Workflow. It takes a prompt as input and returns the result of the workflow.

embed_files

This tool creates embeddings for one or more files and stores them in the local ChromaDB vector database. The stored embeddings can then be accessed through the retrieve_embeddings tool in the agent configuration.

visualize_embeddings

Generates a 2D or 3D visualization of the embeddings in the local ChromaDB vector database. This is useful for understanding the distribution of the embeddings and identifying clusters or patterns in the data.


MCP Resources

These can be accessed using the MCP: Browse Resources command in the Command Palette (Ctrl+Shift+P, or Cmd+Shift+P on Mac).

get_agents

Returns the list of agents defined in the workflow configuration.

get_state_schema

Returns the state schema defined in the workflow configuration containing both default and user-defined properties.


Custom Embeddings for RAG

Custom Embeddings for your local files can be created using the embed_files tool.

This tool creates embeddings for one or more files and stores them in the local ChromaDB vector database.

The local chroma_vector_db vector database is created in the workspace folder. You can add it to your .gitignore file to avoid committing it to your repository.

If the absolute path to the file is not provided, the tool will look for the file in the workspace folder. The workspace_path variable is set to ${workspaceFolder} by default, which is the path to your workspace folder.

The embeddings are automatically created, updated and deleted when invoking the embed_files tool. delete_missing_embeddings is set to true by default. This means that if a file is deleted from the workspace, its embedding will be deleted from the vector database next time the embed_files tool is invoked.

The local embeddings can be made available to any agent in the chain by using the retrieve_embeddings tool in the agent configuration. This tool will retrieve the embeddings from the local vector database and use them to answer questions.

You can visualize the embeddings using the visualize_embeddings tool. This will create a 2D or 3D visualization of the embeddings in the local ChromaDB vector database. This is useful for understanding the distribution of the embeddings and identifying clusters or patterns in the data.


Local LLM Tools

These are local tools available to the local Ollama LLM server. You can use them in your workflow to perform various tasks; they are invoked as part of the workflow, so you don't have to call them separately. The tools are defined in the config.json file and are used in the workflow by specifying the tool names in the agent config.

You can add your own tools directly in the config.json file as described above.
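
For instance, combining the tools and tool_functions keys from the Agent config, a custom tool might be declared like this (the word_count tool and all names here are made up for illustration):

```json
{
    "name": "StatsAgent",
    "prompt": "Report the word count: {{count}}",
    "tools": ["word_count"],
    "tool_functions": {
        "word_count": {
            "description": "Counts the words in a string.",
            "function_string": "lambda text: len(text.split())"
        }
    },
    "tool_output_extract": [
        { "var_name": "count", "agent_name": "StatsAgent", "tool_name": "word_count" }
    ]
}
```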

read_file

Reads the content of a file and returns it as a string.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_path | string | The path to the file to read. | true | `""` |
| workspace_path | string | The workspace path to the file to read. | false | `${workspaceFolder}` |

read_multiple_files

Reads the content of multiple files and returns them as a single string.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_paths | string[] | The paths to the files to read. | true | `[]` |
| workspace_path | string | The workspace path to the files to read. | false | `${workspaceFolder}` |

read_multiple_files_with_id

Reads the content of multiple files and returns them as an object with file IDs as keys and file contents as values.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_paths | string[] | The paths to the files to read. | true | `[]` |
| workspace_path | string | The workspace path to the files to read. | false | `${workspaceFolder}` |

list_files

Lists all files in a given directory.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| directory | string | The path to the directory to list files from. | true | `""` |
| workspace_path | string | The workspace path to the directory to list files from. | false | `${workspaceFolder}` |

write_file

Writes the given content to a file. Creates directories if they don't exist.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_path | string | The path to the file to write to. | true | `""` |
| content | string | The content to write to the file. | true | `""` |
| workspace_path | string | The workspace path to the file to write to. | false | `${workspaceFolder}` |

write_file_lines

Writes content at the specified line numbers in a file. Creates directories if they don't exist.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_path | string | The path to the file to write to. | true | `""` |
| lines | object | The object containing line numbers as keys and content as values. | true | `{}` |
| workspace_path | string | The workspace path to the file to write to. | false | `${workspaceFolder}` |

append_file

Appends the given content to a file. Creates directories if they don't exist.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_path | string | The path to the file to append to. | true | `""` |
| content | string | The content to append to the file. | true | `""` |
| workspace_path | string | The workspace path to the file to append to. | false | `${workspaceFolder}` |

append_file_lines

Appends content at the specified line numbers to a file. Creates directories if they don't exist.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_path | string | The path to the file to append to. | true | `""` |
| lines | object | The object containing line numbers as keys and content as values. | true | `{}` |
| workspace_path | string | The workspace path to the file to append to. | false | `${workspaceFolder}` |

web_search

Performs a web search using DuckDuckGo and returns the results.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| query | string | The search query. | true | `""` |
| max_results | integer | The maximum number of results to return. | false | 5 |

api_fetch

Fetch data from an API endpoint.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| url | string | The API endpoint URL. | true | `""` |
| method | string | The HTTP method to use (GET, POST, etc.). | false | GET |
| headers | object | The headers to include in the request. | false | `{}` |
| params | object | The query parameters to include in the request. | false | `{}` |
| data | object | The data to include in the request body. | false | `{}` |
| json | object | The JSON data to include in the request body. | false | `{}` |
| timeout | integer | The timeout for the request in seconds. | false | 10 |

run_shell_command

Runs a shell command and returns the output.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| command | string | The shell command to run. | true | `""` |

validate_xml

Validates an XML file against a given XSD schema.

This uses the xmllint CLI command to validate the XML file. Make sure the command is installed for your operating system.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| xml_file_path | string | The path to the XML file to validate. | true | `""` |
| xsd_file_path | string | The path to the XSD schema file. | true | `""` |
| workspace_path | string | The workspace path to the files to validate. | false | `${workspaceFolder}` |

retrieve_embeddings

Fetches the embeddings from the local vector database and uses them to answer questions.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| input | string | The input to the agent. | true | `""` |

modify_embeddings

Updates the embeddings for the specified files.

| Item | Type | Description | Required | Defaults |
|---|---|---|---|---|
| file_paths | string[] | The paths to the files to update. | true | `[]` |
| use_git_ignore | boolean | Whether to use all of the .gitignore files in the workspace to determine which files to update. | false | true |
| exclude_file_paths | string[] | The paths to the files to exclude from the update. | false | `[]` |

Troubleshooting

  • If the MCP server is not making any requests to the LLM server, do the following:

    1. Restart VS Code as a sanity check.
    2. Ensure the Ollama LLM server is running and accessible.
    3. Copy the server files again using the commands above, as another sanity check.
    4. Restart the MCP server in the .vscode/mcp.json file.
    5. Create a new chat in GitHub Copilot and switch to Agent mode.
  • You can check the logs using:

    Windows:

    type %homedrive%%homepath%\agentic-workflow-mcp\logs.txt
    

    Mac/Linux:

    tail -f ~/agentic-workflow-mcp/logs.txt
    
  • If you want to clean the local Chroma DB, you can do so by deleting the chroma_vector_db folder in your workspace folder. This will delete all the embeddings so you can start fresh. Make sure to restart the MCP server after deleting the folder in order to initialize the empty database.

Using Workflows without MCP Server

If you want to use the workflows without the MCP server, you can do so by directly importing the Python classes used in the file. This way, the workflows can be deployed as a web service to a cloud provider using the same configuration files.