Local Agentic Workflow MCP Server
This MCP server allows you to run Agentic Workflows using a local LLM server via the Ollama CLI. It has been tested with VS Code.
For supported workflow example configurations, see the `config_examples` folder.
Table of Contents
- Articles
- Prerequisites
- Installation
- Config Settings
- Environment Variables
- MCP Tools
- MCP Resources
- Custom Embeddings for RAG
- Local LLM Tools
- Troubleshooting
- Using Workflows without MCP Server
Articles
- 🧠 How to Set Up a Local Agentic Workflow with MCP and Ollama (Without Losing Your Mind)
- 🧠 Create Vector Embeddings for Your Local Agentic Workflow Using an MCP Server (The easy way)
Prerequisites
- Python 3.8 or higher
- pip (for installing Python packages)
- Ollama CLI (for local LLMs)
Installation
- Clone the repository:

  ```sh
  git clone https://github.com/Antarpreet/agentic-workflow-mcp.git
  ```

- Start an LLM server using the Ollama CLI. For example, to start the `llama3.2:3b` model, run:

  ```sh
  ollama run llama3.2:3b
  ```

  This will start the LLM server on `http://localhost:11434` by default. If you are using tools in your workflow, please ensure the model you are using supports them: Models supporting tools.

  If you will be using local vector embeddings in your workflow, you can also pull the `nomic-embed-text` model using the following command:

  ```sh
  ollama pull nomic-embed-text
  ```

  Other embedding models can be found here: Embedding Models.

- Install the required Python packages:

  ```sh
  pip install -r requirements.txt
  ```
- Add the MCP server to VS Code:
  - Open `.vscode/mcp.json` in your workspace folder. If it doesn't exist, create it.
  - Update `PATH_TO_YOUR_CONFIG` in the `WORKFLOW_CONFIG_PATH` environment variable to point to the config file in your workspace folder.
  - The default config uses the `workspaceFolder` environment variable from VS Code to get the path of the workspace.
  - If you would like to use User Settings, make sure to replace the environment variable with the absolute path of your workspace folder.
  - You can open the user `settings.json` file directly using the `Preferences: Open User Settings (JSON)` command in the Command Palette and add the following config under an `mcp` object: `"mcp": { "servers": ... }`.
  - The `PATH_TO_YOUR_CONFIG` portion of the `WORKFLOW_CONFIG_PATH` environment variable should point to the `config.json` file in your workspace folder. This allows you to use different configurations for different projects.

  ```jsonc
  // .vscode/mcp.json in your workspace folder
  {
    "servers": {
      "Agentic Workflow": {
        "type": "stdio",
        "command": "python",
        "args": [
          "-m", "uv", "run", "mcp", "run",
          // Linux/MacOS
          "~/agentic-workflow-mcp/server.py"
          // Windows
          // "%USERPROFILE%\\agentic-workflow-mcp\\server.py"
        ],
        "env": {
          "WORKSPACE_PATH": "${workspaceFolder}",
          "WORKFLOW_CONFIG_PATH": "${workspaceFolder}/PATH_TO_YOUR_CONFIG/config.json"
        }
      }
    }
  }
  ```
- Add the config for the MCP server as follows:
  - Use one of the default configurations from `config_examples` in your `config.json` file as needed. The config settings are detailed further below. There are example configurations in the `config_examples` folder; you can use them as a reference to create your own configuration.
  - Copy the server folder to the user folder (`C:\Users\<username>\agentic-workflow-mcp` on Windows or `~/agentic-workflow-mcp` on Linux/MacOS). This makes it easier to access the server files across different projects. You can do this by running the following commands in your terminal:

    Windows:

    ```sh
    xcopy /E /I agentic-workflow-mcp %USERPROFILE%\agentic-workflow-mcp
    ```

    Mac/Linux:

    ```sh
    rm -rf ~/agentic-workflow-mcp
    cp -r agentic-workflow-mcp ~/agentic-workflow-mcp
    ```

    Anytime you make changes to these files, copy them to the user folder again and restart the MCP server in the `.vscode/mcp.json` file for the changes to take effect.
- Start the MCP server:
  - Click the `Start` button above the MCP server configuration in the `.vscode/mcp.json` file in your workspace folder.
  - This will start the MCP server; you can see the logs in the Output panel under `MCP: Agentic Workflow` by clicking either the `Running` or `Error` button above the MCP server configuration.
- Start using the MCP server:
  - Open GitHub Copilot in VS Code and switch to `Agent` mode.
  - You should see the `Agentic Workflow` MCP server and the `start_workflow` tool in the Copilot tools panel.
  - You can now start using the MCP tools. Prompt examples:

    ```text
    // This will create a `graph.png` file in your workspace folder.
    // It's recommended to use this before running the workflow
    // to see the graph of the agents and their connections.
    Use MCP Tools to display the graph.

    // This will start the workflow.
    Use MCP Tools to start a workflow to YOUR_PROMPT_HERE.

    // This will create embeddings for the files passed in the prompt.
    Use MCP tool to embed files #file:Readme.md

    // This will create a 2D or 3D visualization of the embeddings
    // for the default collection from the config unless specified in the prompt.
    Use MCP tool to visualize embeddings.
    ```
Config Settings
Workflow
Key | Type | Description | Required | Defaults |
---|---|---|---|---|
default_model | string | The default model to use for the LLM server. | true | llama3.2:3b |
default_temperature | number | The default temperature to use for the LLM server. | false | 0.0 |
recursion_limit | integer | The recursion limit for the LLM server. | false | 25 |
embedding_model | string | The embedding model to use for the LLM server. | false | nomic-embed-text |
collection_name | string | The name of the ChromaDB vector database collection to use for the LLM server. | false | langchain_chroma_collection |
delete_missing_embeddings | boolean | Whether to delete the embeddings for files that are no longer present in the workspace. | false | true |
vector_directory | string | The directory to store the vector database. | false | chroma_vector_db |
rag_prompt_template | string | The prompt template for the RAG agent. Use single curly-braces for {context} and {input} injection. | false | Answer the following question based only on the provided context: <context> {context} </context> Question: {input} |
state_schema | object | The schema for the workflow state. The default properties are always available; you can add custom properties in the config file. If custom properties are not defined in the schema, the workflow might not function as intended. The input property changes from agent to agent, allowing the output of one agent to be used as the input for another. user_input is the initial user prompt. Each agent's output is also added to the state automatically and can be accessed anywhere in the workflow in the format YOUR_AGENT_NAME_output, e.g., RetrieverAgent_output for an agent named RetrieverAgent. | false | {"type": "object", "properties": {"user_input": {"type": "string"},"input": {"type": "string"},"final_output": {"type": "string"}}, "required": ["input","final_output"]} |
agents | object[] | The agents used in the workflow. | true | Agent |
orchestrator | object | The orchestrator agent configuration. | false | Orchestrator |
evaluator_optimizer | object | The evaluator configuration. | false | Evaluator |
edges | object[] | The edges between the agents in the workflow. | false | Edge |
parallel | object[] | The parallel agents configuration. | false | Parallel |
branches | object[] | The branches in the workflow. | false | Branch |
routers | object[] | The routers in the workflow. | false | Router |
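To see how these keys fit together, here is a minimal hypothetical sketch of a `config.json`; the agent name and prompt are invented for illustration, and the `config_examples` folder contains complete, supported configurations:

```jsonc
// Hypothetical minimal config.json; agent name and prompt are placeholders.
{
  "default_model": "llama3.2:3b",
  "default_temperature": 0.0,
  "agents": [
    {
      "name": "SummarizerAgent",
      "prompt": "Summarize the user's input in three bullet points."
    }
  ],
  "edges": [
    { "source": "__start__", "target": "SummarizerAgent" },
    { "source": "SummarizerAgent", "target": "__end__" }
  ]
}
```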
Agent
Key | Type | Description | Required | Example |
---|---|---|---|---|
name | string | The name of the agent. | true | Orchestrator Agent |
model_name | string | The model to use for the agent. If different from the default model. | false | llama3.2:3b |
temperature | number | The temperature to use for the agent. If different from the default temperature. | false | 0.0 |
prompt | string | The prompt to use for the agent. This takes precedence over prompt_file . The state properties can be dynamically injected using double curly-braces {{AgentName_output}} . | true | You are an agent that orchestrates the workflow. |
prompt_file | string | Either the absolute path to the prompt file or path to the prompt file in the format agentic-workflow-mcp/YOUR_PROMPT_FILE_NAME if the prompt file is added to the agentic-workflow-mcp in this repo. | false | prompt.txt |
prompt_state_vars | string[] | The state variables to use in the agent prompt. These will be replaced with the values from the workflow state. These can be used in the prompt using {{var_name}} . | false | ["user_input", "input"] |
human_prompt | string | The prompt to use for the human input. If not provided, the default human prompt will be used. | false | Follow the system prompt instructions and provide response |
human_prompt_file | string | The prompt file to use for the human input. If not provided, the default human prompt file will be used. | false | human_prompt.txt |
human_prompt_state_vars | string[] | The state variables to use in the human prompt. These will be replaced with the values from the workflow state. These can be used in the prompt using {{var_name}} . | false | ["agent_output"] |
output_decision_keys | string[] | The keys in the output that will be used in the workflow state. | false | ["decision_key"] |
output_format | object | The output format for the agent. | false | {"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]} |
tools | string[] | The tools to use for the agent. | false | ["read_file"] |
tool_functions | object[] | The functions to use for the tools. | false | {"read_file": Tool} |
tool_output_extract | object[] | The output extraction rules for the agent prompt. This is used to extract the output from the tool response and use it in the prompt using {{var_name}} . | false | [ Tool Output] |
human_tool_output_extract | object[] | The output extraction rules for the human prompt. This is used to extract the output from the tool response and use it in the human prompt using {{var_name}} . | false | [ Tool Output] |
embeddings_collection_name | string | The name of the ChromaDB vector database collection to use for the agent. | false | langchain_chroma_collection |
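As an illustration, a single entry in the `agents` array might look like the following sketch; the agent name, prompt, and output format are placeholders:

```jsonc
// Hypothetical agent entry; name, prompt, and output_format are placeholders.
{
  "name": "RetrieverAgent",
  "model_name": "llama3.2:3b",
  "temperature": 0.0,
  "prompt": "Answer the question using the retrieved context: {{user_input}}",
  "prompt_state_vars": ["user_input"],
  "tools": ["retrieve_embeddings"],
  "output_format": {
    "type": "object",
    "properties": { "response": { "type": "string" } },
    "required": ["response"]
  }
}
```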
Tool
Key | Type | Description | Required | Example |
---|---|---|---|---|
description | string | The description of the tool. | true | Reads the contents of a file and returns it as a string. |
function_string | string | The function string to use for the tool. | true | lambda filename, workspace_path=None: open(filename if workspace_path is None else f'{workspace_path}/{filename}', 'r', encoding='utf-8').read() |
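Combining both keys, a custom tool passed via an agent's `tool_functions` could be sketched as follows, reusing the `read_file` example from the table above:

```jsonc
// Sketch of a tool_functions entry; the lambda mirrors the table example.
"tool_functions": {
  "read_file": {
    "description": "Reads the contents of a file and returns it as a string.",
    "function_string": "lambda filename, workspace_path=None: open(filename if workspace_path is None else f'{workspace_path}/{filename}', 'r', encoding='utf-8').read()"
  }
}
```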
Tool Output
Key | Type | Description | Required | Example |
---|---|---|---|---|
var_name | string | The variable name to use in the prompt for the tool output. This will be replaced with the value from the tool response. | true | agent_output |
agent_name | string | The name of the agent that will use the tool. | true | OrchestratorAgent |
tool_name | string | The name of the tool that will be used in the agent. | true | read_file |
response_index | integer | The index of the response in the tool response if it's a list of values. This is used to extract the output from the tool response. If not provided, the full response will be used. Either this or response_key should exist. If both provided, this will be ignored. | false | `` |
response_key | string | The key in the tool response that will be used to extract the output if it's a JSON object. If not provided, the full response will be used. Either this or response_index should exist. | false | `` |
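For example, a `tool_output_extract` rule that pulls a single key out of a tool's JSON response might be sketched as below; the `response_key` value is hypothetical:

```jsonc
// Hypothetical extraction rule; makes {{file_contents}} usable in the prompt.
"tool_output_extract": [
  {
    "var_name": "file_contents",
    "agent_name": "OrchestratorAgent",
    "tool_name": "read_file",
    "response_key": "content"
  }
]
```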
Orchestrator
Key | Type | Description | Required | Example |
---|---|---|---|---|
name | string | The name of the orchestrator agent. | true | OrchestratorAgent |
model_name | string | The model to use for the orchestrator agent. | false | llama3.2:3b |
temperature | number | The temperature to use for the orchestrator agent. | false | 0.0 |
aggregator | string | The name of the aggregator agent. | true | AggregatorAgent |
next_agent | string | The next agent to take after the orchestration. If not specified, defaults to __end__ representing end of the workflow. | false | NextAgent |
prompt | string | The prompt to use for the orchestrator agent. | true | You are an agent that orchestrates the workflow. |
prompt_file | string | The prompt file to use for the orchestrator agent. | false | prompt.txt |
output_decision_keys | string[] | The keys in the output that will be used in the workflow state. | false | ["decision_key"] |
output_format | object | The output format for the orchestrator agent. | false | {"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]} |
tools | string[] | The tools to use for the orchestrator agent. | false | ["read_file"] |
tool_functions | object[] | The functions to use for the tools. | false | {"read_file": TOOL} |
workers | string[] | The workers to use for the orchestrator agent. | true | ["Agent1", "Agent2"] |
supervise_workers | boolean | Whether to supervise the workers. | false | false |
can_end_workflow | boolean | Whether the orchestrator can end the workflow. | false | false |
completion_condition | string | The completion condition for the orchestrator agent. | true | lambda state: state.get('final_output') is not None |
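Putting the required keys together, an `orchestrator` block might be sketched as follows; worker and aggregator names are placeholders:

```jsonc
// Hypothetical orchestrator config; agent names are placeholders.
"orchestrator": {
  "name": "OrchestratorAgent",
  "prompt": "You are an agent that orchestrates the workflow.",
  "workers": ["Agent1", "Agent2"],
  "aggregator": "AggregatorAgent",
  "completion_condition": "lambda state: state.get('final_output') is not None"
}
```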
Evaluator
Key | Type | Description | Required | Example |
---|---|---|---|---|
executor | string | The name of the executor agent. | true | ExecutorAgent |
evaluator | string | The name of the evaluator agent. | true | EvaluatorAgent |
optimizer | string | The name of the optimizer agent. | true | OptimizerAgent |
next_agent | string | The next agent to take after the evaluation. If not specified, defaults to __end__ representing end of the workflow. | false | AggregatorAgent |
quality_condition | string | The quality condition for the evaluator agent. | true | lambda state: state.get('quality_score', 0) >= state.get('quality_threshold', 0.8) |
max_iterations | integer | The maximum number of iterations for the evaluator agent. | false | 5 |
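A corresponding `evaluator_optimizer` block, assembled from the examples in the table above, might look like this sketch:

```jsonc
// Hypothetical evaluator config; agent names are placeholders.
"evaluator_optimizer": {
  "executor": "ExecutorAgent",
  "evaluator": "EvaluatorAgent",
  "optimizer": "OptimizerAgent",
  "quality_condition": "lambda state: state.get('quality_score', 0) >= state.get('quality_threshold', 0.8)",
  "max_iterations": 5
}
```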
Edge
Key | Type | Description | Required | Example |
---|---|---|---|---|
source | string | The source agent. The value can also be __start__ representing start of the workflow. | true | OrchestratorAgent |
target | string | The target agent. The value can also be __end__ representing end of the workflow. | true | AggregatorAgent |
Parallel
Key | Type | Description | Required | Example |
---|---|---|---|---|
source | string | The source agent that will call the parallel agents. | true | SplitAgent |
nodes | string[] | The parallel agents. | true | ["Agent1", "Agent2"] |
join | string | The agent that will join the responses from parallel agents. | true | JoinAgent |
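For instance, a fan-out/fan-in step built from the examples above could be sketched as:

```jsonc
// Hypothetical parallel config; agent names are placeholders.
"parallel": [
  {
    "source": "SplitAgent",
    "nodes": ["Agent1", "Agent2"],
    "join": "JoinAgent"
  }
]
```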
Branch
Key | Type | Description | Required | Example |
---|---|---|---|---|
source | string | The source agent that will call the branch agents. | true | InputClassifierAgent |
condition | string | The condition for the branch. | true | lambda state: state.get('class') |
targets | object | The target agents for the branch. | true | {"class1": "Agent1", "class2": "Agent2"} |
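A branch entry combining these keys might be sketched as:

```jsonc
// Hypothetical branch config; routes on the 'class' value in the state.
"branches": [
  {
    "source": "InputClassifierAgent",
    "condition": "lambda state: state.get('class')",
    "targets": { "class1": "Agent1", "class2": "Agent2" }
  }
]
```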
Router
Key | Type | Description | Required | Example |
---|---|---|---|---|
source | string | The source agent that will call the router agents. | true | RouterAgent |
router_function | string | The function to use for the router. | true | lambda state: state.get('next_step') |
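Similarly, a router entry might be sketched as:

```jsonc
// Hypothetical router config; the lambda returns the name of the next agent.
"routers": [
  {
    "source": "RouterAgent",
    "router_function": "lambda state: state.get('next_step')"
  }
]
```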
Environment Variables
These are the environment variables used by the MCP server. You can set them in the `.vscode/mcp.json` file as shown above.
Key | Type | Description | Required | Defaults |
---|---|---|---|---|
WORKSPACE_PATH | string | The workspace path to the files to read. | true | ${workspaceFolder} |
WORKFLOW_CONFIG_PATH | string | The path to the config file. | true | ${workspaceFolder}/PATH_TO_YOUR_CONFIG/config.json |
MCP Tools
display_graph
Generates a graph image from the workflow configuration and saves it to a `graph.png` file in the workspace folder. This is useful for visualizing the workflow and understanding the connections between agents. It can also generate a mermaid diagram for the workflow.
start_workflow
This tool is used to start the Agentic Workflow. It takes a prompt as input and returns the result of the workflow.
embed_files
This tool creates embeddings for one or more files and stores them in the local ChromaDB vector database. These embeddings can then be used via the `retrieve_embeddings` tool in the agent configuration.
visualize_embeddings
Generates a 2D or 3D visualization of the embeddings in the local ChromaDB vector database. This is useful for understanding the distribution of the embeddings and identifying clusters or patterns in the data.
MCP Resources
These can be accessed using the `MCP: Browse Resources` command in the Command Palette (Ctrl+Shift+P, or Cmd+Shift+P on Mac).
get_agents
Returns the list of agents defined in the workflow configuration.
get_state_schema
Returns the state schema defined in the workflow configuration containing both default and user-defined properties.
Custom Embeddings for RAG
Custom embeddings for your local files can be created using the `embed_files` tool. This tool creates embeddings for one or more files and stores them in the local ChromaDB vector database.
The local `chroma_vector_db` vector database is created in the workspace folder. You can add it to your `.gitignore` file to avoid committing it to your repository.
If the absolute path to a file is not provided, the tool will look for the file in the workspace folder. The `workspace_path` variable is set to `${workspaceFolder}` by default, which is the path to your workspace folder.
Embeddings are automatically created, updated, and deleted when invoking the `embed_files` tool. `delete_missing_embeddings` is set to `true` by default, which means that if a file is deleted from the workspace, its embedding will be deleted from the vector database the next time the `embed_files` tool is invoked.
The local embeddings can be made available to any agent in the chain by using the `retrieve_embeddings` tool in the agent configuration. This tool retrieves the embeddings from the local vector database and uses them to answer questions.
You can visualize the embeddings using the `visualize_embeddings` tool, which creates a 2D or 3D visualization of the embeddings in the local ChromaDB vector database. This is useful for understanding the distribution of the embeddings and identifying clusters or patterns in the data.
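For instance, a RAG-style agent wired to the local embeddings might be configured like this sketch; the agent name and prompt are placeholders, while `retrieve_embeddings` and `embeddings_collection_name` are described above:

```jsonc
// Hypothetical RAG agent entry; uses the retrieve_embeddings tool
// against the default ChromaDB collection.
{
  "name": "RagAgent",
  "prompt": "Answer the user's question using the retrieved context.",
  "tools": ["retrieve_embeddings"],
  "embeddings_collection_name": "langchain_chroma_collection"
}
```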
Local LLM Tools
These are local tools available to the local Ollama LLM server. You can use them in your workflow to perform various tasks; they are invoked as part of the workflow, so you don't have to call them separately. The tools are defined in the `config.json` file and can be used in the workflow by specifying the tool names in the agent config.
You can add your own tools directly in the `config.json` file as described above.
read_file
Reads the content of a file and returns it as a string.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_path | string | The path to the file to read. | true | "" |
workspace_path | string | The workspace path to the file to read. | false | ${workspaceFolder} |
read_multiple_files
Reads the content of multiple files and returns them as a single string.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_paths | string[] | The paths to the files to read. | true | [] |
workspace_path | string | The workspace path to the files to read. | false | ${workspaceFolder} |
read_multiple_files_with_id
Reads the content of multiple files and returns them as an object with file IDs as keys and file contents as values.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_paths | string[] | The paths to the files to read. | true | [] |
workspace_path | string | The workspace path to the files to read. | false | ${workspaceFolder} |
list_files
Lists all files in a given directory.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
directory | string | The path to the directory to list files from. | true | "" |
workspace_path | string | The workspace path to the directory to list files from. | false | ${workspaceFolder} |
write_file
Writes the given content to a file. Creates directories if they don't exist.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_path | string | The path to the file to write to. | true | "" |
content | string | The content to write to the file. | true | "" |
workspace_path | string | The workspace path to the file to write to. | false | ${workspaceFolder} |
write_file_lines
Write lines content at the specified line numbers to a file. Creates directories if they don't exist.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_path | string | The path to the file to write to. | true | "" |
lines | object | The object containing line numbers as keys and content as values. | true | {} |
workspace_path | string | The workspace path to the file to write to. | false | ${workspaceFolder} |
append_file
Appends the given content to a file. Creates directories if they don't exist.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_path | string | The path to the file to append to. | true | "" |
content | string | The content to append to the file. | true | "" |
workspace_path | string | The workspace path to the file to append to. | false | ${workspaceFolder} |
append_file_lines
Appends lines content at the specified line numbers to a file. Creates directories if they don't exist.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_path | string | The path to the file to append to. | true | "" |
lines | object | The object containing line numbers as keys and content as values. | true | {} |
workspace_path | string | The workspace path to the file to append to. | false | ${workspaceFolder} |
web_search
Performs a web search using DuckDuckGo and returns the results.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
query | string | The search query. | true | "" |
max_results | integer | The maximum number of results to return. | false | 5 |
api_fetch
Fetch data from an API endpoint.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
url | string | The API endpoint URL. | true | "" |
method | string | The HTTP method to use (GET, POST, etc.). | false | GET |
headers | object | The headers to include in the request. | false | {} |
params | object | The query parameters to include in the request. | false | {} |
data | object | The data to include in the request body. | false | {} |
json | object | The JSON data to include in the request body. | false | {} |
timeout | integer | The timeout for the request in seconds. | false | 10 |
run_shell_command
Runs a shell command and returns the output.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
command | string | The shell command to run. | true | "" |
validate_xml
Validates an XML file against a given XSD schema.
This uses the `xmllint` CLI command to validate the XML file. Make sure to install the command for your operating system.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
xml_file_path | string | The path to the XML file to validate. | true | "" |
xsd_file_path | string | The path to the XSD schema file. | true | "" |
workspace_path | string | The workspace path to the files to validate. | false | ${workspaceFolder} |
retrieve_embeddings
Fetches the embeddings from the local vector database and uses them to answer questions.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
input | string | The input to the agent. | true | "" |
modify_embeddings
Updates the embeddings for the specified files.
Item | Type | Description | Required | Defaults |
---|---|---|---|---|
file_paths | string[] | The paths to the files to update. | true | [] |
use_git_ignore | boolean | Whether to use all of the .gitignore files in the workspace to determine which files to update. | false | true |
exclude_file_paths | string[] | The paths to the files to exclude from the update. | false | [] |
Troubleshooting
- If the MCP server is not making any requests to the LLM server, do the following:
  - Restart VS Code as a sanity check.
  - Ensure the Ollama LLM server is running and accessible.
  - Copy the server files again using the commands above, as another sanity check.
  - Restart the MCP server in the `.vscode/mcp.json` file.
  - Create a new chat in GitHub Copilot and switch to `Agent` mode.
- You can check the logs using:

  Windows:

  ```sh
  type %homedrive%%homepath%\agentic-workflow-mcp\logs.txt
  ```

  Mac/Linux:

  ```sh
  tail -f ~/agentic-workflow-mcp/logs.txt
  ```

- If you want to clean the local ChromaDB, you can do so by deleting the `chroma_vector_db` folder in your workspace folder. This deletes all the embeddings so you can start fresh. Make sure to restart the MCP server after deleting the folder in order to initialize the empty database.
Using Workflows without MCP Server
If you want to use the workflows without the MCP server, you can do so by directly importing the Python classes. This way, the workflows can be deployed as a web service to a cloud provider using the same configuration files.