arroz4/ado-instruction-mcp-server

# ADO Instructions MCP Server

A modular MCP (Model Context Protocol) server that converts meeting transcripts, text inputs, and images into structured Azure DevOps work items, using Azure OpenAI for advanced image analysis.
## 🆕 New: Azure OpenAI Image Processing

The server now includes advanced image analysis capabilities powered by Azure OpenAI's vision models:

- 🖼️ **Advanced Image Analysis** - Uses the Azure OpenAI o4-mini model for intelligent image understanding
- 🔄 **Workflow Diagram Recognition** - Identifies project structures and dependency arrows from visual diagrams
- ➡️ **Dependency Arrow Analysis** - Parses arrows to determine Epic → Task parent-child relationships
- 🏗️ **Hierarchical Structure Detection** - Creates the proper work item hierarchy from visual flows

Designed for analyzing wireframes, workflow diagrams with dependency arrows, and project documentation images.
## 📝 Text Analysis and Processing

The server specializes in analyzing text content for requirements and work item generation:

- 📋 **Requirements Extraction** - Identifies tasks and features from text content
- 🔗 **Dependency Analysis** - Understands relationships between work items
- 🧠 **Smart Processing** - Extracts structured information from unstructured text
- ⚡ **Fast Text Analysis** - Optimized for meeting notes and requirements documents

Designed for processing meeting transcripts, requirements documents, and project text.
## 🛠️ Available Tools

| Tool | Input | Output | Description |
|---|---|---|---|
| `process_meeting_transcript` | `transcript: str` | ADO instructions JSON | Process long meeting notes/transcripts |
| `process_feature_image` 🆕 | `image_base64: str`, `description: str` | ADO instructions JSON | **Azure OpenAI:** Analyze workflow diagrams and dependency arrows using AI vision |
| `generate_ado_workitems_from_text` | `text_input: str`, `project_name: str`, `priority_override: str` | ADO instructions JSON | Flexible text-to-ADO conversion |
| `format_ado_instructions_summary` ⭐ | `instructions_json: str` | Formatted summary | Format work items for user review |
| `search_files_for_processing` 🔍 | `search_pattern: str`, `file_types: str`, `search_locations: str` | File search results | Find text and image files on the PC |
| `validate_ado_structure` | `instructions_json: str` | Validation report | Validate JSON structure |
| `get_organization_context` | None | Organization details | Get Omar Solutions context |
| `load_image_from_file` 🆕 | `image_path: str` | Base64 image data | Load and convert images for processing |
## ✨ Features

- 📝 **Text Processing**: Convert meeting transcripts and notes into structured ADO work items ✅ WORKING
- 🖼️ **Image Analysis**: Process wireframes and diagrams using Azure OpenAI 🆕 NEW
- 📊 **Structured Output**: Generate properly hierarchized Epic → Task relationships ✅ WORKING
- 🎯 **Smart Prioritization**: Auto-assign priorities based on content analysis ✅ WORKING
- ✅ **Comprehensive Validation**: Ensure work item structure correctness ✅ WORKING
- ⭐ **User Confirmation**: Format summaries for review before proceeding 🆕 NEW
- 🔍 **File Discovery**: Find text and image files on the PC automatically 🆕 NEW
## 🚀 Quick Start

### 1. Setup Dependencies

```shell
# Clone or navigate to the project directory
cd ado_instructions

# Activate the virtual environment (Windows PowerShell)
.venv\Scripts\Activate.ps1

# Install dependencies (already configured in pyproject.toml)
uv sync
```
### 2. Configure Azure OpenAI

Create or update the `.env` file with your Azure OpenAI credentials:

```shell
# Azure OpenAI Configuration
AZURE_OPENAI_ENDPOINT=https://omaragent.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key-here
AZURE_OPENAI_API_VERSION=2024-12-01-preview
AZURE_OPENAI_DEPLOYMENT=o4-mini-imagebot
AZURE_OPENAI_MODEL=o4-mini
```
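The server-side helper that consumes these values (`get_azure_openai_config()` in `modules/config.py`) plausibly works along these lines. This is a stdlib-only sketch, not the actual implementation — the real server loads `.env` via `python-dotenv` first:

```python
# Hypothetical sketch of get_azure_openai_config(); the real version also
# loads .env via python-dotenv before reading the process environment.
import os

def get_azure_openai_config() -> dict:
    """Collect Azure OpenAI settings from environment variables."""
    config = {
        "endpoint": os.getenv("AZURE_OPENAI_ENDPOINT", ""),
        "api_key": os.getenv("AZURE_OPENAI_API_KEY", ""),
        "api_version": os.getenv("AZURE_OPENAI_API_VERSION", "2024-12-01-preview"),
        "deployment": os.getenv("AZURE_OPENAI_DEPLOYMENT", "o4-mini-imagebot"),
        "model": os.getenv("AZURE_OPENAI_MODEL", "o4-mini"),
    }
    # Image processing is only available when both endpoint and key are set;
    # otherwise the server falls back to text-only processing.
    config["image_processing_available"] = bool(config["endpoint"] and config["api_key"])
    return config
```

If either credential is missing, the server degrades gracefully to text-only processing (see "Dependency Management" below).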
### 3. Run the Server

```shell
# Start the MCP server on the default port 2000
python server.py

# Different port
python server.py --port 8001

# Different transport
python server.py --transport stdio
```
### 4. Connect from VS Code

Add to your MCP settings:

```json
{
  "mcpServers": {
    "ado-instructions": {
      "command": "python",
      "args": ["c:/path/to/your/ado_instructions/server.py"],
      "cwd": "c:/path/to/your/ado_instructions"
    }
  }
}
```
## 📖 Usage Examples

### 1. Basic Text Processing Workflow

```python
# Step 1: Process a meeting transcript
result = process_meeting_transcript(
    transcript="Team discussed new user authentication system. Need login page, password reset, user registration, and admin dashboard. High priority for login and registration. Medium priority for password reset. Low priority for admin dashboard."
)

# Step 2: Format a summary for user review
summary = format_ado_instructions_summary(result)
print(summary)

# Step 3: Present to the user and ask for confirmation
# The user reviews the formatted output and confirms/requests changes

# Step 4: Proceed with the validated work items
validated_json = result  # If the user approved
```
### 2. 🆕 Azure OpenAI Image Processing Workflow

```python
# Step 1: Load the image from a file
image_data = load_image_from_file(
    image_path="c:/path/to/wireframe.png"
)

# Step 2: Process the image with Azure OpenAI
result = process_feature_image(
    image_base64=image_data,
    description="Wireframe showing user authentication flow with login, registration, and dashboard components"
)

# Step 3: Format a summary for user review
summary = format_ado_instructions_summary(result)
print(summary)

# Step 4: Present to the user and ask for confirmation
# The user reviews the formatted output and confirms/requests changes

# Step 5: Proceed with the validated work items
validated_json = result  # If the user approved
```
### 3. Flexible Text Input Workflow

```python
# Step 1: Convert any text to ADO work items
result = generate_ado_workitems_from_text(
    text_input="Build a mobile app with user profiles, messaging, and notifications",
    project_name="Mobile App Development",
    priority_override="High"
)

# Step 2: Format a summary for user review
summary = format_ado_instructions_summary(result)
print(summary)

# Step 3: Present to the user and ask for confirmation
# The user reviews the formatted output and confirms/requests changes

# Step 4: Proceed with the validated work items
validated_json = result  # If the user approved
```
## 🔄 Dependency Arrow Analysis for Workflow Diagrams

The `process_feature_image` tool includes advanced visual analysis to understand workflow dependencies from arrows and connections in diagrams:

### 🔍 Visual Pattern Recognition

- **Arrow Direction Analysis**: Parent → Child relationships are identified from arrow flow
- **Hierarchical Structure Detection**: Creates proper Epic → Task relationships based on visual hierarchy
- **Dependency Chain Mapping**: Follows arrow sequences to understand process flows
### 🎯 Supported Workflow Patterns

| Visual Pattern | Interpretation | ADO Structure |
|---|---|---|
| Database → Website → Frontend | Sequential dependencies | Epic: Database Implementation; Tasks: Website, Frontend |
| Login ← Dashboard → Reports | Reverse dependency flow | Epic: Dashboard; Tasks: Login, Reports |
| API ↔ Database | Bidirectional dependency | Epic: System Integration; Tasks: API, Database |
### 📋 Example: Workflow Diagram Processing

**Input Image:** `[Database] → [Website] → [Frontend/Backend]`

**Expected Output:**

- Epic: "Database Implementation" (parent)
- Task: "Build Website" (child of Database)
- Task: "Develop Frontend" (child of Website)
- Task: "Develop Backend" (child of Website)

### ⚠️ Important

When processing workflow diagrams, the system should:

- Identify the root Epic from the leftmost/topmost element
- Follow arrows to determine task dependencies
- Create a hierarchical structure where arrows indicate parent → child relationships
- Avoid creating multiple separate Epics unless the diagram shows parallel workflows
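For a diagram like the one above, the generated ADO instructions JSON might look roughly like this. The field names are illustrative only — the actual schema is defined by the data classes in `modules/models.py`:

```json
{
  "project_name": "Database Implementation",
  "work_items": [
    { "type": "Epic", "title": "Database Implementation", "priority": "High" },
    { "type": "Task", "title": "Build Website", "parent": "Database Implementation" },
    { "type": "Task", "title": "Develop Frontend", "parent": "Build Website" },
    { "type": "Task", "title": "Develop Backend", "parent": "Build Website" }
  ]
}
```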
### 4. File Discovery and Processing Workflow (🆕 NEW)

```python
# Step 1: Search for files on the PC
search_results = search_files_for_processing(
    search_pattern="wireframe",
    file_types="images,text",
    search_locations="desktop,documents"
)

# Step 2: Process the found files
# The user selects files from the search results;
# load their content and process them with the appropriate tool.
```
## 🏗️ Project Structure

```
ado_instructions/
├── server.py               # Main MCP server (FastMCP 2.11.2)
├── pyproject.toml          # Dependencies and project config
├── uv.lock                 # Dependency lock file
├── README.md               # This documentation
├── .env                    # Environment variables (Azure OpenAI config)
├── modules/
│   ├── __init__.py         # Module initialization
│   ├── models.py           # Data classes and enums
│   ├── config.py           # Environment and organization setup
│   ├── text_processor.py   # Text analysis and feature extraction
│   ├── image_processor.py  # 🆕 Azure OpenAI image analysis
│   ├── ado_generator.py    # ADO work item generation
│   ├── file_search.py      # File discovery functionality
│   ├── error_handling.py   # Comprehensive error management
│   └── common_utils.py     # Shared utilities
└── tests/
    ├── test_optimization.py # Performance tests
    ├── test_utils.py        # Utility tests
    ├── images/              # Test image files
    └── text/                # Test text files
```
## 📦 Dependencies

### Core Dependencies (5 packages total)

- ✅ `fastmcp`: MCP framework (2.11.2)
- ✅ `uvicorn`: ASGI server
- ✅ `fastapi`: Web framework
- ✅ `python-dotenv`: Environment management
- ✅ `openai`: Azure OpenAI SDK for image processing 🆕

### Module Status

- ✅ Error handling (fallback processing)
- ✅ Common utilities (shared functions)
- ✅ Models (data classes and enums)
- ✅ Configuration (environment setup)
- ✅ Text processing (feature extraction)
- ✅ 🆕 Image processing (Azure OpenAI integration)
- ✅ ADO generation (work item creation)
## 📋 MCP Tools Reference

| Tool | Input | Output | Description |
|---|---|---|---|
| `process_meeting_transcript` | `transcript: str` | ADO instructions JSON | Process long meeting notes/transcripts |
| `process_feature_image` 🆕 | `image_base64: str`, `description: str` | ADO instructions JSON | Analyze images using Azure OpenAI vision |
| `generate_ado_workitems_from_text` | `text_input: str`, `project_name: str`, `priority_override: str` | ADO instructions JSON | Flexible text-to-ADO conversion |
| `format_ado_instructions_summary` ⭐ | `instructions_json: str` | Formatted summary | NEW: Format work items for user review |
| `validate_ado_structure` | `instructions_json: str` | Validation report | Validate JSON structure |
| `get_organization_context` | None | Organization details | Get Omar Solutions context |
| `search_files_for_processing` | `search_pattern: str`, `file_types: str`, `search_locations: str` | File search results | Find text and image files on the PC |
| `load_image_from_file` 🆕 | `image_path: str` | Base64 image data | Load and convert images for processing |
### Tool-Specific Details

#### `process_meeting_transcript`

- **Purpose**: Convert meeting notes/transcripts to ADO work items
- **Input**: Long text containing the meeting discussion
- **Output**: Structured JSON with Epics and Tasks
- **Use Case**: Transform unstructured meeting notes into organized work items

#### 🆕 `process_feature_image` (NEW - Azure OpenAI)

- **Purpose**: Analyze images using Azure OpenAI's advanced vision capabilities
- **Input**: Base64 image data + optional description
- **Output**: Structured JSON with Epics and Tasks
- **Use Case**: Extract requirements from wireframes, diagrams, and visual project documentation
- **Technology**: Azure OpenAI o4-mini model with vision capabilities

#### `generate_ado_workitems_from_text`

- **Purpose**: Flexible text-to-ADO conversion
- **Input**: Any text description + optional project name and priority
- **Output**: Structured JSON with Epics and Tasks
- **Use Case**: Quick conversion of requirements text to work items

#### ⭐ `format_ado_instructions_summary` (NEW)

- **Purpose**: Format work items for user review and confirmation
- **Input**: ADO instructions JSON from any generation tool
- **Output**: User-friendly formatted summary with project structure
- **Use Case**: Present work items to the user for approval before proceeding

#### 🔍 `search_files_for_processing` (NEW)

- **Purpose**: Find text and image files on the PC for processing
- **Input**: Search pattern, file types, locations
- **Output**: List of found files with paths and metadata
- **Use Case**: Discover relevant files without knowing exact paths

#### 🆕 `load_image_from_file` (NEW)

- **Purpose**: Load and convert local images to base64 format
- **Input**: Absolute path to the image file
- **Output**: Base64-encoded image data
- **Use Case**: Prepare local images for Azure OpenAI processing
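Internally, `load_image_from_file` likely amounts to reading the file's bytes and base64-encoding them for the vision API. A stdlib-only illustration (the function name and details here are assumptions, not the server's actual code):

```python
# Illustrative sketch of base64 image loading; not the server's actual code.
import base64
from pathlib import Path

def load_image_as_base64(image_path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    data = Path(image_path).read_bytes()
    return base64.b64encode(data).decode("ascii")
```

The resulting string can be passed directly as the `image_base64` argument of `process_feature_image`.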
## 🔧 Module Details

### 📋 modules/models.py

- `WorkItem`: Data class for individual work items
- `ADOInstructions`: Container for the complete project structure
- `WorkItemType`: Enum (Epic, Task, User Story, Bug)
- `Priority`: Enum (Low, Medium, High, Critical)
- `ORGANIZATION_CONTEXT`: Omar Solutions configuration

### ⚙️ modules/config.py

- `setup_environment()`: Load .env variables, including the Azure OpenAI config
- `get_organization_context()`: Access organization data
- `get_azure_openai_config()`: 🆕 Azure OpenAI configuration management

### 📝 modules/text_processor.py

- `extract_features_from_text()`: Extract project features from text
- `extract_requirements_from_text()`: Find specific requirements
- `determine_priority_from_text()`: Auto-assign priorities

### 🖼️ modules/image_processor.py (🆕 NEW)

- `process_image_with_azure_openai()`: Analyze images using Azure OpenAI
- `extract_features_from_image()`: Extract project features from images
- `analyze_workflow_diagram()`: Specialized diagram analysis
- `format_image_analysis_result()`: Format AI analysis into the ADO structure

### 🏗️ modules/ado_generator.py

- `create_epic_from_feature()`: Generate Epic work items
- `create_task_from_requirement()`: Generate Task work items
- `generate_ado_instructions()`: Complete instruction generation
- `format_ado_summary()`: Format work items for user review

### 🔍 modules/file_search.py (⭐ NEW)

- `search_files_for_processing()`: Find text and image files on the PC
- `format_search_results_for_display()`: User-friendly result formatting
- `get_search_usage_examples()`: Usage documentation and examples
## 🛠️ Advanced Configuration

### Azure OpenAI Setup

```shell
# .env file configuration
AZURE_OPENAI_ENDPOINT=https://omaragent.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key-here
AZURE_OPENAI_API_VERSION=2024-12-01-preview
AZURE_OPENAI_DEPLOYMENT=o4-mini-imagebot
AZURE_OPENAI_MODEL=o4-mini
```

### Server Options

```shell
# Custom port
python server.py --port 8001

# Different transport
python server.py --transport stdio
```
### Azure OpenAI Configuration Example

```python
import os

from openai import AzureOpenAI

# The API key is read from the environment (see the .env configuration above)
subscription_key = os.getenv("AZURE_OPENAI_API_KEY")

client = AzureOpenAI(
    api_version="2024-12-01-preview",
    azure_endpoint="https://omaragent.openai.azure.com/",
    api_key=subscription_key
)

# Example image analysis call; image_base64 is the base64 string produced
# by load_image_from_file (see Usage Examples above)
response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are an expert at analyzing project diagrams and wireframes to extract requirements.",
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Analyze this wireframe and extract the project requirements"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_base64}"
                    }
                }
            ]
        }
    ],
    max_completion_tokens=40000,
    model="o4-mini-imagebot"
)
```
## 🧪 Testing Dependency Arrow Analysis

### ✅ Test Case 1: Linear Workflow

Input Diagram: `[Database] → [Website] → [Frontend]`

Expected Output:

- Epic: "Database Implementation"
- Task: "Build Website" (depends on Database)
- Task: "Develop Frontend" (depends on Website)

### ✅ Test Case 2: Parallel Dependencies

Input Diagram: `[API] → [Frontend]` and `[API] → [Backend]`

Expected Output:

- Epic: "API Implementation"
- Task: "Develop Frontend" (depends on API)
- Task: "Develop Backend" (depends on API)

### ✅ Test Case 3: Complex Workflow

Input Diagram: `[Database] → [API Layer] → [Authentication] → [User Interface]`

Expected Output:

- Epic: "Database Implementation"
- Task: "Build API Layer" (depends on Database)
- Task: "Implement Authentication" (depends on API)
- Task: "Create User Interface" (depends on Authentication)
## 📝 Testing Your Workflow Diagrams

- Create test images with clear arrows showing dependencies
- Use descriptive context: include a description explaining the workflow
- Verify hierarchy: check that ONE Epic is created with proper Task dependencies
- Review dependency chains: ensure Tasks reference their dependencies

```python
# Test workflow diagram processing
result = process_feature_image(
    image_base64=your_workflow_diagram,
    description="Website development workflow: Database → Website → Frontend/Backend dependencies"
)

# Should generate:
# - 1 main Epic (not multiple Epics)
# - Tasks with dependency context
# - Proper workflow sequence preserved
```
### ❌ Common Issues to Avoid

- **Multiple Epics**: Should create ONE Epic from the workflow root
- **Missing Dependencies**: Tasks should reference what they depend on
- **Wrong Hierarchy**: Follow arrows for parent → child relationships
- **Lost Sequence**: Preserve the workflow order shown by arrows
### Dependency Management
The server gracefully handles missing dependencies:
- **Azure OpenAI**: Falls back to text-only processing if API key unavailable
- **Image Processing**: Uses fallback methods if Azure OpenAI is unavailable
- **All modules**: Comprehensive error handling with fallbacks
## 🔍 Troubleshooting
| Issue | Solution |
|-------|----------|
| **ModuleNotFoundError** | Activate virtual environment: `.venv\Scripts\Activate.ps1` |
| **Azure OpenAI API errors** | Check API key and endpoint configuration in .env |
| **Image processing failures** | Verify Azure OpenAI deployment and model availability |
| **Port already in use** | Change port: `python server.py --port 8001` |
| **Import errors** | Check module structure and relative imports |
| **Text processing errors** | Verify input format and content |
## 🚀 Production Deployment
### Azure Container Apps Deployment
The server is designed to run in Azure Container Apps with proper containerization and dependency management.
#### 🔧 **Key Deployment Configurations**
```dockerfile
# Dockerfile - Fixed configuration for Azure Container Apps
FROM python:3.11-slim
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV CONTAINER_ENV=true
WORKDIR /app
# Use uv for fast dependency management
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-cache
COPY . .
# Critical: Use 0.0.0.0 for container networking
CMD ["uv", "run", "python", "server.py", "--port", "3000", "--host", "0.0.0.0"]
```

#### 🛠️ Environment Variables for Azure Container Apps

```shell
# Required Azure OpenAI Configuration
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_API_VERSION=2024-12-01-preview
AZURE_OPENAI_DEPLOYMENT=o4-mini-imagebot
AZURE_OPENAI_MODEL=o4-mini

# Container Environment Detection
CONTAINER_ENV=true
```

#### 🚀 Deployment Steps
1. **Build and Tag Image**

   ```shell
   docker build -t your-registry.azurecr.io/ado-instructions-server:v1.2.0 .
   ```

2. **Push to Azure Container Registry**

   ```shell
   az acr login --name your-registry
   docker push your-registry.azurecr.io/ado-instructions-server:v1.2.0
   ```

3. **Update Container App**

   ```shell
   az containerapp update \
     --name ado-instructions-server \
     --resource-group YourResourceGroup \
     --image your-registry.azurecr.io/ado-instructions-server:v1.2.0
   ```
#### ⚠️ Common Deployment Issues Solved

| Issue | Root Cause | Solution |
|---|---|---|
| 503 Service Unavailable | Server binding to `localhost` instead of `0.0.0.0` | Use `--host 0.0.0.0` in CMD |
| `ModuleNotFoundError: openai` | Dependencies not loaded in the container | Use `uv run python server.py` |
| Port mismatch | Default port 2000 vs. container port 3000 | Explicitly set `--port 3000` |
| Environment detection | Local .env vs. Azure environment variables | Smart environment detection in config.py |
#### ✅ Deployment Verification

After deployment, verify the server is running:

```shell
# Check container app status
az containerapp show --name ado-instructions-server --resource-group YourResourceGroup --query "properties.runningStatus"

# Check logs for successful startup
az containerapp logs show --name ado-instructions-server --resource-group YourResourceGroup --tail 20

# Expected log output:
# ✅ Azure OpenAI configured - Image processing available
# INFO: Uvicorn running on http://0.0.0.0:3000
```
#### 🧩 Deployment Challenges Solved

**Challenge 1: Network Binding in Containers**

- **Problem**: Server defaulted to `localhost`, preventing external connections in Azure Container Apps
- **Root Cause**: Container networking requires binding to `0.0.0.0` to accept traffic from the container orchestrator
- **Solution**: Changed the default host from `"localhost"` to `"0.0.0.0"` in server.py argument parsing
- **Impact**: Fixed 503 Service Unavailable errors

**Challenge 2: Dependency Loading in Containers**

- **Problem**: `ModuleNotFoundError: No module named 'openai'` when running `python server.py` directly
- **Root Cause**: Container environment needed proper dependency isolation and loading
- **Solution**: Use `uv run python server.py` instead of direct `python server.py` execution
- **Impact**: Ensures all dependencies are available in the container runtime

**Challenge 3: Port Configuration Consistency**

- **Problem**: Dockerfile used port 3000, but the server default was 2000
- **Root Cause**: Mismatched port configurations between container and server defaults
- **Solution**: Explicitly specify `--port 3000` in the Dockerfile CMD instruction
- **Impact**: Consistent port usage across all deployment methods

**Challenge 4: Environment Variable Detection**

- **Problem**: Server tried to load the .env file in the container environment
- **Root Cause**: No environment detection logic to differentiate local vs. cloud deployment
- **Solution**: Added smart environment detection in config.py using Azure-specific environment variables
- **Impact**: Proper configuration loading for both local development and Azure deployment
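The environment detection described in Challenge 4 could be sketched like this. This is a hypothetical illustration, not the actual config.py code — the `CONTAINER_ENV` flag comes from the Dockerfile shown earlier:

```python
# Hypothetical sketch of "smart environment detection"; names are illustrative.
import os

def running_in_container() -> bool:
    """True when the Dockerfile-set CONTAINER_ENV flag indicates a container."""
    return os.getenv("CONTAINER_ENV", "").lower() == "true"

def setup_environment() -> None:
    """Use injected env vars in Azure; load .env only for local development."""
    if running_in_container():
        return  # Azure Container Apps injects configuration as env vars
    try:
        from dotenv import load_dotenv  # python-dotenv, a listed dependency
        load_dotenv()
    except ImportError:
        pass  # fall back to whatever is already in the environment
```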
#### 📊 Before vs. After Deployment Fix

| Aspect | Before (Failing) | After (Working) |
|---|---|---|
| Host Binding | `localhost` (container internal) | `0.0.0.0` (accepts external traffic) |
| Dependency Loading | `python server.py` (missing modules) | `uv run python server.py` (isolated environment) |
| Port Configuration | Mixed (2000 default, 3000 exposed) | Consistent 3000 throughout |
| Environment Detection | Always tries .env file | Smart detection: .env for local, env vars for Azure |
| Container Status | 503 Service Unavailable | ✅ Running successfully |
#### 🔌 MCP Configuration

Update your local MCP configuration to use the deployed server:

```json
{
  "servers": {
    "ado-instruction-server": {
      "url": "https://your-app.azurecontainerapps.io/mcp/",
      "type": "http"
    }
  }
}
```
#### Local Development vs. Production

| Environment | Host | Port | Dependencies | Environment Variables |
|---|---|---|---|---|
| Local | `localhost` or `0.0.0.0` | 2000 (default) | `uv run` recommended | .env file |
| Azure Container Apps | `0.0.0.0` (required) | 3000 | `uv run` (required) | Azure Container App settings |
## 🎯 Best Practices

### ⭐ NEW: File Discovery Workflow

- ✅ Use the search tool first: find files without knowing exact paths
- ✅ Filter by type: search for "images", "text", or "all" file types
- ✅ Multiple locations: search desktop, documents, and downloads simultaneously
- ✅ Pattern matching: use wildcards or keywords to find specific files
- ✅ Check file sizes: avoid processing very large files

### ⭐ NEW: User Confirmation Workflow

- ✅ Always format and present work items after generation
- ✅ Ask for user confirmation: "Are these instructions correct?"
- ✅ Offer modifications if the user wants changes
- ✅ Document any changes requested by the user
- ✅ Validate the final output before proceeding

### For Text Inputs:

- ✅ Use action-oriented language ("build", "create", "implement")
- ✅ Mention specific technologies or features
- ✅ Include requirements and acceptance criteria
- ✅ Test with various input lengths

### 🆕 For Image Inputs:

- ✅ Use clear wireframes or diagrams
- ✅ Provide descriptive context in the description field
- ✅ Supported formats: PNG, JPG, GIF, BMP, WebP
- ✅ Keep images under 10 MB for optimal processing
- ✅ Ensure good contrast and readable text in diagrams
- ✅ Include workflow arrows and clear component relationships
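The format and size guidance above can be enforced with a small pre-flight check before sending an image to Azure OpenAI. The helper name and limits below mirror the list but are illustrative, not part of the server's API:

```python
# Hedged pre-flight check mirroring the guidance above; names are illustrative.
from pathlib import Path

MAX_IMAGE_BYTES = 10 * 1024 * 1024  # the 10 MB guideline above
SUPPORTED_FORMATS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp"}

def image_ok_for_processing(image_path: str) -> bool:
    """Check format and size before sending an image for analysis."""
    path = Path(image_path)
    return (
        path.is_file()
        and path.suffix.lower() in SUPPORTED_FORMATS
        and path.stat().st_size <= MAX_IMAGE_BYTES
    )
```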
### For Development:

- ✅ Follow modular architecture principles
- ✅ Add comprehensive error handling
- ✅ Include type hints and docstrings
- ✅ Test modules independently
## 🚀 Next Steps
- Enhanced Azure OpenAI Integration: Improve image analysis accuracy and speed
- Advanced Visual Analysis: Better UI component detection in wireframes
- Multi-Modal Processing: Combine text and image inputs for richer analysis
- ADO Integration: Direct API integration for work item creation
- AI Enhancement: LLM-powered requirement analysis
- Web Interface: Optional web UI for easier testing
## 🏆 Successfully Tested Features

- ✅ Text Processing: Handles complex project descriptions
- ✅ Modular Architecture: All modules working independently (6 modules)
- ✅ Server Startup: Runs on the configured port (2000) with 8 tools
- ✅ Error Handling: Graceful fallbacks for missing dependencies
- ✅ MCP Integration: Compatible with the VS Code MCP framework
- ✅ ADO Generation: Produces valid work item structures
- ✅ ⭐ NEW: Formatted Summaries: User-friendly work item presentation
- ✅ ⭐ NEW: User Confirmation Workflow: Ask before proceeding
- ✅ ⭐ NEW: Priority Display: Clear "High/Medium/Low" labels
- ✅ 🔍 NEW: File Search: Find images and text files on the PC automatically
- ✅ 🏗️ NEW: Clean Server Architecture: Modularized file search functionality
- ✅ 🆕 NEW: Azure OpenAI Integration: Advanced image analysis with o4-mini
## 📊 Current Tool Count: 8 MCP Tools Available

1. `process_meeting_transcript` - Convert meeting notes to ADO work items
2. 🆕 `process_feature_image` - NEW: Analyze images using Azure OpenAI
3. `generate_ado_workitems_from_text` - Flexible text-to-ADO conversion
4. ⭐ `format_ado_instructions_summary` - NEW: Format work items for user review
5. 🔍 `search_files_for_processing` - NEW: Find images/text files on the PC
6. `validate_ado_structure` - Verify JSON structure correctness
7. `get_organization_context` - Get Omar Solutions context information
8. 🆕 `load_image_from_file` - NEW: Load and convert images for processing

Ready for advanced image processing with Azure OpenAI! 🚀