UDIP – Technical Architecture
This document describes the high-level system architecture, component design, communication patterns, and failure isolation strategy for UDIP.
High-Level Architecture
UDIP is composed of three primary layers:
- Frontend (Web UI): React-based single-page application (SPA)
- Orchestration Core (Node.js Backend): Process supervision, event routing, and service coordination
- AI Intelligence Subsystem (Python/FastAPI): AI-powered development assistance and reasoning
These layers communicate via:
- WebSocket for real-time updates (frontend ↔ backend)
- REST APIs for request-response operations (frontend ↔ backend)
- Internal RPC/HTTP for backend-to-AI communication
Architecture Diagram (Visual Description)
┌─────────────────────────────────────────────────────────────────┐
│ UDIP PLATFORM ARCHITECTURE │
└─────────────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────┐
│ FRONTEND LAYER (React) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │
│ │Dashboard │ │Terminal │ │File Exp. │ │ AI Assistant UI │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────────────┘ │
└───────────────────────────────────────────────────────────────┘
│ WebSocket + REST │
┌───────────────────────────────────────────────────────────────┐
│ ORCHESTRATION CORE (Node.js) │
│ ┌──────────────┐ ┌───────────┐ ┌──────────────────────┐ │
│ │ Process Mgr │ │ Event │ │ API Gateway │ │
│ │ (PM2-like) │ │ Bus │ │ (Express/Fastify) │ │
│ └──────────────┘ └───────────┘ └──────────────────────┘ │
│ ┌──────────────┐ ┌───────────┐ ┌──────────────────────┐ │
│ │ Log Aggreg. │ │ Deployment│ │ Terminal Server │ │
│ │ Service │ │ Engine │ │ (xterm.js backend) │ │
│ └──────────────┘ └───────────┘ └──────────────────────┘ │
└───────────────────────────────────────────────────────────────┘
│ Internal HTTP/RPC │
┌───────────────────────────────────────────────────────────────┐
│ AI INTELLIGENCE SUBSYSTEM (Python/FastAPI) │
│ ┌──────────────┐ ┌───────────┐ ┌──────────────────────┐ │
│ │ Context Mgr │ │ LLM │ │ Action Executor │ │
│ │ (Project │ │ Interface│ │ (File Ops, Commands)│ │
│ │ Awareness) │ │ │ │ │ │
│ └──────────────┘ └───────────┘ └──────────────────────┘ │
└───────────────────────────────────────────────────────────────┘
│ File System, Processes │
┌───────────────────────────────────────────────────────────────┐
│ SYSTEM LAYER (OS) │
│ File System │ Process Table │ Network │ Resources │
└───────────────────────────────────────────────────────────────┘
Backend Architecture (Node.js Orchestration Core)
Why Node.js for Orchestration?
Node.js is chosen as the orchestration core for the following reasons:
- Event-driven, non-blocking I/O: Ideal for managing multiple long-running processes, WebSocket connections, and I/O-heavy operations
- Process management capabilities: Node.js has robust child process APIs (child_process, worker_threads) for spawning and supervising services
- Real-time communication: Native support for WebSocket servers (Socket.io, ws) for live dashboard updates
- Ecosystem maturity: Rich ecosystem for terminal emulation (node-pty), logging (winston, pino), and system monitoring
- Cross-platform: Works seamlessly on Windows, macOS, and Linux
- Performance: Handles thousands of concurrent connections efficiently
Core Services (Node.js)
1. Process Manager
- Responsibility: Spawn, supervise, restart, and terminate long-running processes
- Technology: Built on node-pty for PTY support, with a custom process supervisor
- Persistence: Stores process state in a local database (SQLite/LevelDB)
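The supervisor's restart logic can be sketched as a small policy: exponential backoff between respawns, with a delay cap and a give-up threshold. This is a TypeScript sketch; the field names and limits are illustrative assumptions, not UDIP's actual defaults.

```typescript
// Restart policy sketch for the process supervisor (illustrative).
type RestartPolicy = {
  maxRestarts: number;   // give up after this many consecutive failures
  baseDelayMs: number;   // delay before the first retry
  maxDelayMs: number;    // cap for exponential backoff
};

// Compute the delay before the next restart attempt (exponential backoff).
function restartDelay(policy: RestartPolicy, attempt: number): number {
  const delay = policy.baseDelayMs * 2 ** attempt;
  return Math.min(delay, policy.maxDelayMs);
}

// Decide whether a crashed process should be respawned at all.
function shouldRestart(policy: RestartPolicy, consecutiveFailures: number): boolean {
  return consecutiveFailures < policy.maxRestarts;
}
```

A successful health check would reset the failure counter, so only sustained crash loops exhaust the restart budget.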
2. Event Bus
- Responsibility: Centralized event routing for inter-service communication
- Technology: In-memory event emitter or lightweight message queue (Redis optional)
- Subscribers: Alerting, monitoring, AI subsystem
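A minimal sketch of the in-memory variant, built on Node's EventEmitter (the topic and event shape are assumptions for illustration):

```typescript
import { EventEmitter } from "node:events";

// Illustrative event payload; real topics would cover logs, alerts, etc.
type ProcessEvent = { service: string; status: "started" | "crashed" | "stopped" };

// Thin publish/subscribe wrapper around a single in-process emitter.
class EventBus {
  private emitter = new EventEmitter();

  publish(topic: string, event: ProcessEvent): void {
    this.emitter.emit(topic, event);
  }

  subscribe(topic: string, handler: (event: ProcessEvent) => void): void {
    this.emitter.on(topic, handler);
  }
}
```

Swapping the emitter for a Redis client would give the same interface across process boundaries, which is what keeps the optional message-queue upgrade non-invasive.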
3. API Gateway
- Responsibility: REST API endpoints for frontend requests
- Technology: Express.js or Fastify
- Authentication: JWT-based (future multi-user support)
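Since authentication is planned to be JWT-based, a minimal HS256 sign/verify sketch using only node:crypto shows the shape. This is illustrative only; a production gateway would use a vetted JWT library and validate claims such as expiry.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Base64url encoding as required by the JWT format.
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Produce header.payload.signature with an HMAC-SHA256 signature.
function signToken(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Recompute the signature and compare in constant time.
function verifyToken(token: string, secret: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```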
4. Log Aggregation Service
- Responsibility: Collect, index, and search logs from all services
- Technology: Streams from process stdout/stderr, indexed with Elasticsearch or SQLite FTS
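Before log lines reach the index, the aggregator needs an in-memory staging buffer. A minimal sketch follows; the field names and naive substring search are assumptions, and SQLite FTS or Elasticsearch would replace the search in practice.

```typescript
// Illustrative log record; real entries would carry level, pid, etc.
type LogEntry = { service: string; line: string; ts: number };

// Bounded buffer: oldest entries are dropped once capacity is exceeded.
class LogBuffer {
  private entries: LogEntry[] = [];
  constructor(private capacity: number) {}

  ingest(entry: LogEntry): void {
    this.entries.push(entry);
    if (this.entries.length > this.capacity) this.entries.shift(); // drop oldest
  }

  // Naive substring search, standing in for the FTS index.
  search(term: string): LogEntry[] {
    return this.entries.filter((e) => e.line.includes(term));
  }
}
```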
5. Deployment Engine
- Responsibility: Execute deployment workflows (build, test, deploy, rollback)
- Technology: Config-driven state machine, executes shell scripts or containerized tasks
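The config-driven state machine can be sketched as a transition table over stages, with any failure routing to rollback (the stage names here are illustrative; real workflows would come from user configuration):

```typescript
// Illustrative deployment stages.
type Stage = "build" | "test" | "deploy" | "done" | "rollback";

// Where each stage goes when it succeeds; terminal stages loop on themselves.
const onSuccess: Record<Stage, Stage> = {
  build: "test",
  test: "deploy",
  deploy: "done",
  done: "done",
  rollback: "rollback",
};

// Any stage failure routes the workflow to rollback.
function nextStage(current: Stage, outcome: "success" | "failure"): Stage {
  return outcome === "success" ? onSuccess[current] : "rollback";
}
```

Because the table is plain data, a user's workflow config can be validated and visualized before anything executes.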
6. Terminal Server
- Responsibility: Provide web-based terminal access via WebSocket
- Technology: xterm.js (frontend) + node-pty (backend)
AI Subsystem Architecture (Python/FastAPI)
Why Python/FastAPI for AI?
Python is used exclusively for the AI intelligence layer, not for orchestration, because:
- LLM ecosystem dominance: Most LLM libraries (OpenAI SDK, LangChain, LlamaIndex, Transformers) are Python-first
- AI tooling maturity: Python has the best support for vector databases, embeddings, and AI frameworks
- FastAPI performance: FastAPI is async-capable and fast enough for AI inference requests
- Isolation: Running AI as a separate service prevents it from blocking the orchestration core
- Language fit: Python is not ideal for long-running process supervision or real-time event handling—Node.js is better suited for that
AI Subsystem Components
1. Context Manager
- Responsibility: Maintain awareness of project files, running services, logs, and state
- Technology: Indexes project files, monitors log streams, caches context in memory/vector DB
2. LLM Interface
- Responsibility: Interface with local or remote LLMs (OpenAI, Anthropic, local models)
- Technology: LangChain or direct SDK calls, supports streaming responses
3. Action Executor
- Responsibility: Execute AI-suggested actions (edit files, run commands, restart services)
- Technology: Communicates with Node.js orchestration core via internal APIs
Frontend Architecture (React/Vite)
Technology Stack
- Framework: React 18+ with TypeScript
- Build Tool: Vite for fast development and optimized production builds
- State Management: Zustand or Redux Toolkit
- Real-time Communication: Socket.io-client for WebSocket connections
- Terminal UI: xterm.js for in-browser terminal emulation
- Code Editor: Monaco Editor (VS Code's editor) or CodeMirror
Frontend Modules
- Dashboard: Real-time status overview
- Project Manager: Service control panel
- Terminal: Full terminal emulation
- File Explorer: Tree view + code editor
- Logs Viewer: Searchable, filterable log stream
- Deployment UI: Workflow progress and history
- AI Chat Interface: Context-aware AI assistant
Communication Patterns
1. Frontend ↔ Backend (Node.js)
- WebSocket: Real-time updates (logs, process status, alerts)
- REST API: CRUD operations (start/stop services, read configs, edit files)
2. Backend (Node.js) ↔ AI Subsystem (Python)
- Internal HTTP/REST: Node.js sends context + query to FastAPI, receives response
- Streaming: FastAPI streams AI responses back to Node.js, which relays to frontend
- Action Callbacks: AI requests actions by calling Node.js APIs
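The streaming relay in step 2 reduces to a loop over an async iterable of chunks, forwarding each one as it arrives. In this sketch, `send` stands in for a WebSocket send and the chunk source for a streamed FastAPI response body; the function and parameter names are assumptions.

```typescript
// Consume a streamed AI response chunk-by-chunk and forward each chunk
// to the frontend immediately, rather than buffering the full response.
async function relayStream(
  chunks: AsyncIterable<string>,
  send: (chunk: string) => void,
): Promise<number> {
  let forwarded = 0;
  for await (const chunk of chunks) {
    send(chunk);       // push to the browser as soon as it arrives
    forwarded += 1;
  }
  return forwarded;    // chunk count, useful for metrics/logging
}
```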
3. Inter-Service Communication (Node.js)
- Event Bus: Publish-subscribe pattern for decoupled service communication
- Direct Calls: Some services call each other directly via in-process function calls (monolithic deployment)
Failure Isolation Strategy
Feature-Level Fault Isolation
UDIP is designed so that failure in one subsystem does not cascade to others.
Isolation Mechanisms:
- Independent Service Processes:
  - AI subsystem runs as a separate process; if it crashes, the orchestration core continues
  - Terminal server, log aggregator, and deployment engine can be isolated processes
- Circuit Breakers:
  - If the AI subsystem is unresponsive, the orchestration core disables AI features but continues operating
  - If the log aggregator fails, logs are buffered to disk and ingested when it recovers
- Graceful Degradation:
  - If the WebSocket connection fails, the frontend falls back to polling
  - If the deployment engine is busy, new deployments are queued
- Health Checks:
  - Each subsystem exposes health endpoints
  - The orchestration core monitors subsystem health and restarts failed services
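The circuit-breaker behavior described above can be sketched as a small state holder: consecutive failures open the breaker, and a cooldown must elapse before calls to the AI subsystem are allowed again. The threshold, cooldown, and method names are illustrative assumptions.

```typescript
// Circuit breaker sketch for calls into the AI subsystem.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(private threshold: number, private cooldownMs: number) {}

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;   // close the breaker again
  }

  recordFailure(now: number): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = now; // open
  }

  // Calls are allowed while closed, or once the cooldown has elapsed
  // (a half-open probe that tests whether the subsystem recovered).
  allowRequest(now: number): boolean {
    if (this.openedAt === null) return true;
    return now - this.openedAt >= this.cooldownMs;
  }
}
```

While the breaker is open, the orchestration core can simply mark AI features as unavailable in the UI instead of letting requests pile up against a dead service.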
Internal Service Boundaries
Monolithic Deployment (Default)
- All Node.js services run in a single process
- AI subsystem runs as a separate process
- Simplifies deployment and reduces overhead
Microservices Deployment (Advanced)
- Each Node.js service can be split into independent processes/containers
- Communicates via internal HTTP or message queue (Redis, RabbitMQ)
- Enables horizontal scaling and resilience
Data Storage
Process State Database
- Technology: SQLite (default) or LevelDB
- Stores: Process configurations, restart policies, deployment history
Logs Storage
- Technology: SQLite FTS (full-text search) or Elasticsearch (advanced)
- Stores: Service logs, system metrics, audit trails
AI Context Database
- Technology: Vector database (ChromaDB, Qdrant) or in-memory cache
- Stores: Project file embeddings, conversation history, context snapshots
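Retrieval over the stored file embeddings reduces to ranking by cosine similarity against a query vector. A toy sketch follows; a real deployment would query ChromaDB/Qdrant rather than an in-memory map, and the vectors shown in usage would come from an embedding model.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the ids of the k stored vectors most similar to the query.
function topK(query: number[], store: Map<string, number[]>, k: number): string[] {
  return [...store.entries()]
    .map(([id, vec]) => ({ id, score: cosine(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((e) => e.id);
}
```

The Context Manager would feed the top-k file snippets into the LLM prompt, which is what makes the assistant "project-aware".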
Deployment Architecture
Single-Host Deployment (Default)
- UDIP runs on a single machine (developer laptop, VPS, on-premise server)
- All services bound to localhost or an internal network
Multi-Host Deployment (Advanced)
- UDIP central server connects to remote agents on other machines
- Agents relay process state, logs, and terminal access to central server
Security Considerations
- No External Network Exposure (by default):
  - UDIP binds to localhost or private network IPs
  - Users can optionally expose via VPN, Tailscale, or Cloudflare Tunnel
- Authentication (future):
  - JWT-based authentication for multi-user environments
  - RBAC for restricting access to sensitive operations
- Sandboxed Plugins:
  - Plugins run in isolated environments with limited permissions
Document Version: 1.0
Last Updated: January 2026