ComfyStream is a real-time AI video processing system that enables live video transformation using ComfyUI workflows. The system captures video from a webcam or other media source, processes each frame through a ComfyUI workflow graph, and streams the processed output back with sub-second latency.
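To make "workflow graph" concrete: a ComfyUI workflow is a JSON graph of nodes, where each node names a class and its inputs, and list-valued inputs are links to upstream node outputs. The sketch below shows this structure as a Python dict; the node class names and IDs are illustrative, not a workflow shipped with ComfyStream.

```python
# A ComfyUI workflow in API ("prompt") format: each node ID maps to a
# class_type and its inputs. A link is written ["source_node_id", output_index].
# The node classes here are illustrative examples only.
workflow = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "frame.png"}},
    "2": {"class_type": "DepthAnythingPreprocessor", "inputs": {"image": ["1", 0]}},
    "3": {"class_type": "SaveImage", "inputs": {"images": ["2", 0]}},
}

def upstream_nodes(wf: dict, node_id: str) -> list[str]:
    """Return the IDs of nodes this node reads from (list-valued inputs are links)."""
    return [v[0] for v in wf[node_id]["inputs"].values() if isinstance(v, list)]

print(upstream_nodes(workflow, "3"))  # ['2']
```

Because the graph is plain data, a runtime can walk it, substitute the live video frame for the image-loading node, and execute the rest of the graph per frame.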

Key Capabilities:

  • Real-time video processing at 15-30 FPS
  • WebRTC-based streaming for low latency
  • ComfyUI workflow compatibility for flexible AI pipelines
  • TensorRT acceleration for 10x+ performance improvements
  • Multiple deployment modes (Docker, cloud, local development)
  • Bring Your Own Compute (BYOC) orchestration support
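The 15-30 FPS target implies a hard per-frame time budget: everything from decode through inference to encode must complete before the next frame arrives. A quick back-of-the-envelope calculation (the function name is illustrative):

```python
def frame_budget_ms(fps: float) -> float:
    """Per-frame processing budget in milliseconds at a target frame rate."""
    return 1000.0 / fps

# At 15-30 FPS the whole pipeline (decode, workflow inference, encode)
# must fit in roughly 33-66 ms per frame.
print(round(frame_budget_ms(30), 1))  # 33.3
print(round(frame_budget_ms(15), 1))  # 66.7
```

This budget is why acceleration such as TensorRT matters: a model that takes 200 ms per frame on its own can never sustain real-time output, regardless of how efficient the streaming layer is.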

Primary Use Cases:

  • Live AI video effects (style transfer, depth estimation, face animation)
  • Real-time image-to-image translation on video streams
  • Interactive AI art generation with webcam input
  • Distributed GPU compute for video processing

Architecture

ComfyStream is organized into six primary architectural layers, each with distinct responsibilities.

[Diagram: ComfyStream Architecture]

Layer Responsibilities:

| Layer | Components | Primary Function |
| --- | --- | --- |
| Client | Browser, Webcam | Capture media input and display output |
| UI | StreamCanvas, Room, Settings | Video standardization, WebRTC setup, configuration |
| Transport | RTCPeerConnection, MediaTracks | Real-time media streaming with WebRTC |
| Server | app.py, byoc.py | WebRTC signaling, media track handling, orchestration |
| Processing | Pipeline, ComfyStreamClient | Frame-to-tensor conversion, workflow execution coordination |
| Backend | ComfyUI, Custom Nodes | Workflow graph execution, AI model inference |
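The Processing layer's frame-to-tensor conversion is the bridge between the media transport (which delivers uint8 RGB frames) and the AI backend (which expects normalized float tensors). A minimal NumPy sketch of that conversion, with illustrative function names rather than the project's actual API:

```python
import numpy as np

def frame_to_tensor(frame: np.ndarray) -> np.ndarray:
    """uint8 HxWx3 RGB frame -> float32 1xHxWx3 tensor scaled to [0, 1]."""
    return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]

def tensor_to_frame(tensor: np.ndarray) -> np.ndarray:
    """Processed tensor -> displayable uint8 HxWx3 frame."""
    clipped = np.clip(tensor[0], 0.0, 1.0)  # guard against out-of-range model output
    return (clipped * 255.0).round().astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tensor = frame_to_tensor(frame)            # shape (1, 480, 640, 3)
assert np.array_equal(tensor_to_frame(tensor), frame)  # lossless round trip
```

The leading batch dimension matches the batched-image convention used by ComfyUI nodes; the clip on the way back prevents model outputs slightly outside [0, 1] from wrapping around when cast back to uint8.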

Core Components

Last modified on January 13, 2026