This page is under construction; feedback is welcome!
This page will walk you through the process of deploying and configuring a Livepeer Gateway for AI inference services.

Gateway Modes

You can run a Gateway in one of two modes, selected at startup as shown below:
  • Off-chain: development or local mode, with no blockchain connection.
  • On-chain: production mode, connected to the blockchain-based Livepeer network.
If you run your Gateway off-chain, you will need to run your own Orchestrator node (i.e., have access to a GPU and set it up as an Orchestrator) in order to test Gateway functionality.
There is currently no Livepeer “testnet” with Orchestrator offerings, though there are conversations underway to enable this in the future. Do you think Livepeer should have a testnet available for Gateways to connect to? Follow and contribute to the discussion on the Discord and the Forum.
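
As a rough sketch, the mode is selected with go-livepeer's -network flag when starting the node. The exact flag set below is an assumption based on current go-livepeer releases; run livepeer -help to confirm the flags for your version:

  # Off-chain (dev/local): no blockchain connection required
  livepeer -gateway -network offchain

  # On-chain (production): connect to an Arbitrum RPC endpoint with a funded account
  livepeer -gateway -network arbitrum-one-mainnet -ethUrl <RPC_URL>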

Deploy a Gateway for AI Inference Services

You can run the Livepeer AI software using one of the following methods:
  • Docker (recommended): the simplest and preferred method; see the image pull example below.
  • Pre-built binaries: an alternative if you prefer not to use Docker.
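
For the Docker route, the node software is published on Docker Hub as livepeer/go-livepeer. Pulling an image might look like the following (the tag is an assumption; check Docker Hub for the tag matching your target release):

  docker pull livepeer/go-livepeer:master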

Deploy an AI Gateway

Follow the steps below to start your Livepeer AI Gateway node.
These instructions apply to both on-chain and off-chain Gateway deployments.
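
As a minimal off-chain sketch, a Docker invocation along the following lines starts a Gateway pointed at your own Orchestrator (the Orchestrator address, ports, and flag values here are assumptions; adjust them for your setup):

  docker run \
    -v ~/.lpData/:/root/.lpData/ \
    -p 8937:8937 \
    livepeer/go-livepeer:master \
    -datadir /root/.lpData \
    -gateway \
    -network offchain \
    -orchAddr <ORCHESTRATOR_IP>:8935 \
    -httpAddr 0.0.0.0:8937 \
    -httpIngest \
    -v 6

The -orchAddr flag points the off-chain Gateway directly at a known Orchestrator; on-chain Gateways can instead discover Orchestrators through the network.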

For AI processing, the Gateway extends its functionality to handle AI-specific workflows; the entry point is go-livepeer/server/ai_mediaserver.go.

Key components include:
  • AISessionManager: manages AI processing sessions and selects Orchestrators with the required AI capabilities (see server/ai_http.go)
  • MediaMTX Integration: handles media streaming for AI processing
  • Trickle Protocol: enables efficient, low-latency streaming for real-time AI video processing
The AI workflow (server/ai_process.go) involves the following steps, exercised end to end by the request sketched below:
  • authenticating AI streams,
  • selecting AI-capable Orchestrators,
  • processing payments based on pixels, and
  • managing live AI pipelines.
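
Once the Gateway is running, you can exercise this workflow by sending a request to one of its AI endpoints. The sketch below assumes a text-to-image pipeline, the HTTP address from the Docker example above, and a model ID that your Orchestrator actually serves:

  curl -X POST http://localhost:8937/text-to-image \
    -H "Content-Type: application/json" \
    -d '{
      "model_id": "SG161222/RealVisXL_V4.0_Lightning",
      "prompt": "a sunset over a mountain lake"
    }'

Behind this single request, the Gateway authenticates the request, selects an AI-capable Orchestrator, and settles payment based on the pixels processed.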