The Livepeer Gateway supports a dual setup configuration that enables a single node to handle both traditional video transcoding and AI processing workloads simultaneously. This unified architecture reduces infrastructure complexity while providing comprehensive media processing capabilities.

Overview

The Gateway's dual capability comes from its modular architecture: different managers handle specific workflows while sharing common infrastructure for media ingestion, payment processing, and result delivery. The LivepeerNode struct contains fields for both traditional transcoding (Transcoder, TranscoderManager) and AI processing (AIWorker, AIWorkerManager); see livepeernode.go. The gateway determines the processing type based on the request:
  • Standard transcoding requests go through the BroadcastSessionsManager
  • AI requests go through the AISessionManager with AI-specific authentication and pipeline selection (see ai_auth.go)
The gateway initializes with two distinct session managers:
// Traditional transcoding session manager
sessManager = NewSessionManager(ctx, s.LivepeerNode, params)
// AI processing session manager
AISessionManager: NewAISessionManager(lpNode, AISessionManagerTTL)
Key Differences
| Aspect | Video Transcoding | AI Pipelines |
| --- | --- | --- |
| Processing Type | Format/bitrate conversion | AI model inference |
| Session Manager | BroadcastSessionsManager | AISessionManager |
| Payment Model | Per segment | Per pixel processed |
| Protocol | Standard HLS/DASH | Trickle protocol for real-time AI |
| Components | RTMP Server, Playlist Manager | MediaMTX, Trickle Server |

Configuration

To configure a gateway to handle both video transcoding and AI processing, set the appropriate flags and options when starting the livepeer binary.

Essential Flags

To enable the dual setup, configure the gateway with the following flags:
| Flag | Description |
| --- | --- |
| -gateway | Run as a gateway node |
| -httpIngest | Enable HTTP ingest for AI requests |
| -transcodingOptions | Transcoding profiles for video |
| -aiServiceRegistry | Enable AI service registry |
See: cmd/livepeer/livepeer.go

AI-Specific Configuration

AI flags
-aiModels=${env:HOME}/.lpData/cfg/aiModels.json
-aiModelsDir=${env:HOME}/.lpData/models
-aiRunnerContainersPerGPU=1
-livePaymentInterval=5s
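The -aiModels flag points at a JSON file that lists which pipelines and models the node should serve. A minimal sketch of such a file is shown below; the model ID is only an illustrative placeholder, and "warm" asks the node to keep the model loaded:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "warm": true
  }
]
```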

Transcoding Configuration

Note: if a transcodingOptions.json file is not provided, the gateway falls back to the default transcoding profiles, equivalent to -transcodingOptions=P240p30fps16x9,P360p30fps16x9.
Transcoding flags
# -transcodingOptions=P240p30fps16x9,P360p30fps16x9
-transcodingOptions=${env:HOME}/.lpData/cfg/transcodingOptions.json
-maxSessions=10
-nvidia=all  # or specific GPU IDs
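When a file path is passed instead of preset names, -transcodingOptions expects a JSON array of profile objects. A minimal sketch, with illustrative values (bps is the target bitrate in bits per second):

```json
[
  { "name": "240p", "fps": 30, "bps": 250000, "width": 426, "height": 240 },
  { "name": "360p", "fps": 30, "bps": 800000, "width": 640, "height": 360 }
]
```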

Deployment

For local development and testing purposes, there is no need to connect to the blockchain payments layer.
You will need to run your own orchestrator node for local development.
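For example, a minimal off-chain orchestrator with a combined transcoder can be started as sketched below; adjust the service address so it matches the -orchAddr the gateway is pointed at:

```shell
# Off-chain orchestrator with a combined transcoder (local development sketch)
livepeer -orchestrator -transcoder \
    -serviceAddr=0.0.0.0:8935 \
    -nvidia=all \
    -v=6
```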
Off-Chain Gateway Deployment with dual capabilities
livepeer -gateway \
    -httpIngest \
    -transcodingOptions=${env:HOME}/.lpData/offchain/transcodingOptions.json \
    -aiServiceRegistry \
    -aiModels=${env:HOME}/.lpData/cfg/aiModels.json \
    -aiModelsDir=${env:HOME}/.lpData/models \
    -aiRunnerContainersPerGPU=1 \
    -orchAddr=0.0.0.0:8935 \
    -httpAddr=0.0.0.0:9935 \
    -v=6
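Once the gateway is running, you can confirm the node is responding by querying the CLI webserver's status endpoint; this assumes the default -cliAddr port of 7935:

```shell
# Query the node's status endpoint on the CLI webserver port
curl http://localhost:7935/status
```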

Combined Gateway/Orchestrator AI-Enabled Deployment

For nodes that handle both orchestration and AI processing:

Combined Gateway/Orchestrator On-Chain Deployment

    livepeer -orchestrator -aiWorker -aiServiceRegistry \
        -serviceAddr=0.0.0.0:8935 \
        -nvidia=all \
        -aiModels=${env:HOME}/.lpData/cfg/aiModels.json \
        -aiModelsDir=${env:HOME}/.lpData/models \
        -network=arbitrum-one-mainnet \
        -ethUrl=https://arb1.arbitrum.io/rpc \
        -ethPassword=<ETH_SECRET> \
        -ethAcctAddr=<ETH_ACCT_ADDR> \
        -ethOrchAddr=<ORCH_ADDR>

Troubleshooting

Common Issues
  • AI models not loading: Check -aiModelsDir and model file permissions
  • Transcoding failures: Verify GPU drivers and -nvidia configuration
  • Port conflicts: Ensure -rtmpAddr, -httpAddr, and -cliAddr are available
  • Memory pressure: Monitor AI model memory usage, adjust -aiRunnerContainersPerGPU
Debug Commands
    # Check transcoding capabilities
    curl http://localhost:8935/getBroadcastConfig

    # Test AI endpoint
    curl -X POST http://localhost:8935/text-to-image \
    -H "Content-Type: application/json" \
    -d '{"prompt":"test image"}'

    # Monitor logs
    livepeer -gateway -v=6 2>&1 | grep -E "(transcode|AI|segment)"

Example Setup

The box setup for local development demonstrates running a gateway that handles both types of processing.
See box/box.md in the livepeer/go-livepeer repository.
Last modified on January 13, 2026