Definition
Gateways serve as the primary demand-aggregation layer in the Livepeer network. They accept video transcoding and AI inference requests from end customers, then distribute these jobs across the network of GPU-equipped Orchestrators. In earlier Livepeer documentation, this role was referred to as a broadcaster.
What is a Gateway?
Gateways are the entry point for applications into the Livepeer compute network. They are the coordination layer that connects real-time AI and video workloads to the orchestrators who perform the GPU compute, sitting between the protocol and the distributed compute network. A gateway is a self-operated Livepeer node that interacts directly with orchestrators, submits jobs, handles payment, and exposes direct protocol interfaces. Hosted services like Daydream are not gateways. A Gateway is responsible for:
- validating requests
- selecting Workers
- translating requests into Worker OpenAPI calls
- aggregating results
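The four responsibilities above can be sketched as a single request-handling loop. This is a minimal illustration only: the function and field names below are invented for this example and do not come from the Livepeer codebase, and the worker call stands in for what would be an OpenAPI request in practice.

```python
# Hypothetical sketch of a gateway's request lifecycle: validate,
# select workers, call them, aggregate. Names are illustrative.

def handle_request(request, workers):
    # 1. Validate the incoming request.
    if "pipeline" not in request or "payload" not in request:
        raise ValueError("invalid request")

    # 2. Select workers that support the requested pipeline.
    eligible = [w for w in workers if request["pipeline"] in w["pipelines"]]
    if not eligible:
        raise RuntimeError("no eligible workers")

    # 3. Translate the request into a worker call (a stand-in for the
    #    real OpenAPI call) and collect each worker's result.
    results = [w["call"](request["payload"]) for w in eligible]

    # 4. Aggregate results (here, trivially: return the first one).
    return results[0]
```

In a real gateway, step 3 would be a network round trip and step 4 might merge partial results or retry on failure; the shape of the loop is the point here.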
What Gateways Do
Gateways handle all service-level logic required to operate a scalable, low-latency AI video network:
- Job Intake: They receive workloads from applications using Livepeer APIs, PyTrickle, or BYOC integrations.
- Capability & Model Matching: Gateways determine which orchestrators support the required GPU, model, or pipeline.
- Routing & Scheduling: They dispatch jobs to the optimal orchestrator based on performance, availability, and pricing.
- Marketplace Exposure: Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
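Capability matching and routing can be illustrated with a simple scoring function. This is a sketch under invented assumptions: the field names and the latency/price weighting are made up for this example, not part of any Livepeer API.

```python
# Hypothetical dispatch logic: filter orchestrators by capability and
# availability, then pick the best by a combined price/latency score.

def pick_orchestrator(orchestrators, required_model):
    # Capability & model matching: keep orchestrators that advertise
    # the required model and are currently available.
    candidates = [o for o in orchestrators
                  if required_model in o["models"] and o["available"]]
    if not candidates:
        raise RuntimeError("no orchestrator supports " + required_model)
    # Routing & scheduling: lower price and lower latency are better;
    # latency is weighted more heavily for real-time workloads.
    return min(candidates,
               key=lambda o: o["price_per_unit"] + 2.0 * o["latency_ms"] / 1000)
```

A production gateway would fold in richer signals (historical reliability, region, stake), but the filter-then-score shape is the same.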
Gateway Functions & Services
Learn More About Gateway Functions & Services
Why Gateways Matter
As Livepeer transitions into a high-demand, real-time AI network, Gateways become essential infrastructure. They enable:
- Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
- Dynamic GPU routing for inference-heavy workloads
- A decentralized marketplace of compute capabilities
- Flexible integration via the BYOC pipeline model
Summary
Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and dispatch them to orchestrators for GPU execution. This design enables a scalable, low-latency, AI-ready decentralized compute marketplace, allowing Livepeer to grow into a global provider of real-time AI video infrastructure.
Marketplace Content
Key Marketplace Features
1. Capability Discovery
Gateways and orchestrators list:
- AI model support
- Versioning and model weights
- Pipeline compatibility
- GPU type and compute class
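The listing fields above might look something like the following. The structure and field names are invented for illustration; the actual advertisement format is defined by the protocol.

```python
# Hypothetical capability listing covering the four discovery fields:
# model support, versioning/weights, pipeline compatibility, GPU class.

capability = {
    "model": "streamdiffusion",        # AI model support
    "model_version": "v2",             # versioning
    "weights_id": "weights-v2",        # model weights identifier
    "pipelines": ["realtime-video"],   # pipeline compatibility
    "gpu": {"type": "RTX 4090", "compute_class": "consumer"},
}

def find_capabilities(listings, pipeline, gpu_class):
    """Discovery: filter advertised listings by pipeline and GPU class."""
    return [c for c in listings
            if pipeline in c["pipelines"]
            and c["gpu"]["compute_class"] == gpu_class]
```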
2. Dynamic Pricing
Pricing can vary by:
- GPU class
- Model complexity
- Latency SLA
- Throughput requirements
- Region
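A quote that varies along these dimensions can be sketched as a base rate with multipliers. Every multiplier and threshold below is made up purely to show the shape of the calculation; real operators set their own pricing.

```python
# Hypothetical dynamic-pricing function: base price adjusted by GPU
# class, model complexity, latency SLA, and region. All values invented.

GPU_MULTIPLIER = {"consumer": 1.0, "datacenter": 1.8}
REGION_MULTIPLIER = {"us-east": 1.0, "eu-west": 1.1}

def quote(base_price, gpu_class, model_complexity, latency_sla_ms, region):
    price = base_price
    price *= GPU_MULTIPLIER[gpu_class]
    price *= model_complexity          # e.g. 1.0 small model, 3.0 large
    if latency_sla_ms < 100:           # tight latency SLAs cost more
        price *= 1.5
    price *= REGION_MULTIPLIER[region]
    return round(price, 4)
```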
3. Performance Competition
Orchestrators compete on:
- Speed
- Reliability
- GPU quality
- Cost efficiency
- Routing quality
- Supported features
- Latency
- Developer ecosystem fit
4. BYOC Integration
Any container-based pipeline can be brought into the marketplace:
- Run custom AI models
- Run ML workflows
- Execute arbitrary compute
- Support enterprise workloads
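The core BYOC idea, that any custom compute can sit behind a uniform job interface, can be illustrated without the container machinery. The class below is invented for this sketch; real BYOC pipelines are packaged as containers and expose their own service endpoints.

```python
# Hypothetical illustration of BYOC: wrap arbitrary compute (a custom
# AI model, an ML workflow, or any function) behind one job interface.

class CustomPipeline:
    """Wraps arbitrary compute so a gateway could dispatch jobs to it."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def run(self, payload):
        # Execute the wrapped compute and return a uniform result shape.
        return {"pipeline": self.name, "result": self.fn(payload)}

# Any callable can be registered the same way, which is what makes the
# marketplace extensible.
grayscale = CustomPipeline("grayscale", lambda frame: f"gray({frame})")
```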
Protocol Overview
Understand the Full Livepeer Network Design
Marketplace Benefits
- Developer choice: choose the best model, price, and performance
- Economic incentives: better nodes earn more work
- Scalability: network supply grows independently of demand
- Innovation unlock: new models and pipelines can be added instantly
- Decentralization: no single operator controls the workload flow
Summary
The Marketplace turns Livepeer into a competitive, discoverable, real-time AI compute layer.
- Gateways expose services
- Orchestrators execute them
- Applications choose the best fit
- Developers build on top of it
- Users benefit from low-latency, high-performance AI