Streaming Technologies Overview

This guide provides an overview of common streaming technologies, including their key features, use cases, and whether segmentation is required.

1. MPEG-DASH (Dynamic Adaptive Streaming over HTTP)

Overview:

MPEG-DASH is an open ISO/IEC standard for adaptive bitrate streaming. It is similar to HLS (covered next) but more flexible and codec-agnostic.

  • Uses segmented MP4 files (often CMAF/fMP4 segments) as chunks.
  • Uses an MPD (Media Presentation Description) file as the manifest describing the available video/audio representations; a minimal example follows this list.
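
For illustration, here is a minimal static MPD with two video representations. The file names, codec strings, and segment durations are placeholder values chosen for this sketch, not part of the DASH standard:

  <?xml version="1.0" encoding="UTF-8"?>
  <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
       mediaPresentationDuration="PT60S" minBufferTime="PT2S"
       profiles="urn:mpeg:dash:profile:isoff-live:2011">
    <Period>
      <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
        <!-- 720p representation: init segment plus numbered 4-second chunks -->
        <Representation id="720p" bandwidth="3000000" width="1280" height="720" codecs="avc1.64001f">
          <SegmentTemplate initialization="init_720p.mp4" media="chunk_720p_$Number$.m4s"
                           duration="4" timescale="1" startNumber="1"/>
        </Representation>
        <!-- Lower-bitrate 360p representation for adaptive switching -->
        <Representation id="360p" bandwidth="800000" width="640" height="360" codecs="avc1.42c01e">
          <SegmentTemplate initialization="init_360p.mp4" media="chunk_360p_$Number$.m4s"
                           duration="4" timescale="1" startNumber="1"/>
        </Representation>
      </AdaptationSet>
    </Period>
  </MPD>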

Key Features:

  • Supports codecs like H.264, H.265, VP9, AV1, etc.
  • Allows configurable segment durations for low-latency streaming.
  • Supports DRM (Digital Rights Management).

Segmentation:

Required. MPEG-DASH delivers content as segmented chunks (e.g., fMP4 fragments).

2. HLS (HTTP Live Streaming)

Overview:

HLS was developed by Apple and is widely used on iOS/macOS devices. It divides video into small segments (chunks).

  • Segments typically range from 2-10 seconds.
  • Uses M3U8 playlist files as manifests to list the available bitrates and chunks; a minimal example follows this list.
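
For example, a stream might be described by a master playlist that lists the available bitrate variants, each pointing to a media playlist that lists the chunks (all file names below are placeholders):

  #EXTM3U
  #EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
  720p/playlist.m3u8
  #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
  360p/playlist.m3u8

A corresponding media playlist (720p/playlist.m3u8) for a short VOD asset would then enumerate the chunks:

  #EXTM3U
  #EXT-X-VERSION:3
  #EXT-X-TARGETDURATION:6
  #EXT-X-MEDIA-SEQUENCE:0
  #EXTINF:6.0,
  segment0.ts
  #EXTINF:6.0,
  segment1.ts
  #EXT-X-ENDLIST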

Key Features:

  • Supports adaptive bitrate streaming (ABR).
  • Works seamlessly with CDNs for scalable delivery.
  • Supports live streaming and VOD (video on demand).

Segmentation:

Required. HLS requires video content to be divided into chunks.

3. CMAF (Common Media Application Format)

Overview:

CMAF is a standardized segment format (fragmented MP4) that works with both HLS and MPEG-DASH and is widely used for low-latency streaming.

  • Uses fragmented MP4 (fMP4) as a unified format for both protocols.
  • Reduces storage costs by sharing encoded content across formats.

Key Features:

  • Supports chunked transfer encoding for sub-second latency.
  • Ideal for low-latency live streams.

Segmentation:

Required. CMAF content is segmented into fMP4 chunks.
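
As a sketch of the shared-segment idea: the same init.mp4 and chunk_N.m4s files (placeholder names) can be listed in an fMP4 HLS media playlist, as below, and referenced from a DASH MPD, so one set of encoded segments serves both protocols:

  #EXTM3U
  #EXT-X-VERSION:7
  #EXT-X-TARGETDURATION:4
  #EXT-X-MAP:URI="init.mp4"
  #EXTINF:4.0,
  chunk_1.m4s
  #EXTINF:4.0,
  chunk_2.m4s
  #EXT-X-ENDLIST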

4. RTMP (Real-Time Messaging Protocol)

Overview:

RTMP was developed by Adobe and is used for live stream ingest.

  • Uses persistent TCP connections for real-time streaming.
  • Commonly used for live broadcasting to streaming platforms like Twitch and YouTube.

Key Features:

  • Low latency (~2-3 seconds).
  • Still widely used for live streaming ingestion, especially in OBS workflows.

Segmentation:

Not required. RTMP sends a continuous stream instead of segmented chunks.
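
In practice, an encoder such as OBS is pointed at an RTMP ingest URL of the following general form, where the host, application name, and stream key are all placeholders; OBS-style tools usually split this into a Server field and a Stream Key field:

  rtmp://live.example.com/app/STREAM_KEY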

5. WebRTC (Web Real-Time Communication)

Overview:

WebRTC is used for peer-to-peer, real-time communication, such as video conferencing.

  • Supports ultra-low latency (sub-second).
  • Primarily used for interactive live streaming rather than VOD.

Key Features:

  • Runs over UDP for speed (SRTP for media, plus SCTP-based data channels).
  • No CDN needed, but scaling can be achieved using SFUs (Selective Forwarding Units).

Segmentation:

Not required. WebRTC streams continuous packets.
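
As a minimal sketch of how a peer connection is set up in the browser, the TypeScript below captures local media and publishes an SDP offer. Here sendToPeer is a hypothetical helper, since WebRTC leaves the signaling channel (WebSocket, HTTP, etc.) up to the application:

  // Start an outgoing call: capture media, gather ICE candidates,
  // and publish an SDP offer through the app's signaling channel.
  async function startCall(sendToPeer: (msg: object) => void): Promise<RTCPeerConnection> {
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public STUN server
    });

    // Forward ICE candidates to the remote peer as they are discovered.
    pc.onicecandidate = (e) => {
      if (e.candidate) sendToPeer({ candidate: e.candidate });
    };

    // Capture camera/microphone and add the tracks to the connection.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    for (const track of stream.getTracks()) pc.addTrack(track, stream);

    // Create and publish the SDP offer; the remote answer arrives via
    // signaling and is applied with pc.setRemoteDescription(answer).
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToPeer({ sdp: pc.localDescription });

    return pc;
  }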

6. SRT (Secure Reliable Transport)

Overview:

SRT is an open-source protocol designed to handle unreliable networks.

  • Great for broadcasting live streams over long distances or poor connections.
  • Uses UDP but retransmits lost packets, combining speed with reliability.

Key Features:

  • Handles packet loss and fluctuating bandwidth.
  • Supports end-to-end encryption for secure transport.

Segmentation:

Not required. SRT streams video continuously.
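
SRT endpoints are commonly addressed with srt:// URLs whose query parameters map to socket options. A caller connecting to a listener might use something like the following, where the host, port, and values are illustrative and exact option names and units vary between tools; latency trades delay for retransmission headroom, and passphrase enables the protocol's built-in encryption:

  srt://ingest.example.com:9000?mode=caller&latency=120&passphrase=0123456789abcdef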

Comparison Table

  Protocol    Use Case                   Latency        Segmentation   Common Formats
  ---------   ------------------------   ------------   ------------   ---------------------
  MPEG-DASH   VOD, live streaming        2-6 seconds    Yes            .mp4, .m4s (CMAF)
  HLS         VOD, live streaming        3-10 seconds   Yes            .ts, .m4s (CMAF)
  CMAF        Low-latency streaming      Sub-second     Yes            .m4s (fragmented MP4)
  RTMP        Live stream ingest         ~2-3 seconds   No             Continuous stream
  WebRTC      Real-time communication    Sub-second     No             Continuous packets
  SRT         Live stream contribution   ~1 second      No             Continuous stream