Video Streaming

Share camera feeds and broadcast live video to the team.


How It Works

GroundWave video provides two complementary capabilities in a single Video panel:

  1. Shared video feeds — operators share URLs to IP cameras, drones, or surveillance systems (HLS, RTSP, direct). All connected clients can view these feeds.
  2. Live video broadcasts — an operator broadcasts their device camera via WebRTC peer-to-peer. Other users watch the live stream directly from the broadcaster's device.

Key design decisions:

  • Shared feeds are database-backed (persisted, survive restarts). Live broadcasts are in-memory state (transient, like voice channels).
  • RTSP feeds are proxied through mediamtx + FFmpeg to produce HLS for browser playback.
  • Live broadcasts use P2P WebRTC with no STUN/TURN servers (LAN assumption).
  • Single broadcaster — mirroring the voice PTT single-transmitter model, only one client may be live at a time.
  • Feature toggle gated via FEATURES_ENABLED.

Shared Video Feeds

Operators add video feeds by providing a URL and name. The server auto-detects the feed type from the URL:

  • .m3u8 URLs — treated as HLS and played via hls.js with a Safari native fallback.
  • rtsp:// or rtsps:// URLs — treated as RTSP and proxied through mediamtx to HLS before playback.
  • rtmp:// URLs — treated as RTMP and proxied through mediamtx to HLS before playback. Supports both push and pull models.
  • Everything else — treated as Direct and played via a native <video> element.
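
The detection rules above can be sketched as a small helper. This is an illustrative sketch only — the function name and the type labels are assumptions, not GroundWave's actual identifiers:

```typescript
// Illustrative URL-based feed-type detection, following the rules above.
// Labels and function name are assumptions, not GroundWave's actual code.
type FeedType = "hls" | "rtsp" | "rtmp" | "direct";

function detectFeedType(url: string): FeedType {
  const lower = url.trim().toLowerCase();
  if (lower.startsWith("rtsp://") || lower.startsWith("rtsps://")) return "rtsp";
  if (lower.startsWith("rtmp://")) return "rtmp";
  // Match .m3u8 at the end of the path, ignoring any query string.
  if (/\.m3u8(\?|$)/.test(lower)) return "hls";
  return "direct"; // everything else plays in a native <video> element
}
```

Note that scheme checks run before the extension check, so an RTSP camera that happens to expose an .m3u8 path is still proxied.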

Feeds are stored in the video_feeds database table and synced in real-time via Socket.IO. All mutations (create, update, delete) are broadcast to connected clients immediately.

Feed type is auto-detected from the URL but can be overridden in the Add Feed form. For IP cameras that serve both RTSP and HLS, prefer HLS URLs when available — they avoid the additional mediamtx proxy step.
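
A client can keep a local feed cache in sync by applying these mutation broadcasts as they arrive. A minimal sketch, assuming a feed object keyed by an id field (the Feed shape and names here are assumptions):

```typescript
// Minimal client-side cache reducer for feed mutation events.
// The Feed shape and event-kind names are illustrative assumptions;
// only the create/update/delete broadcast semantics come from the docs.
interface Feed { id: string; name: string; url: string; type: string; }

type FeedEvent =
  | { kind: "created"; feed: Feed }
  | { kind: "updated"; feed: Feed }
  | { kind: "removed"; id: string };

// Returns a new Map so UI frameworks can detect the change by reference.
function applyFeedEvent(cache: Map<string, Feed>, ev: FeedEvent): Map<string, Feed> {
  const next = new Map(cache);
  switch (ev.kind) {
    case "created":
    case "updated":
      next.set(ev.feed.id, ev.feed); // server sends the full feed object
      break;
    case "removed":
      next.delete(ev.id); // delete payload carries only { id }
      break;
  }
  return next;
}
```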

RTSP Proxy Pipeline

RTSP feeds cannot be played directly in a browser. GroundWave routes them through a server-side pipeline that converts the stream to HLS segments that any browser can consume.

  1. A mediamtx container (bluenviron/mediamtx:latest-ffmpeg) runs alongside the app container.
  2. When a user opens an RTSP feed, mediamtx's runOnDemand hook starts FFmpeg automatically.
  3. FFmpeg pulls the RTSP or RTSPS stream from the IP camera.
  4. FFmpeg copies the video track without transcoding and drops the audio track entirely, avoiding codec compatibility issues.
  5. FFmpeg pushes the cleaned stream to mediamtx's local RTSP server over the loopback interface.
  6. mediamtx converts the RTSP input to HLS segments on disk.
  7. Nginx proxies /rtsp-proxy/{feedId}/index.m3u8 to the mediamtx HTTP server, making the HLS stream available to browser clients.
  8. hls.js in the browser fetches and plays the HLS playlist.
  9. FFmpeg stops automatically 30 seconds after the last viewer disconnects, releasing CPU and network resources.
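
A mediamtx path configured along these lines would reproduce steps 2–5 and 9. The path name and camera URL are placeholders, and the exact command GroundWave generates may differ:

```yaml
# Illustrative mediamtx path config — placeholders, not GroundWave's actual config.
paths:
  feed-1234:
    # Started when the first viewer requests the path (step 2).
    # -c:v copy relays the video track untranscoded; -an drops all audio.
    runOnDemand: >
      ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream1
      -c:v copy -an -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH
    # Stop FFmpeg 30 seconds after the last viewer disconnects (step 9).
    runOnDemandCloseAfter: 30s
```

$RTSP_PORT and $MTX_PATH are environment variables that mediamtx injects into runOnDemand commands, so the same template works for every feed path.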

The FFmpeg intermediary provides better compatibility with non-standard cameras (e.g., UniFi Protect) that use unusual RTP packetization. It also strips problematic multi-audio tracks that can crash the HLS muxer.

RTMP Streaming

GroundWave supports RTMP (Real-Time Messaging Protocol) for ingesting live video from external encoders like OBS Studio or FFmpeg. RTMP feeds are automatically converted to HLS for browser playback through the same mediamtx pipeline used for RTSP.

Push Model (Encoder → Server)

External encoders can push RTMP streams directly to the GroundWave server on port 1935. This is the most common setup for streaming from OBS Studio, hardware encoders, or FFmpeg.

# Push from FFmpeg to GroundWave (-re paces reading at the input's native frame rate)
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://groundwave-server:1935/stream/my-feed

# OBS Studio stream settings
Server:  rtmp://groundwave-server:1935/stream
Stream Key: my-feed

Pull Model (Server → Remote Source)

GroundWave can also pull RTMP streams from remote sources. When an operator adds a feed with an rtmp:// URL, mediamtx uses FFmpeg to pull the remote stream and convert it to HLS segments.

RTMP feeds show a purple RTMP badge in the VideoPanel. Port 1935 is exposed in Docker Compose for external encoder access. No additional containers or database migrations are required — RTMP support reuses the existing mediamtx infrastructure.

Live Video Broadcasts

Any operator or admin can go live directly from their device camera using WebRTC peer-to-peer connections. Video data never touches the server — it flows directly between the broadcaster's device and each viewer's device.

The broadcast flow:

  1. Operator clicks Go Live — the browser requests camera permission, preferring the environment-facing (rear) camera on mobile devices.
  2. A camera preview appears locally in the VideoPanel. The broadcast is registered with the server, which records the broadcaster's identity in memory.
  3. The server emits video:broadcast-start to all connected clients, notifying them that a live stream is available.
  4. A viewer clicks Watch — their client emits video:viewer-request to the server.
  5. The server relays the request to the broadcaster, which creates an RTCPeerConnection and generates an SDP offer.
  6. The offer is relayed via the server to the viewer using the video:offer event.
  7. The viewer creates an SDP answer and sends it back to the broadcaster via the server using video:answer.
  8. ICE candidates are exchanged between broadcaster and viewer through the server relay using video:ice-candidate events.
  9. Once ICE negotiation completes, the video stream flows directly between devices over the local network — not through the server.

Video broadcasts do not include audio. Voice communications are handled separately via the PTT system, allowing teams to coordinate voice and video independently.
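
The single-broadcaster rule can be enforced with a tiny in-memory registry on the server, matching the transient state described in the design notes. This is a sketch with illustrative names, not GroundWave's actual implementation:

```typescript
// Illustrative in-memory registry enforcing the single-broadcaster rule.
// Field and function names are assumptions, not GroundWave's code.
interface Broadcast { broadcasterId: string; broadcasterCallsign: string; }

let active: Broadcast | null = null; // transient, like voice-channel state

// Returns the broadcast to announce via video:broadcast-start,
// or null if another client is already live.
function startBroadcast(id: string, callsign: string): Broadcast | null {
  if (active && active.broadcasterId !== id) return null;
  active = { broadcasterId: id, broadcasterCallsign: callsign };
  return active;
}

// Returns true if this client was the live broadcaster
// (in which case the server would emit video:broadcast-stop).
function stopBroadcast(id: string): boolean {
  if (active?.broadcasterId !== id) return false;
  active = null;
  return true;
}
```

Because the state lives in memory, a server restart implicitly ends any active broadcast, just as it clears voice-channel state.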

User Interface

The VideoPanel is a slide-out panel accessible from the main toolbar via the video camera icon. It contains all video controls in a single view.

  • Go Live button (operator/admin only) — requests camera permission and starts broadcasting the device camera to all connected clients.
  • Feed and broadcast list — displays all available video sources. Each entry shows a name and a type badge indicating the source type.
  • Add Feed form (collapsible, operator/admin only) — accepts a feed name and URL. Feed type is auto-detected but can be overridden.
  • Embedded player — appears below the list when a feed or broadcast is selected. Includes a fullscreen toggle.

Type badges distinguish video sources at a glance:

  • HLS — blue badge
  • RTSP — amber badge
  • RTMP — purple badge
  • Direct — green badge
  • LIVE — pulsing red badge, shown on active broadcasts

Permissions

Video access is governed by the same RBAC system used across all GroundWave features.

Capability               Observer   Operator   Admin
View feed list           Yes        Yes        Yes
Watch feeds/broadcasts   Yes        Yes        Yes
Add/edit/delete feeds    No         Yes        Yes
Start live broadcast     No         Yes        Yes
Bulk delete feeds        No         No         Yes
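
The capability matrix reduces to a minimum-role check per capability. A sketch with illustrative names (GroundWave's RBAC implementation is not shown in these docs):

```typescript
// Illustrative role check derived from the capability matrix above.
// Names are assumptions; only the matrix itself comes from the docs.
type Role = "observer" | "operator" | "admin";
type Capability = "view" | "watch" | "manage-feeds" | "broadcast" | "bulk-delete";

// Minimum role required for each capability.
const REQUIRED: Record<Capability, Role> = {
  "view": "observer",
  "watch": "observer",
  "manage-feeds": "operator",
  "broadcast": "operator",
  "bulk-delete": "admin",
};

const RANK: Record<Role, number> = { observer: 0, operator: 1, admin: 2 };

function can(role: Role, cap: Capability): boolean {
  return RANK[role] >= RANK[REQUIRED[cap]];
}
```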

REST API

Method   Path                   Auth                Description
GET      /api/video-feeds       Any authenticated   List all feeds
POST     /api/video-feeds       Operator/Admin      Create feed
PATCH    /api/video-feeds/:id   Operator/Admin      Update feed
DELETE   /api/video-feeds/:id   Operator/Admin      Delete feed

Socket.IO Events

Feed Events

Event                Direction      Description
video:feed-created   Server → all   New feed added. Payload: feed object
video:feed-updated   Server → all   Feed modified. Payload: updated feed
video:feed-removed   Server → all   Feed deleted. Payload: { id }

Broadcast Events

Event                   Direction              Description
video:broadcast-start   Server → all           Broadcast started. Payload: { broadcaster_id, broadcaster_callsign }
video:broadcast-stop    Server → all           Broadcast ended. Payload: { broadcaster_id }
video:viewer-request    Client → Server        Viewer wants to watch. Server relays to broadcaster
video:offer             Server → viewer        SDP offer from broadcaster
video:answer            Server → broadcaster   SDP answer from viewer
video:ice-candidate     Bidirectional          ICE candidate relay between broadcaster and viewer

Feature Toggle

Video is an opt-in feature controlled by the FEATURES_ENABLED environment variable. It is not active by default.

# docker-compose.yml environment section
FEATURES_ENABLED=chat,markers,files,overlays,voice,video

When video is not listed in FEATURES_ENABLED:

  • The VideoPanel is not rendered in the client UI.
  • All video:* Socket.IO event handlers are unregistered on the server.
  • No camera permission request is issued to the browser.
  • Video feed REST endpoints return 404.
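
Gating on the toggle amounts to parsing the comma-separated list into a set. A sketch (the function name is illustrative; GroundWave's actual parsing may differ):

```typescript
// Parse a FEATURES_ENABLED value like "chat,markers,video" into a Set.
// Illustrative helper, not GroundWave's actual code.
function parseFeatures(value: string | undefined): Set<string> {
  return new Set(
    (value ?? "")
      .split(",")
      .map((f) => f.trim().toLowerCase())
      .filter((f) => f.length > 0),
  );
}

const features = parseFeatures("chat,markers,files,overlays,voice,video");
// features.has("video") would gate panel rendering and video:* handler registration
```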

On constrained hardware like Raspberry Pi, each RTSP proxy stream adds CPU and I/O load from FFmpeg and mediamtx — the video track is copied rather than transcoded, but demuxing, remuxing, and HLS segmentation still cost cycles. Live WebRTC broadcasts have minimal server overhead since video flows peer-to-peer. Benchmark with the resource benchmarking suite (scripts/benchmark/) before enabling many simultaneous RTSP feeds.