Home
This wiki provides comprehensive information on configuring and using `pi-webrtc` for video streaming, along with detailed technical insights on encoders, signaling protocols, and recording options.
The camera control options are fully compatible with the official `rpicam-apps`. Just follow the instructions.

Available flags for `pi-webrtc`:
| Option | Default | Description |
|---|---|---|
| `-h, --help` | | Display the help message. |
| `--camera` | `libcamera:0` | Specify the camera using V4L2 or libcamera, e.g. `libcamera:0` for libcamera, `v4l2:0` for V4L2 at `/dev/video0`. |
| `--v4l2-format` | `mjpeg` | The input format (`i420`, `yuyv`, `mjpeg`, `h264`) of the V4L2 camera. |
| `--uid` | | The unique ID used to identify the device. |
| `--fps` | `30` | Specify the camera frames per second. |
| `--width` | `640` | Set the camera frame width. |
| `--height` | `480` | Set the camera frame height. |
| `--rotation` | `0` | Set the rotation angle of the camera (0, 90, 180, 270). |
| `--sample-rate` | `44100` | Set the audio sample rate (in Hz). |
| `--no-audio` | `false` | Run without an audio source. |
| `--sharpness` | `1.0` | Adjust the sharpness of the libcamera output in the range 0.0 to 15.99. |
| `--contrast` | `1.0` | Adjust the contrast of the libcamera output in the range 0.0 to 15.99. |
| `--brightness` | `0.0` | Adjust the brightness of the libcamera output in the range -1.0 to 1.0. |
| `--saturation` | `1.0` | Adjust the saturation of the libcamera output in the range 0.0 to 15.99. |
| `--ev` | `0.0` | Set the EV (exposure value compensation) in the range -10.0 to 10.0. |
| `--shutter` | `0` | Set the manual shutter speed in microseconds (0 = auto). |
| `--gain` | `1.0` | Set the manual analog gain (0 = auto). |
| `--metering` | `centre` | Metering mode: centre, spot, average, custom. |
| `--exposure` | `normal` | Exposure mode: normal, sport, short, long, custom. |
| `--awb` | `auto` | AWB mode: auto, incandescent, tungsten, fluorescent, indoor, daylight, cloudy, custom. |
| `--awbgains` | `1.0,1.0` | Custom AWB gains as comma-separated red, blue values, e.g. `1.2,1.5`. |
| `--denoise` | `auto` | Denoise mode: off, cdn_off, cdn_fast, cdn_hq, auto. |
| `--tuning-file` | | Name of the camera tuning file to use; omit this option for libcamera's default behaviour. |
| `--autofocus-mode` | `default` | Autofocus mode: default, manual, auto, continuous. |
| `--autofocus-range` | `normal` | Autofocus range: normal, macro, full. |
| `--autofocus-speed` | `normal` | Autofocus speed: normal, fast. |
| `--autofocus-window` | `0.0,0.0,1.0,1.0` | Autofocus window as x,y,width,height, e.g. `0.3,0.3,0.4,0.4`. |
| `--lens-position` | `default` | Set the lens to a particular focus position; `0` moves the lens to infinity, `default` uses the hyperfocal distance. |
| `--record-mode` | `both` | Recording mode: `video` to record MP4 files, `snapshot` to save periodic JPEG images, or `both` to do both simultaneously. |
| `--record-path` | | Set the path where recorded video files will be saved. If the value is empty or unavailable, the recorder will not start. |
| `--file-duration` | `60` | The length (in seconds) of each MP4 recording. |
| `--jpeg-quality` | `30` | Set the quality of snapshot and thumbnail images in the range 0 to 100. |
| `--peer-timeout` | `10` | The connection timeout (in seconds) after receiving a remote offer. |
| `--hw-accel` | `false` | Enable hardware acceleration by sharing DMA buffers between the decoder, scaler, and encoder to reduce CPU usage. |
| `--no-adaptive` | `false` | Disable WebRTC's adaptive resolution scaling; the output resolution remains fixed regardless of network or device conditions. |
| `--enable-ipc` | `false` | Enable IPC relay via a WebRTC DataChannel, lossy (UDP-like) or reliable (TCP-like) based on client preference. |
| `--ipc-channel` | `both` | IPC channel mode: both, lossy, reliable. |
| `--stun-url` | `stun:stun.l.google.com:19302` | Set the STUN server URL for WebRTC, e.g. `stun:xxx.xxx.xxx`. |
| `--turn-url` | | Set the TURN server URL for WebRTC, e.g. `turn:xxx.xxx.xxx:3478?transport=tcp`. |
| `--turn-username` | | Set the TURN server username for WebRTC authentication. |
| `--turn-password` | | Set the TURN server password for WebRTC authentication. |
| `--use-mqtt` | `false` | Use MQTT to exchange SDP and ICE candidates. |
| `--mqtt-host` | `localhost` | Set the MQTT server host. |
| `--mqtt-port` | `1883` | Set the MQTT server port. |
| `--mqtt-username` | | Set the MQTT server username. |
| `--mqtt-password` | | Set the MQTT server password. |
| `--use-whep` | `false` | Use WHEP (WebRTC-HTTP Egress Protocol) to exchange SDP and ICE candidates. |
| `--http-port` | `8080` | Local HTTP server port used for signaling when WHEP is enabled. |
| `--use-websocket` | `false` | Enable the WebSocket client to connect to an SFU server. |
| `--use-tls` | `false` | Use TLS for the WebSocket connection; required when connecting to a `wss://` URL. |
| `--ws-host` | | The WebSocket host address of the SFU server. |
| `--ws-room` | | The room name to join on the SFU server. |
| `--ws-key` | | The API key used to authenticate with the SFU server. |
Note

By default, WebRTC may dynamically reduce the streaming `fps`, `width`, or `height` based on network or device performance. However, video recording always uses the specified resolution regardless of adaptive adjustments.
There are two ways to read images from the camera. V4L2 only supports the v1 and v2 camera modules and the HQ camera; from Camera Module 3 onward, support is based on libcamera.

In general, a USB camera is detected as a V4L2 device with the default settings. For an older camera module on Pi OS (before Bookworm) that you want to read via V4L2, modify the `camera_auto_detect=1` flag in `/boot/firmware/config.txt`:
```
# camera_auto_detect=1
camera_auto_detect=0
start_x=1
gpu_mem=256
```
Set `camera_auto_detect=0` so the CSI camera can be read via V4L2. The size of `gpu_mem` depends on the desired resolution; 256 MB is enough for 1080p.

This is the officially recommended way to read the camera on a Raspberry Pi.
Use the default setting `camera_auto_detect=1` in `/boot/firmware/config.txt`. In this project, libcamera only provides the `yuv420` format, and the `--v4l2-format` flag will be disabled. Since `yuv420` is an uncompressed format, you should check the bandwidth of the CSI/USB interface to ensure the camera can deliver images at the desired resolution and frame rate. Each MIPI lane provides 1.5 Gbps of bandwidth on the Pi 5, but only 1 Gbps on earlier models. [ref]
| Interface | 1-lane MIPI (older Pi) | 2-lane MIPI (Pi 4) | 4-lane MIPI (Pi 5) | USB 2.0 | USB 3.0 |
|---|---|---|---|---|---|
| Bandwidth | 1 Gbps | 2 Gbps | 6 Gbps | 0.48 Gbps | 5 Gbps |
For example:

YUV 4:2:0 needs 12 bits per pixel, so 4Kp60 = 3840 × 2160 × 60 × 12 ≈ 5.56 Gb/s.
| Resolution | 4Kp60 | 4Kp30 | 1080p60 | 1080p30 |
|---|---|---|---|---|
| Bandwidth | 5.56 Gbps | 2.78 Gbps | 1.39 Gbps | 0.70 Gbps |
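The arithmetic above can be wrapped in a small helper for checking other modes (`yuv420_bw` is a name invented for this sketch, not part of pi-webrtc):

```shell
# Raw YUV 4:2:0 bandwidth in bits per second: width x height x fps x 12 bpp
# (8 bits of Y per pixel plus U and V at quarter resolution, 2 bits each).
yuv420_bw() {
  echo $(( $1 * $2 * $3 * 12 ))
}

yuv420_bw 3840 2160 60   # 4Kp60   -> 5971968000 bits/s (~5.56 Gib/s)
yuv420_bw 1920 1080 30   # 1080p30 -> 746496000 bits/s  (~0.70 Gib/s)
```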
The encoder used in WebRTC depends on whether the `--hw-accel` flag is set and the SDP context offered by the client. I recommend running `v4l2-ctl -d /dev/video0 --list-formats-ext` to check which formats your camera supports before specifying the V4L2 source format, so you can choose the most suitable encoding mode.
Hardware H264 encoding is only available on the Raspberry Pi 3, 4, and Zero 2; the Raspberry Pi 5 no longer includes the hardware encoder. Other single-board computers, such as Radxa, Odroid, etc., may support H264 hardware encoding according to their specifications. However, if their codecs do not implement the V4L2 driver, it is recommended to use software encoding instead. The Raspberry Pi codec device files are located at [ref]:
| Codec | Location |
|---|---|
| decoder | /dev/video10 |
| encoder | /dev/video11 |
| scaler | /dev/video12 |
`pi-webrtc` only lists the `H264` codec in the SDP while running with the `--hw-accel` flag.
For an `h264` camera source:

```shell
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=h264 --fps=30 --width=1280 --height=960 --hw-accel ...
```

```mermaid
graph LR
A(camera) -- h264 --> B(hw decoder) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
A -- h264 --> F(mp4)
```

This command grabs the `h264` stream directly from the camera and uses the hardware decoder to convert it to `yuv420`. If WebRTC detects network or device performance issues, the hardware scaler automatically scales down the decoded `yuv420` frame resolution, and scales back up when conditions improve. When the resolution changes, the hardware encoder is reset to match the new resolution. All frame data is transferred via DMA (zero-copy) between hardware codecs. Furthermore, if the `--record-path` flag is set (enabling recording), the `h264` packets from the camera are copied directly into MP4 files.

For an `mjpeg` camera source:

```shell
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=mjpeg --fps=30 --width=1280 --height=960 --hw-accel ...
```

```mermaid
graph LR
A(camera) -- mjpeg --> B(hw decoder) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
B -- yuv420 --> F(openh264) -- h264 --> G(mp4)
```

The pipeline is similar to the `h264` camera source in hardware mode. The main difference is that the `OpenH264` software encoder is used for video recording.

For an `i420` camera source:

```shell
# use v4l2 camera
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=i420 --fps=30 --width=1280 --height=960 --hw-accel ...
# use libcamera
/path/to/pi-webrtc --camera=libcamera:0 --fps=30 --width=1280 --height=960 --hw-accel ...
```

```mermaid
graph LR
A(camera) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
A -- yuv420 --> F(openh264) -- h264 --> G(mp4)
```

This command captures uncompressed `yuv420` from the camera. Since it is uncompressed, the CSI/USB bandwidth may not support high resolution and fps. `i420` is useful when running on a Pi Zero or when CPU usage is excessively high due to many background services. The `OpenH264` software encoder is used for video recording, but this mode is typically chosen because of limited system resources, so the recorder is usually not enabled.
`pi-webrtc` lists all `H264`, `VP8`, `VP9`, and `AV1` codecs in the SDP while running without the `--hw-accel` flag. The encoder used depends on the SDP provided by the client. For example, if the client's SDP only includes H264, WebRTC will use the `H264` encoder for live streaming. Make sure the client's SDP contains only the codec you want to use.
For an `h264` camera source:

```shell
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=h264 --fps=30 --width=1280 --height=960 ...
```

This is not available in `pi-webrtc`; I didn't implement H264 software decoding.

For an `mjpeg` camera source:

```shell
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=mjpeg --fps=30 --width=1280 --height=960 ...
```

```mermaid
graph LR
A(camera) -- mjpeg --> B(libyuv) -- yuv420 --> C(libyuv scaler) -- yuv420 --> D(openh264) -- h264 --> E(webrtc client)
B -- yuv420 --> F(openh264) -- h264 --> G(mp4)
```

This is suitable for most devices that do not support the V4L2 hardware encoder. The `mjpeg` frames are decoded into `yuv420` by `libyuv`. If WebRTC requires a lower resolution for live streaming, the scaling is also handled by `libyuv`. Recording uses a separate instance of the `OpenH264` encoder.

For an `i420` camera source:

```shell
# use v4l2 camera
/path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=i420 --fps=30 --width=1280 --height=960 ...
# use libcamera
/path/to/pi-webrtc --camera=libcamera:0 --fps=30 --width=1280 --height=960 ...
```

```mermaid
graph LR
A(camera) -- yuv420 --> C(libyuv scaler) -- yuv420 --> D(openh264) -- h264 --> E(webrtc client)
A -- yuv420 --> F(openh264) -- h264 --> G(mp4)
```

This is suitable for devices that do not support the V4L2 hardware encoder but provide very high CSI/USB bandwidth.
`pi-webrtc` registers itself with the MQTT server at startup and waits for the app client to send a request to initiate the connection. The diagram below shows how the connection process operates between the app client, MQTT server, and `pi-webrtc`. Assume that `--uid=home-pi-5`, and `${mqttId}` is a random UID used to identify each MQTT connection.
```shell
/path/to/pi-webrtc --camera=libcamera:0 \
    --fps=30 \
    --width=1280 \
    --height=960 \
    --use-mqtt \
    --mqtt-host=your.mqtt.cloud \
    --mqtt-port=8883 \
    --mqtt-username=hakunamatata \
    --mqtt-password=Wonderful \
    --uid=home-pi-5 \
    --no-audio
```
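With the broker settings above, you can watch the signaling traffic with the standard `mosquitto_sub` client while debugging. This is an optional diagnostic, not part of `pi-webrtc`; the host, credentials, and topic names follow the example command:

```shell
# Subscribe to the SDP/ICE offer topics for uid=home-pi-5 and print each
# message with its topic (-v), to confirm the app client's offers arrive.
# For a TLS broker on port 8883 you may also need --cafile <ca.crt>.
mosquitto_sub -h your.mqtt.cloud -p 8883 \
    -u hakunamatata -P Wonderful \
    -t 'home-pi-5/sdp/+/offer' \
    -t 'home-pi-5/ice/+/offer' -v
```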
```mermaid
sequenceDiagram
Note over pi-webrtc, mqtt server: sub: home-pi-5/sdp/+/offer<br>sub: home-pi-5/ice/+/offer
client --> pi-webrtc: start connecting
Note over client, mqtt server: sub: home-pi-5/sdp/${mqttId}<br>sub: home-pi-5/ice/${mqttId}
client ->> mqtt server: client's SDP
Note over client, mqtt server: pub: home-pi-5/sdp/${mqttId}/offer
mqtt server ->> pi-webrtc: client's SDP
pi-webrtc ->> mqtt server: pi's SDP
Note over pi-webrtc, mqtt server: pub: home-pi-5/sdp/${mqttId}
mqtt server ->> client: pi's SDP
client ->> mqtt server: client's ICE
Note over client, mqtt server: pub: home-pi-5/ice/${mqttId}/offer
mqtt server ->> pi-webrtc: client's ICE
pi-webrtc ->> mqtt server: pi's ICE
Note over pi-webrtc, mqtt server: pub: home-pi-5/ice/${mqttId}
mqtt server ->> client: pi's ICE
client -> pi-webrtc: connected
```

You can play the video directly from a URL like `https://your.ddns-to-pi.net` with a WHEP player, without server registration. This allows you to stream video like a traditional RTSP/RTMP stream using a simple URL. Most web clients require TLS/SSL, so you might need an nginx proxy.
```shell
/path/to/pi-webrtc --camera=libcamera:0 \
    --fps=30 \
    --width=1280 \
    --height=960 \
    --use-whep \
    --http-port=8080 \
    --uid=home-pi-5 \
    --no-audio
```
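The WHEP exchange is a plain HTTP POST of the client's SDP offer, answered with the server's SDP. As a rough sketch of that step (the `offer.sdp` file is a placeholder, and a real WHEP player performs this POST for you):

```shell
# POST a WebRTC SDP offer to the WHEP endpoint; on success the server
# replies "201 Created" with the answer SDP in the response body.
curl -i -X POST https://your.ddns-to-pi.net \
    -H 'Content-Type: application/sdp' \
    --data-binary @offer.sdp
```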
```mermaid
sequenceDiagram
participant Server as pi-webrtc
participant Client as WHEP Player
Client ->> Server: client's SDP/ICE
Note over Client, Server: POST to https://your.ddns-to-pi.net
Server ->> Client: pi's SDP/ICE
Note over Client, Server: 201 Created
Client -> Server: connected
```

The WebSocket client is designed to connect to an SFU (Selective Forwarding Unit) server.
Video files are recorded every minute, and each video file generates a snapshot image for preview. If free disk space falls below 400 MB, the recorder starts rotating out the oldest files.
| | Format |
|---|---|
| Video | H264 |
| Audio | AAC |
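The low-disk trigger can be illustrated with a small shell sketch. This is only an illustration of the check, not pi-webrtc's actual implementation; the `low_disk` helper name and the `/tmp` path are assumptions, while the 400 MB threshold comes from the text above:

```shell
# Report whether free space at a path has dropped below the 400 MB threshold.
low_disk() {
  # df -Pk prints available 1 KiB blocks in column 4 (POSIX-portable output)
  avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -lt $((400 * 1024)) ]; then echo low; else echo ok; fi
}

low_disk /tmp
```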
This will create a 30 fps MP4 timelapse video from all `.jpg` images under a specific folder, including subfolders.

Use `find` and `sed` to recursively list all `.jpg` files and format them the way `ffmpeg` expects (one `file '...'` per line):

```shell
find /mnt/ext_disk/video/20250509/ -type f -iname "*.jpg" | sort | sed "s/^/file '/; s/$/'/" > file_list.txt
```
This ensures all files are listed in order and quoted properly (required by `ffmpeg -f concat`).
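You can sanity-check the quoting by running the same `sed` expression over a couple of hypothetical file names:

```shell
# Each input line becomes: file '<path>'
printf '%s\n' /a/0001.jpg /a/0002.jpg | sed "s/^/file '/; s/$/'/"
# file '/a/0001.jpg'
# file '/a/0002.jpg'
```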
```shell
ffmpeg -f concat -safe 0 -i file_list.txt -r 30 -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```

- `-f concat`: tells ffmpeg to read input from the list file.
- `-safe 0`: allows using absolute paths.
- `-r 30`: sets the output frame rate to 30 fps.
- `-c:v libx264`: encodes the video with the H.264 codec.
- `-pix_fmt yuv420p`: ensures compatibility with most players (e.g., browsers, mobile devices).
The result will be saved as `timelapse.mp4` in the current directory.
| Command | Description |
|---|---|
| `v4l2-ctl --list-devices` | Show available V4L2 devices. |
| `v4l2-ctl -d /dev/video0 --list-formats-ext` | Show the formats supported by a device; works for the codecs as well as the camera. |
| `sudo fdisk -l` | List partition tables, useful when setting up USB disks. |
| `vcgencmd get_camera` | Check whether the camera is detected. |
Please follow the official `Readme.txt` to install the latest mosquitto packages.