Tzu Huan Tai edited this page Jun 25, 2025 · 55 revisions

This wiki provides comprehensive information on configuring and using `pi-webrtc` for video streaming, along with detailed technical insights on encoders, signaling protocols, and recording options.


## Architecture

*(Figure: Raspberry Pi WebRTC Architecture)*

## Flags

The camera control options are fully compatible with the official rpicam-apps; just follow the instructions.

Available flags for `pi-webrtc`:

| Option | Default | Description |
|---|---|---|
| `-h`, `--help` | | Display the help message. |
| `--camera` | `libcamera:0` | Specify the camera using V4L2 or Libcamera, e.g. `libcamera:0` for Libcamera, `v4l2:0` for V4L2 at `/dev/video0`. |
| `--v4l2-format` | `mjpeg` | The input format (`i420`, `yuyv`, `mjpeg`, `h264`) of the V4L2 camera. |
| `--uid` | | The unique id to identify the device. |
| `--fps` | `30` | Specify the camera frames per second. |
| `--width` | `640` | Set the camera frame width. |
| `--height` | `480` | Set the camera frame height. |
| `--rotation` | `0` | Set the rotation angle of the camera (0, 90, 180, 270). |
| `--sample-rate` | `44100` | Set the audio sample rate (in Hz). |
| `--no-audio` | `false` | Run without an audio source. |
| `--sharpness` | `1.0` | Adjust the sharpness of the libcamera output, in the range 0.0 to 15.99. |
| `--contrast` | `1.0` | Adjust the contrast of the libcamera output, in the range 0.0 to 15.99. |
| `--brightness` | `0.0` | Adjust the brightness of the libcamera output, in the range -1.0 to 1.0. |
| `--saturation` | `1.0` | Adjust the saturation of the libcamera output, in the range 0.0 to 15.99. |
| `--ev` | `0.0` | Set the EV (exposure value compensation), in the range -10.0 to 10.0. |
| `--shutter` | `0` | Set the manual shutter speed in microseconds (0 = auto). |
| `--gain` | `1.0` | Set the manual analog gain (0 = auto). |
| `--metering` | `centre` | Metering mode: centre, spot, average, custom. |
| `--exposure` | `normal` | Exposure mode: normal, sport, short, long, custom. |
| `--awb` | `auto` | AWB mode: auto, incandescent, tungsten, fluorescent, indoor, daylight, cloudy, custom. |
| `--awbgains` | `1.0,1.0` | Custom AWB gains as comma-separated red, blue values, e.g. `1.2,1.5`. |
| `--denoise` | `auto` | Denoise mode: off, cdn_off, cdn_fast, cdn_hq, auto. |
| `--tuning-file` | | Name of the camera tuning file to use; omit this option for libcamera default behaviour. |
| `--autofocus-mode` | `default` | Autofocus mode: default, manual, auto, continuous. |
| `--autofocus-range` | `normal` | Autofocus range: normal, macro, full. |
| `--autofocus-speed` | `normal` | Autofocus speed: normal, fast. |
| `--autofocus-window` | `0.0,0.0,1.0,1.0` | Autofocus window as x,y,width,height, e.g. `0.3,0.3,0.4,0.4`. |
| `--lens-position` | `default` | Set the lens to a particular focus position: `0` moves the lens to infinity, `default` uses the hyperfocal distance. |
| `--record-mode` | `both` | Recording mode: `video` to record MP4 files, `snapshot` to save periodic JPEG images, or `both` to do both simultaneously. |
| `--record-path` | | Set the path where recorded video files will be saved. If the value is empty or unavailable, the recorder will not start. |
| `--file-duration` | `60` | The length (in seconds) of each MP4 recording. |
| `--jpeg-quality` | `30` | Set the quality of the snapshot and thumbnail images, in the range 0 to 100. |
| `--peer-timeout` | `10` | The connection timeout (in seconds) after receiving a remote offer. |
| `--hw-accel` | `false` | Enable hardware acceleration by sharing DMA buffers between the decoder, scaler, and encoder to reduce CPU usage. |
| `--no-adaptive` | `false` | Disable WebRTC's adaptive resolution scaling; the output resolution remains fixed regardless of network or device conditions. |
| `--enable-ipc` | `false` | Enable IPC relay using a WebRTC DataChannel, lossy (UDP-like) or reliable (TCP-like) based on client preference. |
| `--ipc-channel` | `both` | IPC channel mode: both, lossy, reliable. |
| `--stun-url` | `stun:stun.l.google.com:19302` | Set the STUN server URL for WebRTC, e.g. `stun:xxx.xxx.xxx`. |
| `--turn-url` | | Set the TURN server URL for WebRTC, e.g. `turn:xxx.xxx.xxx:3478?transport=tcp`. |
| `--turn-username` | | Set the TURN server username for WebRTC authentication. |
| `--turn-password` | | Set the TURN server password for WebRTC authentication. |
| `--use-mqtt` | `false` | Use MQTT to exchange SDP and ICE candidates. |
| `--mqtt-host` | `localhost` | Set the MQTT server host. |
| `--mqtt-port` | `1883` | Set the MQTT server port. |
| `--mqtt-username` | | Set the MQTT server username. |
| `--mqtt-password` | | Set the MQTT server password. |
| `--use-whep` | `false` | Use WHEP (WebRTC-HTTP Egress Protocol) to exchange SDP and ICE candidates. |
| `--http-port` | `8080` | Local HTTP server port that handles signaling when using WHEP. |
| `--use-websocket` | `false` | Enable the WebSocket client to connect to the SFU server. |
| `--use-tls` | `false` | Use TLS for the WebSocket connection; use it when connecting to a `wss://` URL. |
| `--ws-host` | | The WebSocket host address of the SFU server. |
| `--ws-room` | | The room name to join on the SFU server. |
| `--ws-key` | | The API key used to authenticate with the SFU server. |
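For example, a minimal invocation that streams a Libcamera camera over WHEP with hardware acceleration might look like this. The binary path and flag values are illustrative; every flag used here is documented in the table above:

```shell
/path/to/pi-webrtc \
  --camera=libcamera:0 \
  --fps=30 --width=1280 --height=720 \
  --hw-accel \
  --use-whep --http-port=8080 \
  --no-audio
```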

> **Note**
> By default, WebRTC may dynamically reduce the streaming `fps`, `width`, or `height` based on network or device performance. However, video recording always uses the specified resolution regardless of adaptive adjustments.

## Camera Mode

There are two ways to read images from the camera. V4L2 only supports the v1 and v2 camera modules and the HQ camera; from Camera Module 3 onward, support is based on Libcamera.

### V4L2

In general, a USB camera is detected as a V4L2 device with the default settings. If you want an older CSI camera module on Pi OS (before Bookworm) to be read via V4L2, modify the `camera_auto_detect=1` flag in `/boot/firmware/config.txt`:

```
# camera_auto_detect=1
camera_auto_detect=0
start_x=1
gpu_mem=256
```

Setting `camera_auto_detect=0` allows the CSI camera to be read via V4L2. The size of `gpu_mem` depends on the desired resolution; 256 MB is enough for 1080p.

### Libcamera

This is the officially recommended way to read the camera on a Raspberry Pi.

Use the default setting `camera_auto_detect=1` in `/boot/firmware/config.txt`. In this project, Libcamera only provides the `yuv420` format, and the `--v4l2-format` flag is ignored. Since `yuv420` is an uncompressed format, check the bandwidth of the CSI/USB interface to make sure the camera can deliver high-resolution, high-frame-rate images. Each MIPI lane provides 1.5 Gbps of bandwidth on the Pi 5, but only 1 Gbps on earlier models. [ref]

| Interface | 1-lane MIPI (older Pi) | 2-lane MIPI (Pi 4) | 4-lane MIPI (Pi 5) | USB 2.0 | USB 3.0 |
|---|---|---|---|---|---|
| Bandwidth | 1 Gbps | 2 Gbps | 6 Gbps | 0.48 Gbps | 5 Gbps |

For example, YUV 4:2:0 needs 12 bits per pixel, so 4Kp60 = 3840 × 2160 × 60 × 12 ≈ 5.56 Gbps.

| Resolution | 4Kp60 | 4Kp30 | 1080p60 | 1080p30 |
|---|---|---|---|---|
| Bandwidth | 5.56 Gbps | 2.78 Gbps | 1.39 Gbps | 0.70 Gbps |
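The figures in the table can be reproduced with a one-line calculation; note that they come out as quoted only when "Gbps" is read as binary gigabits (2³⁰ bits) per second. A small sketch:

```python
def yuv420_bandwidth_gbps(width: int, height: int, fps: int) -> float:
    """Raw bandwidth of an uncompressed YUV 4:2:0 stream.

    YUV 4:2:0 carries 12 bits per pixel; the result is expressed in
    binary gigabits (2**30 bits) per second, matching the table above.
    """
    return width * height * fps * 12 / 2**30

print(f"{yuv420_bandwidth_gbps(3840, 2160, 60):.2f} Gbps")  # 4Kp60   -> 5.56 Gbps
print(f"{yuv420_bandwidth_gbps(1920, 1080, 30):.2f} Gbps")  # 1080p30 -> 0.70 Gbps
```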

## Encoding Mode

The encoder used in WebRTC depends on whether the `--hw-accel` flag is set and on the SDP context offered by the client. I recommend running `v4l2-ctl -d /dev/video0 --list-formats-ext` to check which formats your camera supports before specifying the V4L2 source format, so you can choose the most suitable encoding mode.

Hardware H264 encoding is only available on the Raspberry Pi 3, 4, and Zero 2; the Raspberry Pi 5 no longer supports the hardware encoder. Other single-board computers, such as Radxa, Odroid, etc., may support H264 hardware encoding according to their specifications; however, if their codecs do not implement the V4L2 driver, it is recommended to use Software encoding instead. The Raspberry Pi codec device files are located at [ref]:

| Codec | Location |
|---|---|
| decoder | `/dev/video10` |
| encoder | `/dev/video11` |
| scaler | `/dev/video12` |

### Hardware (V4L2 H264)

`pi-webrtc` only lists H264 codecs in the SDP while running with the `--hw-accel` flag.

- For an `h264` camera source

  ```shell
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=h264 --fps=30 --width=1280 --height=960 --hw-accel ...
  ```

  ```mermaid
  graph LR
      A(Camera) -- h264 --> B(hw decoder) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
      A -- h264 --> F(mp4)
  ```

  This command grabs the `h264` stream directly from the camera and uses the hardware decoder to convert it to `yuv420`. If WebRTC detects network or device performance issues, the hardware scaler automatically scales down the decoded `yuv420` frame resolution, and vice versa when conditions improve. When the resolution changes, the hardware encoder is reset to match the new resolution. All frame data is transferred via DMA (zero-copy) between hardware codecs. Furthermore, if the `--record-path` flag is set (enabling recording), the `h264` packets from the camera are copied directly into MP4 files.

- For an `mjpeg` camera source

  ```shell
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=mjpeg --fps=30 --width=1280 --height=960 --hw-accel ...
  ```

  ```mermaid
  graph LR
      A(camera) -- mjpeg --> B(hw decoder) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
      B -- yuv420 --> F(openh264) -- h264 --> G(mp4)
  ```

  The whole pipeline is similar to the `h264` camera source in hardware mode. The main difference is that the OpenH264 software encoder is used for video recording.

- For an `i420` camera source

  ```shell
  # use v4l2 camera
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=i420 --fps=30 --width=1280 --height=960 --hw-accel ...
  # use libcamera
  /path/to/pi-webrtc --camera=libcamera:0 --fps=30 --width=1280 --height=960 --hw-accel ...
  ```

  ```mermaid
  graph LR
      A(camera) -- yuv420 --> C(hw scaler) -- yuv420 --> D(hw encoder) -- h264 --> E(webrtc client)
      A -- yuv420 --> F(openh264) -- h264 --> G(mp4)
  ```

  This command captures uncompressed `yuv420` from the camera. Since it is uncompressed, the CSI/USB bandwidth may not support high resolution and fps. `i420` is useful when running on a Pi Zero or when CPU usage is excessively high due to many background services. The OpenH264 software encoder is used for video recording, but this mode is typically chosen because of limited system resources, so the recorder is usually not enabled.

### Software (H264/VP8/VP9/AV1)

`pi-webrtc` lists all H264, VP8, VP9, and AV1 codecs in the SDP while running without the `--hw-accel` flag. The encoder used depends on the SDP provided by the client. For example, if the client's SDP only includes H264, WebRTC will use the H264 encoder for live streaming. Make sure the client's SDP contains only the codec you want to use.

- For an `h264` camera source

  ```shell
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=h264 --fps=30 --width=1280 --height=960 ...
  ```

  This is not available in `pi-webrtc`; H264 software decoding is not implemented.

- For an `mjpeg` camera source

  ```shell
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=mjpeg --fps=30 --width=1280 --height=960 ...
  ```

  ```mermaid
  graph LR
      A(camera) -- mjpeg --> B(libyuv) -- yuv420 --> C(libyuv scaler) -- yuv420 --> D(openh264) -- h264 --> E(webrtc client)
      B -- yuv420 --> F(openh264) -- h264 --> G(mp4)
  ```

  This is suitable for most devices that do not support the V4L2 hardware encoder. The `mjpeg` frames are decoded into `yuv420` by libyuv. If WebRTC requires a lower resolution for live streaming, the scaling is also handled by libyuv. Recording uses separate instances of the OpenH264 encoder.

- For an `i420` camera source

  ```shell
  # use v4l2 camera
  /path/to/pi-webrtc --camera=v4l2:0 --v4l2-format=i420 --fps=30 --width=1280 --height=960 ...
  # use libcamera
  /path/to/pi-webrtc --camera=libcamera:0 --fps=30 --width=1280 --height=960 ...
  ```

  ```mermaid
  graph LR
      A(camera) -- yuv420 --> C(libyuv scaler) -- yuv420 --> D(openh264) -- h264 --> E(webrtc client)
      A -- yuv420 --> F(openh264) -- h264 --> G(mp4)
  ```

  This is suitable for devices that do not support the V4L2 hardware encoder but provide very high CSI/USB bandwidth.

## Signaling

### MQTT

*(Figure: rpi-mqtt)*

`pi-webrtc` registers itself with the MQTT server at startup and waits for the app client to send a request to initiate the connection. The diagram below shows how the connection process operates between the app client, the MQTT server, and `pi-webrtc`. Assume `--uid=home-pi-5`; `${mqttId}` is another random UID used to identify each MQTT connection.

```shell
/path/to/pi-webrtc --camera=libcamera:0 \
  --fps=30 \
  --width=1280 \
  --height=960 \
  --use-mqtt \
  --mqtt-host=your.mqtt.cloud \
  --mqtt-port=8883 \
  --mqtt-username=hakunamatata \
  --mqtt-password=Wonderful \
  --uid=home-pi-5 \
  --no-audio
```

```mermaid
sequenceDiagram
    Note over pi-webrtc, mqtt server: sub: home-pi-5/sdp/+/offer<br>sub: home-pi-5/ice/+/offer
    client --> pi-webrtc: start connecting
    Note over client, mqtt server: sub: home-pi-5/sdp/${mqttId}<br>sub: home-pi-5/ice/${mqttId}
    client ->> mqtt server: client's SDP
    Note over client, mqtt server: pub: home-pi-5/sdp/${mqttId}/offer
    mqtt server ->> pi-webrtc: client's SDP
    pi-webrtc ->> mqtt server: pi's SDP
    Note over pi-webrtc, mqtt server: pub: home-pi-5/sdp/${mqttId}
    mqtt server ->> client: pi's SDP
    client ->> mqtt server: client's ICE
    Note over client, mqtt server: pub: home-pi-5/ice/${mqttId}/offer
    mqtt server ->> pi-webrtc: client's ICE
    pi-webrtc ->> mqtt server: pi's ICE
    Note over pi-webrtc, mqtt server: pub: home-pi-5/ice/${mqttId}
    mqtt server ->> client: pi's ICE
    client -> pi-webrtc: connected
```
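From the client side, the topic layout in the diagram can be summarized as follows. This is a sketch inferred from the sequence above, not an authoritative protocol description, and the helper name is hypothetical:

```python
def mqtt_signaling_topics(uid: str, mqtt_id: str) -> dict:
    """Topics a client uses when signaling with a pi-webrtc device `uid`.

    `mqtt_id` is the random per-connection id (the `${mqttId}` in the
    diagram). The device itself subscribes to `{uid}/sdp/+/offer` and
    `{uid}/ice/+/offer`, so it sees offers from every client.
    """
    return {
        # the client publishes its offer here...
        "publish_sdp": f"{uid}/sdp/{mqtt_id}/offer",
        "publish_ice": f"{uid}/ice/{mqtt_id}/offer",
        # ...and subscribes here for the device's answer
        "subscribe_sdp": f"{uid}/sdp/{mqtt_id}",
        "subscribe_ice": f"{uid}/ice/{mqtt_id}",
    }

print(mqtt_signaling_topics("home-pi-5", "abc123")["publish_sdp"])
# home-pi-5/sdp/abc123/offer
```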

### WHEP

*(Figure: rpi-whep)*

You can play the video directly via a URL like `https://your.ddns-to-pi.net` with a WHEP player, without server registration. This allows you to stream video like a traditional RTSP/RTMP stream using a simple URL. Most browsers require TLS/SSL, so you might need an nginx proxy.

```shell
/path/to/pi-webrtc --camera=libcamera:0 \
  --fps=30 \
  --width=1280 \
  --height=960 \
  --use-whep \
  --http-port=8080 \
  --uid=home-pi-5 \
  --no-audio
```

```mermaid
sequenceDiagram
    participant Server as pi-webrtc
    participant Client as WHEP Player
    Client->>Server: client's SDP/ICE
    Note over Client, Server: POST to `https://your.ddns-to-pi.net`
    Server->>Client: pi's SDP/ICE
    Note over Client, Server: 201 Created
    Client->Server: connected
```
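WHEP signaling is a single HTTP round trip: the player POSTs its SDP offer with `Content-Type: application/sdp` and receives the answer in a `201 Created` response. A minimal sketch of the request a player would build (the endpoint URL is illustrative and the helper name is hypothetical):

```python
import urllib.request

def build_whep_offer(endpoint: str, offer_sdp: str) -> urllib.request.Request:
    """Build the HTTP POST carrying the client's SDP offer to a WHEP endpoint.

    Per the sequence above, the server replies `201 Created` with its own
    SDP answer in the response body.
    """
    return urllib.request.Request(
        endpoint,
        data=offer_sdp.encode(),
        headers={"Content-Type": "application/sdp"},
        method="POST",
    )

# Construct (but do not send) an offer request:
req = build_whep_offer("https://your.ddns-to-pi.net", "v=0\r\no=- 0 0 IN IP4 0.0.0.0\r\n")
```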

### WebSocket

*(Figure: rpi-sfu)*

The WebSocket client is designed to connect to an SFU (Selective Forwarding Unit) server.

## Recording

Video files are recorded every minute, and each video file generates a snapshot image for preview. If free disk space falls below 400 MB, file rotation starts.

| Format | Codec |
|---|---|
| Video | H264 |
| Audio | AAC |
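The rotation behaviour described above can be pictured with a small sketch. This is an illustration of the policy, not `pi-webrtc`'s actual implementation; the constant and helper name are assumptions:

```python
import shutil

LOW_SPACE_THRESHOLD = 400 * 1024 * 1024  # the 400 MB limit mentioned above

def should_rotate(record_path: str) -> bool:
    """True when free space on the recording disk has dropped below the
    threshold, i.e. when the oldest recordings should be deleted."""
    return shutil.disk_usage(record_path).free < LOW_SPACE_THRESHOLD
```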

## Misc

### Make a Timelapse Video from .jpg Files

This creates a 30 fps MP4 timelapse video from all `.jpg` images under a specific folder, including subfolders.

1. **Generate the file list**

   Use `find` and `sed` to recursively list all `.jpg` files and format them the way `ffmpeg` expects (`file '...'` per line):

   ```shell
   find /mnt/ext_disk/video/20250509/ -type f -iname "*.jpg" | sort | sed "s/^/file '/; s/$/'/" > file_list.txt
   ```

   This ensures all files are listed in order and quoted properly (required by `ffmpeg -f concat`).


2. **Create the timelapse video with ffmpeg**

   ```shell
   ffmpeg -f concat -safe 0 -i file_list.txt -r 30 -c:v libx264 -pix_fmt yuv420p timelapse.mp4
   ```

   - `-f concat`: tells ffmpeg to read input from the list file.
   - `-safe 0`: allows using absolute paths.
   - `-r 30`: sets the output frame rate to 30 fps.
   - `-c:v libx264`: encodes the video with the H.264 codec.
   - `-pix_fmt yuv420p`: ensures compatibility with most players (e.g., browsers, mobile devices).

   The result will be saved as `timelapse.mp4` in the current directory.

## Useful Commands

| Command | Description |
|---|---|
| `v4l2-ctl --list-devices` | Show available V4L2 devices. |
| `v4l2-ctl -d /dev/video0 --list-formats-ext` | Show the formats a device supports, for cameras as well as codecs. |
| `sudo fdisk -l` | List partition tables to help set up USB disks. |
| `vcgencmd get_camera` | Check whether the camera is detected. |

### Use the Eclipse Mosquitto Debian Repository

Please follow the official Readme.txt to use the latest mosquitto packages.

