Advanced Settings

Tzu Huan Tai edited this page Jun 23, 2025 · 36 revisions


Broadcasting a Live Stream to 1,000+ Viewers via SFU

SFU Cloud Service

Use a WebSocket connection to the SFU server to let your Raspberry Pi broadcast a video stream to 1,000+ concurrent viewers with better scalability.

Free Testing Servers

| Host | API Key |
| --- | --- |
| free1-api.picamera.live | APIz3LVTsM2bmNi |
| free2-api.picamera.live | APImt9NH3jmLvVW |
| free3-api.picamera.live | APIYaoxAY73uY46 |

⚠️ Each testing server supports up to 100 concurrent connections, with a monthly limit of 5,000 minutes and 50 GB total data transfer shared by all users. For a dedicated environment, contact: tzu.huan.tai@gmail.com.

1. Run on Raspberry Pi

/path/to/pi-webrtc --camera=libcamera:0 \
    --fps=30 \
    --width=1920 \
    --height=1080 \
    --uid=your-display-name \
    --use-websocket \
    --use-tls \
    --ws-host=free1-api.picamera.live \
    --ws-key=APIz3LVTsM2bmNi \
    --ws-room=the-room-name

--uid: unique user ID for the publisher (e.g. "camera123")
--ws-room: shared room name for publisher/viewers

2. View the Stream. Anyone can join the room via:

Using the V4L2 Driver

  1. Modify /boot/firmware/config.txt:
    # camera_auto_detect=1  # Default setting
    camera_auto_detect=0    # Turn off the default libcamera
    start_x=1               # Includes additional codecs
    gpu_mem=128             # Adjust based on resolution (256MB for 1080p). This option is not valid on Bookworm.

Tip

Not sure whether to use V4L2 or Libcamera?
V4L2 is typically used with older or generic USB cameras that don’t require special drivers. Most USB cameras will be detected as V4L2 devices by default. In contrast, Camera Module v3 only works with Libcamera. If you're unsure, start with Libcamera.

  2. Run the command with --camera=v4l2:0 for a V4L2 camera at /dev/video0, e.g.
    ./pi-webrtc --camera=v4l2:0 \
        --uid=your-custom-uid \
        --v4l2-format=mjpeg \
        --fps=30 \
        --width=1280 \
        --height=960 \
        --hw-accel \
        --no-audio \
        --mqtt-host=your.mqtt.cloud \
        --mqtt-port=8883 \
        --mqtt-username=hakunamatata \
        --mqtt-password=Wonderful

Caution

When setting 1920x1080 with the legacy V4L2 driver, the hardware decoder firmware may adjust it to 1920x1088, while the ISP/encoder remains at 1920x1080 on the 6.6.31 kernel. This may cause memory out-of-range issues. Setting 1920x1088 resolves this issue.

Running as a Linux Service

1. Run pulseaudio as a system-wide daemon [ref]:

  • Install pulseaudio
    sudo apt install pulseaudio
  • Create a service file /etc/systemd/system/pulseaudio.service
    sudo nano /etc/systemd/system/pulseaudio.service
  • Copy the following content:
    [Unit]
    Description=Pulseaudio Daemon
    After=rtkit-daemon.service systemd-udevd.service dbus.service

    [Service]
    Type=simple
    ExecStart=/usr/bin/pulseaudio --system --disallow-exit --disallow-module-loading
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target
  • Run the command to add autospawn = no to the client config
    echo 'autospawn = no' | sudo tee -a /etc/pulse/client.conf > /dev/null
  • Add root to the pulse-access group, enable the service, and reboot
    sudo adduser root pulse-access
    sudo systemctl enable pulseaudio.service
    sudo reboot

2. To run pi-webrtc and ensure it starts automatically on reboot:

  • Create a service file /etc/systemd/system/pi-webrtc.service
    sudo nano /etc/systemd/system/pi-webrtc.service
  • Modify WorkingDirectory and ExecStart to your settings:
    [Unit]
    Description=The p2p camera via webrtc.
    After=network-online.target pulseaudio.service

    [Service]
    Type=simple
    WorkingDirectory=/path/to
    ExecStart=/path/to/pi-webrtc --camera=libcamera:0 --fps=30 --width=1280 --height=960 --uid=your-uid --hw-accel --mqtt-host=example.s1.eu.hivemq.cloud --mqtt-port=8883 --mqtt-username=hakunamatata --mqtt-password=wonderful
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target
  • Enable and start the service
    sudo systemctl daemon-reload
    sudo systemctl enable pi-webrtc.service
    sudo systemctl start pi-webrtc.service

Recording

Using a USB Drive for Recording

  1. Identify your USB drive path (e.g., /dev/sda1):

    sudo fdisk -l
  2. Automatically mount the USB drive to /mnt/ext_disk using autofs:
    Reference: Gentoo Wiki

    sudo apt-get install autofs
    echo '/- /etc/auto.usb --timeout=5' | sudo tee -a /etc/auto.master > /dev/null
    echo '/mnt/ext_disk -fstype=auto,nofail,nodev,nosuid,noatime,async,umask=000 :/dev/sda1' | sudo tee -a /etc/auto.usb > /dev/null
    sudo systemctl restart autofs
  3. Run the application with --record-path pointing to the mounted directory:

    /path/to/pi-webrtc ... --record-path=/mnt/ext_disk/video

Using a Virtual Disk File for Recording (No USB Required)

If you don't have a USB drive, you can allocate a portion of disk space as a virtual drive using a disk image file:

  1. Create a 16GB image file (you can adjust the size if needed):

    dd if=/dev/zero of=/home/pi/16gb.img bs=1M count=16384
  2. Format the image file as ext4:

    mkfs.ext4 /home/pi/16gb.img
  3. Mount the image file to a local folder:

    mkdir -p /home/pi/limited_folder
    sudo mount -o loop /home/pi/16gb.img /home/pi/limited_folder
  4. Set write permissions:

    sudo chmod 777 /home/pi/limited_folder
  5. Run the application with --record-path pointing to the mounted folder:

    /path/to/pi-webrtc ... --record-path=/home/pi/limited_folder/
  6. (Optional) Auto-mount the image at boot by editing /etc/fstab:

    sudo nano /etc/fstab

    Add the following line at the end of the file:

    /home/pi/16gb.img  /home/pi/limited_folder  ext4  loop  0  0

Caution

Be careful when editing /etc/fstab. A mistake could prevent your system from booting properly. Always test the mount manually first.
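With a fixed-size image file (or any small disk), recordings will eventually fill the volume. As a standalone safeguard separate from pi-webrtc itself, a small script can pick the oldest recordings to delete first. A minimal sketch, where the directory path and byte threshold are placeholders:

```python
import os

def files_to_prune(record_dir, bytes_needed):
    """Return oldest-first file paths in record_dir whose combined size
    covers bytes_needed; deleting them reclaims that much space."""
    entries = []
    for name in os.listdir(record_dir):
        path = os.path.join(record_dir, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((path, st.st_mtime, st.st_size))
    freed, victims = 0, []
    for path, _, size in sorted(entries, key=lambda e: e[1]):  # oldest first
        if freed >= bytes_needed:
            break
        victims.append(path)
        freed += size
    return victims

# Example: reclaim at least 2 GB from the recording folder
# for path in files_to_prune("/home/pi/limited_folder/", 2 * 1024**3):
#     os.remove(path)
```

Paired with shutil.disk_usage() in a cron job, this can run only when free space drops below a chosen threshold.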

Two-way Audio Communication

Please run pulseaudio in the background and remove the --no-audio flag.

A microphone and speaker need to be added to the Pi. The easiest option is to plug in a USB mic/speaker. If you want to use GPIO audio, follow the links below.

Microphone

Please see this link for instructions on wiring and testing a microphone on your Pi.

Speaker

You can use this link for instructions on setting up a speaker on your Pi.

Two-way DataChannel Messaging

This architecture supports AI event notifications, sensor inputs, and remote control commands between the browser and the Raspberry Pi. It works with both --use-mqtt and --use-websocket, and you can configure the --ipc-channel mode as either lossy or reliable.

Note

When using --use-websocket to connect through the SFU server, messages will be broadcast to all participants in the same room.

  • Start pi-webrtc with the --enable-ipc flag to enable IPC relay over the DataChannel:

    /path/to/pi-webrtc --camera=libcamera:0 --fps=30 ... --enable-ipc
  • Run the unix_socket_client.py example on the Pi:

    python ./examples/unix_socket_client.py
    • Logs all messages sent/received via pi-webrtc.
    • The script continuously sends "ping from client" messages through the Unix socket.
  • On the client side, picamera.js handles messaging via:

    • onMessage() – receive messages from the Pi
    • sendMessage() – send messages to the Pi

    See: Send message for IPC via DataChannel, or try it on picamera-web
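The Pi side of the IPC relay is a plain Unix-domain socket. The helper below is a sketch of a request/reply client; the socket path and message framing are assumptions, and the actual path and protocol used by --enable-ipc are defined in the unix_socket_client.py example:

```python
import socket

# NOTE: placeholder path; check examples/unix_socket_client.py in the
# pi-webrtc repo for the real socket path and framing.
SOCKET_PATH = "/tmp/pi-webrtc-ipc.sock"

def send_ipc_message(path: str, payload: bytes, bufsize: int = 4096) -> bytes:
    """Connect to a Unix stream socket, send one payload, return one reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(payload)
        return s.recv(bufsize)

# Hypothetical usage, mirroring the example's "ping from client" message:
# reply = send_ipc_message(SOCKET_PATH, b"ping from client")
```

Anything written to this socket is relayed over the DataChannel to the browser, where picamera.js surfaces it through onMessage().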

Stream AI or Any Custom Feed to a Virtual Camera

Sometimes we need to enhance images, run AI recognition, or apply preprocessing before streaming the video to the client. In such cases, virtual cameras are extremely useful.

The idea is to read the raw video stream from /dev/video0 (or another input), process the frames, and output the result to a V4L2 loopback device such as /dev/videoX.

  1. Install required packages

    sudo apt install v4l2loopback-dkms libopencv-dev python3-opencv python3-picamera2 ffmpeg
  2. Create a virtual v4l2 device at /dev/video8

    sudo modprobe v4l2loopback devices=1 video_nr=8 card_label=ProcessedCam max_buffers=4 exclusive_caps=1
  3. Create a Python virtual env:

    python -m venv --system-site-packages ~/venv
  4. Activate the env and install the required packages

    source ~/venv/bin/activate
    pip install --upgrade pip
    pip install wheel
    pip install rpi-libcamera picamera2 opencv-python
  5. Run the virtual camera

    Use Libcamera to output a YUV420 (I420) stream to the virtual device. See the virtual_cam.py example (it creates /dev/video16 by default; the command below passes /dev/video8 instead).

    python virtual_cam.py --width 1280 --height 720 --camera-id 0 --virtual-device /dev/video8
  • Runpi-webrtc

    Read from the virtual device /dev/video8 with the correct format:

    /path/to/pi-webrtc --camera=v4l2:8 --fps=30 --width=1280 --height=720 --v4l2-format=i420 ...
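For reference, I420 stores a full-resolution Y plane followed by quarter-resolution U and V planes, so each frame is width × height × 3/2 bytes. A dependency-free sketch that builds a solid-color test frame; writing it to /dev/video8 assumes the loopback device was already configured for I420 at that size:

```python
def i420_frame(width: int, height: int, y: int = 128, u: int = 128, v: int = 128) -> bytes:
    """Build one solid-color I420 frame: a width*height Y plane, then
    (width/2)*(height/2) U and V planes, concatenated as raw bytes."""
    assert width % 2 == 0 and height % 2 == 0, "I420 needs even dimensions"
    chroma = (width // 2) * (height // 2)
    return (bytes([y]) * (width * height)
            + bytes([u]) * chroma
            + bytes([v]) * chroma)

# Feeding the loopback device (assumes it is set up for I420 at 1280x720):
# with open("/dev/video8", "wb") as dev:
#     dev.write(i420_frame(1280, 720))
```

A 128/128/128 frame renders as mid gray, which makes it easy to confirm the pipeline end to end before swapping in real processed frames.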

Tip

Need to stream the same camera source to multiple clients with different pi-webrtc instances?
You can create multiple virtual cameras from the same processed source and stream each one independently. See the yolo_cam.py example to output multiple V4L2 loopback devices.

WHEP with Nginx proxy

Browsers only allow WebRTC connections on pages served over https, so pi-webrtc also needs to be served over https. Below is an nginx.conf example using DDNS and Let's Encrypt, assuming your pi-webrtc is running with the --http-port=8080 flag and the hostname is example.ddns.net.

⚠️ Don't forget to set up port forwarding to map public port 443 to your Raspberry Pi.

  • Example nginx.conf:

    http {
        gzip on;
        sendfile on;
        tcp_nopush on;
        types_hash_max_size 2048;
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        access_log /var/log/nginx/access.log;

        server {
            listen *:443 ssl;
            listen [::]:443 ssl;
            server_name example.ddns.net;
            ssl_certificate /etc/letsencrypt/live/example.ddns.net/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/example.ddns.net/privkey.pem;

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_http_version 1.1;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
        }
    }
  • Run the program

    /path/to/pi-webrtc --camera=libcamera:0 \
        --uid=home-pi-5 \
        --fps=30 \
        --width=2560 \
        --height=1440 \
        --use-whep \
        --http-port=8080 \
        --no-audio
  • Play the stream via your https://example.ddns.net/ URL in the demo WHEP player

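Under the hood, WHEP signaling is a single HTTP request: the client POSTs its SDP offer with Content-Type application/sdp and receives the SDP answer in the 201 Created response. A minimal sketch of that exchange; the /whep path on the proxied host is an assumption here, not pi-webrtc's documented endpoint:

```python
import urllib.request

def whep_post_offer(endpoint: str, sdp_offer: str) -> str:
    """POST an SDP offer to a WHEP endpoint (Content-Type: application/sdp)
    and return the SDP answer from the response body."""
    req = urllib.request.Request(
        endpoint,
        data=sdp_offer.encode(),
        headers={"Content-Type": "application/sdp"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Hypothetical usage against the nginx-proxied host above:
# answer_sdp = whep_post_offer("https://example.ddns.net/whep", offer_sdp)
```

The rest of the connection (feeding the answer into an RTCPeerConnection) happens in the player; this is only the signaling leg that the nginx proxy forwards to pi-webrtc.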

Using the WebRTC Camera in Home Assistant

This guide will walk you through setting up WebRTC Camera in Home Assistant and streaming live video using pi-webrtc on a Raspberry Pi.

1. Prepare the Environment

2. Install HACS (Home Assistant Community Store)

  • HACS allows you to install community-developed integrations like WebRTC Camera.

  • Follow the official HACS installation guide: How to Install HACS


3. Install WebRTC Camera via HACS

  • Go to Home Assistant → HACS → Integrations → search for WebRTC Camera.

  • Restart Home Assistant after installation.


4. Integrate WebRTC Camera in Home Assistant

  • Go to Settings → Devices & Services → click Add Integration.


5. Run pi-webrtc on Raspberry Pi

  • Run pi-webrtc with HTTP signaling (WHEP) to start streaming video:
    /path/to/pi-webrtc --camera=libcamera:0 \
        --uid=home-pi-4b \
        --fps=30 \
        --width=1280 \
        --height=720 \
        --use-whep \
        --http-port=8080
  • The output URL will be exposed on port 8080, e.g., http://192.168.4.35:8080.

6. Display WebRTC Video in Home Assistant Dashboard

  • Go to Dashboard → click Edit Dashboard → Add Card → select WebRTC Camera


  • Enter the URL in the configuration and save:

    type: custom:webrtc-camera
    url: webrtc:http://192.168.4.35:8080



