Ultimate camera streaming application with support for RTSP, WebRTC, HomeKit, FFmpeg, RTMP, etc.
zero-dependency and zero-config small app for all OS (Windows, macOS, Linux, ARM)
zero-delay for many supported protocols (lowest possible streaming latency)
streaming from RTSP, RTMP, DVRIP, HTTP (FLV/MJPEG/JPEG/TS), USB cameras and other sources
streaming from any source supported by FFmpeg
streaming to RTSP, WebRTC, MSE/MP4, HomeKit, HLS or MJPEG
publish any source to popular streaming services (YouTube, Telegram, etc.)
the first project in the world to support streaming from HomeKit cameras
supports H265 for WebRTC in browser (Safari only, read more)
on-the-fly transcoding for unsupported codecs via FFmpeg
play audio files and live streams on some cameras with a speaker
multi-source two-way codecs negotiation
mixing tracks from different sources into a single stream
automatically matches client-supported codecs
two way audio for some cameras
streaming from private networks via ngrok
can be integrated into any smart home platform or used as a standalone app
Inspired by:
series of streaming projects from @deepch
webrtc go library and the whole @pion team
rtsp-simple-server idea from @aler9
GStreamer framework pipeline idea
MediaSoup framework routing idea
HomeKit Accessory Protocol from @brutella
creator of the project's logo @v_novoseltsev
Fast start
go2rtc: Binary
go2rtc: Docker
go2rtc: Home Assistant Add-on
go2rtc: Home Assistant Integration
go2rtc: Dev version
Configuration
Two way audio
Source: RTSP
Source: RTMP
Source: HTTP
Source: ONVIF
Source: FFmpeg
Source: FFmpeg Device
Source: Exec
Source: Echo
Source: Expr
Source: HomeKit
Source: Bubble
Source: DVRIP
Source: Tapo
Source: Kasa
Source: GoPro
Source: Ivideon
Source: Hass
Source: ISAPI
Source: Nest
Source: Roborock
Source: WebRTC
Source: WebTorrent
Incoming sources
Stream to camera
Publish stream
Module: Streams
Module: API
Module: RTSP
Module: RTMP
Module: WebRTC
Module: HomeKit
Module: WebTorrent
Module: ngrok
Module: Hass
Module: MP4
Module: HLS
Module: MJPEG
Module: Log
Security
Codecs filters
Codecs madness
Codecs negotiation
Projects using go2rtc
Camera experience
TIPS
FAQ
Download binary or use Docker or Home Assistant Add-on or Integration
Open web interface: http://localhost:1984/
Optionally:
add your streams to the config file
set up external access to WebRTC
Developers:
write your own web interface
integrate the web API into your smart home platform
Download the binary for your OS from the latest release:
go2rtc_win64.zip - Windows 10+ 64-bit
go2rtc_win32.zip - Windows 7+ 32-bit
go2rtc_win_arm64.zip - Windows ARM 64-bit
go2rtc_linux_amd64 - Linux 64-bit
go2rtc_linux_i386 - Linux 32-bit
go2rtc_linux_arm64 - Linux ARM 64-bit (ex. Raspberry 64-bit OS)
go2rtc_linux_arm - Linux ARM 32-bit (ex. Raspberry 32-bit OS)
go2rtc_linux_armv6 - Linux ARMv6 (for old Raspberry 1 and Zero)
go2rtc_linux_mipsel - Linux MIPS (ex. Xiaomi Gateway 3, Wyze cameras)
go2rtc_mac_amd64.zip - macOS 10.13+ Intel 64-bit
go2rtc_mac_arm64.zip - macOS ARM 64-bit
Don't forget to fix the rights on Linux and macOS: chmod +x go2rtc_xxx_xxx
The Docker container alexxit/go2rtc supports multiple architectures, including amd64, 386, arm64, and arm. This container offers the same functionality as the Home Assistant Add-on but is designed to operate independently of Home Assistant. It comes preinstalled with FFmpeg, ngrok, and Python.
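A minimal Docker Compose sketch for running this container; the volume path and the privileged flag here are illustrative choices, not requirements:

```yaml
services:
  go2rtc:
    image: alexxit/go2rtc
    network_mode: host       # recommended for WebRTC and HomeKit discovery
    privileged: true         # only needed for FFmpeg hardware transcoding
    restart: always
    volumes:
      - "~/go2rtc:/config"   # assumed folder holding your go2rtc.yaml
```

With host networking the web interface is then reachable at http://localhost:1984/.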
Install Add-On:
Settings > Add-ons > Plus > Repositories > Add https://github.com/AlexxIT/hassio-addons
go2rtc > Install > Start
Setup Integration
The WebRTC Camera custom component can be used on any Home Assistant installation, including HassWP on Windows. It can automatically download and use the latest version of go2rtc, or it can connect to an existing version of go2rtc. Add-on installation in this case is optional.
Latest, but maybe unstable version:
Binary: latest nightly release
Docker: alexxit/go2rtc:master or alexxit/go2rtc:master-hardware versions
Hass Add-on: go2rtc master or go2rtc master hardware versions
by default go2rtc will search for go2rtc.yaml in the current working directory
api server will start on default 1984 port (TCP)
rtsp server will start on default 8554 port (TCP)
webrtc will use port 8555 (TCP/UDP) for connections
ffmpeg will use default transcoding options
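A minimal go2rtc.yaml covering these defaults might look like this; the stream name and camera URL are placeholders:

```yaml
streams:
  camera1: rtsp://admin:[email protected]/stream1  # placeholder camera URL

api:
  listen: ":1984"   # HTTP API (default)

rtsp:
  listen: ":8554"   # RTSP server (default)
```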
Configuration options and a complete list of settings can be found in the wiki.
Available modules:
streams
api - HTTP API (important for WebRTC support)
rtsp - RTSP Server (important for FFmpeg support)
webrtc - WebRTC Server
mp4 - MSE, MP4 stream and MP4 snapshot Server
hls - HLS TS or fMP4 stream Server
mjpeg - MJPEG Server
ffmpeg - FFmpeg integration
ngrok - ngrok integration (external access for private network)
hass - Home Assistant integration
log - logs config
go2rtc supports different stream source types. You can configure one or multiple links of any type as a stream source.
Available source types:
rtsp - RTSP and RTSPS cameras with two way audio support
rtmp - RTMP streams
http - HTTP-FLV, MPEG-TS, JPEG (snapshots), MJPEG streams
onvif - get camera RTSP link and snapshot link using the ONVIF protocol
ffmpeg - FFmpeg integration (HLS, files and many others)
ffmpeg:device - local USB Camera or Webcam
exec - get media from external app output
echo - get stream link from bash or python
expr - get stream link via built-in expression language
homekit - streaming from HomeKit Camera
bubble - streaming from ESeeCloud/dvr163 NVR
dvrip - streaming from DVR-IP NVR
tapo - TP-Link Tapo cameras with two way audio support
kasa - TP-Link Kasa cameras
gopro - GoPro cameras
ivideon - public cameras from Ivideon service
hass - Home Assistant integration
isapi - two way audio for Hikvision (ISAPI) cameras
roborock - Roborock vacuums with cameras
webrtc - WebRTC/WHEP sources
webtorrent - WebTorrent source from another go2rtc
Read more about incoming sources
Two way audio is supported for these sources:
RTSP cameras with ONVIF Profile T (back channel connection)
DVRIP cameras
TP-Link Tapo cameras
Hikvision ISAPI cameras
Roborock vacuums models with cameras
Exec audio on server
Any Browser as IP-camera
Two way audio can be used in the browser with WebRTC technology. The browser will give access to the microphone only for HTTPS sites (read more).
go2rtc also supports playing audio files and live streams on these cameras.
```yaml
streams:
  sonoff_camera: rtsp://rtsp:[email protected]/av_stream/ch0
  dahua_camera:
    - rtsp://admin:[email protected]/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
    - rtsp://admin:[email protected]/cam/realmonitor?channel=1&subtype=1
  amcrest_doorbell:
    - rtsp://username:[email protected]:554/cam/realmonitor?channel=1&subtype=0#backchannel=0
  unifi_camera: rtspx://192.168.1.123:7441/fD6ouM72bWoFijxK
  glichy_camera: ffmpeg:rtsp://username:[email protected]/live/ch00_1
```
Recommendations
Amcrest Doorbell users may want to disable two way audio, because with an active stream the call button won't work. You need to add #backchannel=0 to the end of your RTSP link in the YAML config file
Dahua Doorbell users may want to change the backchannel audio codec
Reolink users may want NOT to use the RTSP protocol at all; some camera models have a very awful, unusable stream implementation
Ubiquiti UniFi users may want to disable HTTPS verification. Use the rtspx:// prefix instead of rtsps://. And don't use the ?enableSrtp suffix
TP-Link Tapo users may skip login and password, because go2rtc supports login without them
If your camera has two RTSP links, you can add both of them as sources. This is useful when the streams have different codecs, for example AAC audio on the main stream and PCMU/PCMA audio on the second stream
If the stream from your camera is glitchy, try using the ffmpeg source. It will not add CPU load unless you use transcoding
If the stream from your camera is very glitchy, try using transcoding with the ffmpeg source
Other options
Format: rtsp...#{param1}#{param2}#{param3}
Add custom timeout #timeout=30 (in seconds)
Ignore audio - #media=video, or ignore video - #media=audio
Ignore two way audio API #backchannel=0 - important for some glitchy cameras
Use WebSocket transport #transport=ws...
RTSP over WebSocket
```yaml
streams:
  # WebSocket with authorization, RTSP - without
  axis-rtsp-ws: rtsp://192.168.1.123:4567/axis-media/media.amp?overview=0&camera=1&resolution=1280x720&videoframeskipmode=empty&Axis-Orig-Sw=true#transport=ws://user:[email protected]:4567/rtsp-over-websocket
  # WebSocket without authorization, RTSP - with
  dahua-rtsp-ws: rtsp://user:[email protected]/cam/realmonitor?channel=1&subtype=1&proto=Private3#transport=ws://192.168.1.123/rtspoverwebsocket
```
You can get stream from RTMP server, for example Nginx with nginx-rtmp-module.
```yaml
streams:
  rtmp_stream: rtmp://192.168.1.123/live/camera1
```
Supported Content-Types:
HTTP-FLV (video/x-flv) - same as RTMP, but over HTTP
HTTP-JPEG (image/jpeg) - camera snapshot link, can be converted by go2rtc to an MJPEG stream
HTTP-MJPEG (multipart/x) - simple MJPEG stream over HTTP
MPEG-TS (video/mpeg) - legacy streaming format
The source also supports HTTP and TCP streams with autodetection of different formats: MJPEG, H.264/H.265 bitstream, MPEG-TS.
```yaml
streams:
  # [HTTP-FLV] stream in video/x-flv format
  http_flv: http://192.168.1.123:20880/api/camera/stream/780900131155/657617
  # [JPEG] snapshots from Dahua camera, will be converted to MJPEG stream
  dahua_snap: http://admin:[email protected]/cgi-bin/snapshot.cgi?channel=1
  # [MJPEG] stream will be proxied without modification
  http_mjpeg: https://mjpeg.sanford.io/count.mjpeg
  # [MJPEG or H.264/H.265 bitstream or MPEG-TS]
  tcp_magic: tcp://192.168.1.123:12345
  # Add custom header
  custom_header: "https://mjpeg.sanford.io/count.mjpeg#header=Authorization: Bearer XXX"
```
PS. Dahua cameras have a bug: if you select the MJPEG codec for the RTSP second stream, snapshots won't work.
New in v1.5.0
The source is not very useful if you already know RTSP and snapshot links for your camera. But it can be useful if you don't.
The WebUI > Add webpage supports ONVIF autodiscovery. Your server must be on the same subnet as the camera. If you use Docker, you must use "network host".
```yaml
streams:
  dahua1: onvif://admin:[email protected]
  reolink1: onvif://admin:[email protected]:8000
  tapo1: onvif://admin:[email protected]:2020
```
You can get any stream, file or device via FFmpeg and push it to go2rtc. The app will automatically start FFmpeg with the proper arguments when someone starts watching the stream.
FFmpeg comes preinstalled for Docker and Hass Add-on users
Hass Add-on users can target files from the /media folder
Format: ffmpeg:{input}#{param1}#{param2}#{param3}
. Examples:
```yaml
streams:
  # [FILE] all tracks will be copied without transcoding codecs
  file1: ffmpeg:/media/BigBuckBunny.mp4
  # [FILE] video will be transcoded to H264, audio will be skipped
  file2: ffmpeg:/media/BigBuckBunny.mp4#video=h264
  # [FILE] video will be copied, audio will be transcoded to pcmu
  file3: ffmpeg:/media/BigBuckBunny.mp4#video=copy#audio=pcmu
  # [HLS] video will be copied, audio will be skipped
  hls: ffmpeg:https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/gear5/prog_index.m3u8#video=copy
  # [MJPEG] video will be transcoded to H264
  mjpeg: ffmpeg:http://185.97.122.128/cgi-bin/faststream.jpg#video=h264
  # [RTSP] video with rotation, should be transcoded, so select H264
  rotate: ffmpeg:rtsp://rtsp:[email protected]/av_stream/ch0#video=h264#rotate=90
```
All transcoding formats have built-in templates: h264, h265, opus, pcmu, pcmu/16000, pcmu/48000, pcma, pcma/16000, pcma/48000, aac, aac/16000.
But you can override them via the YAML config. You can also add your own formats to the config and use them with source params.
```yaml
ffmpeg:
  bin: ffmpeg  # path to ffmpeg binary
  h264: "-codec:v libx264 -g:v 30 -preset:v superfast -tune:v zerolatency -profile:v main -level:v 4.1"
  mycodec: "-any args that supported by ffmpeg..."
  myinput: "-fflags nobuffer -flags low_delay -timeout 5000000 -i {input}"
  myraw: "-ss 00:00:20"
```
You can use a go2rtc stream name as FFmpeg input (ex. ffmpeg:camera1#video=h264)
You can use the video and audio params multiple times (ex. #video=copy#audio=copy#audio=pcmu)
You can use the rotate param with 90, 180, 270 or -90 values, important with transcoding (ex. #video=h264#rotate=90)
You can use width and/or height params, important with transcoding (ex. #video=h264#width=1280)
You can use drawtext to add a timestamp (ex. drawtext=x=2:y=2:fontsize=12:fontcolor=white:box=1:boxcolor=black)
This will greatly increase the CPU load of the server, even with hardware acceleration
You can use the raw param for any additional FFmpeg arguments (ex. #raw=-vf transpose=1)
You can use the input param to override the default input template (ex. #input=rtsp/udp will change the RTSP transport from TCP to UDP+TCP)
You can use a raw input value (ex. #input=-timeout 5000000 -i {input})
You can add your own input templates
Read more about hardware acceleration.
PS. It is recommended to check the available hardware in the WebUI add page.
You can get video from any USB camera or webcam as an RTSP or WebRTC stream. This is part of the FFmpeg integration.
check available devices in the Web interface
video_size and framerate must be supported by your camera!
for Linux, only video is supported for now
for macOS, you can stream the FaceTime camera or the whole desktop!
for macOS, it is important to set the right framerate
Format: ffmpeg:device?{input-params}#{param1}#{param2}#{param3}
```yaml
streams:
  linux_usbcam: ffmpeg:device?video=0&video_size=1280x720#video=h264
  windows_webcam: ffmpeg:device?video=0#video=h264
  macos_facetime: ffmpeg:device?video=0&audio=1&video_size=1280x720&framerate=30#video=h264#audio=pcma
```
PS. It is recommended to check the available devices in the WebUI add page.
The Exec source can run any external application and expects data from it. Two transports are supported - pipe (from v1.5.0) and RTSP.
If you want to use the RTSP transport, the command must contain the {output} argument in any place. On launch, it will be replaced by the local address of the RTSP server.
The pipe reads data from the app's stdout in different formats: MJPEG, H.264/H.265 bitstream, MPEG-TS. The pipe can also write data to the app's stdin in two formats: PCMA and PCM/48000.
The source can be used with:
FFmpeg - the go2rtc ffmpeg source is just a shortcut to the exec source
FFplay - play audio on your server
GStreamer
Raspberry Pi Cameras
any of your own software
Pipe commands support parameters (format: exec:{command}#{param1}#{param2}):
killsignal - signal which will be sent to stop the process (numeric form)
killtimeout - time in seconds before forced termination with sigkill
backchannel - enable backchannel for two-way audio
```yaml
streams:
  stream: exec:ffmpeg -re -i /media/BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp {output}
  picam_h264: exec:libcamera-vid -t 0 --inline -o -
  picam_mjpeg: exec:libcamera-vid -t 0 --codec mjpeg -o -
  pi5cam_h264: exec:libcamera-vid -t 0 --libav-format h264 -o -
  canon: exec:gphoto2 --capture-movie --stdout#killsignal=2#killtimeout=5
  play_pcma: exec:ffplay -fflags nobuffer -f alaw -ar 8000 -i -#backchannel=1
  play_pcm48k: exec:ffplay -fflags nobuffer -f s16be -ar 48000 -i -#backchannel=1
```
Some sources may have a dynamic link, and you will need to get it using a bash or python script. Your script should echo a link to the source: RTSP, FFmpeg or any of the supported source types.
Docker and Hass Add-on users have python3, curl and jq preinstalled.
Check examples in wiki.
```yaml
streams:
  apple_hls: echo:python3 hls.py https://developer.apple.com/streaming/examples/basic-stream-osx-ios5.html
```
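A minimal sketch of such a script; the camera ID logic and the returned RTSP URL are hypothetical placeholders, and a real script would query a cloud API or parse a web page to obtain the current dynamic link:

```python
import sys

def resolve_stream_link(camera_id: str) -> str:
    # Hypothetical: a real implementation would fetch the short-lived
    # stream URL for this camera from some API here.
    return f"rtsp://192.168.1.123:554/{camera_id}"

if __name__ == "__main__":
    # go2rtc uses whatever this script prints to stdout as the stream source
    print(resolve_stream_link(sys.argv[1] if len(sys.argv) > 1 else "ch0"))
```

The only contract with go2rtc is that the script's stdout contains a supported source link.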
New in v1.8.2
Like the echo source, but uses the built-in expr expression language (read more).
Important:
You can use HomeKit cameras without Apple devices (iPhone, iPad, etc.); it's just yet another protocol
A HomeKit device can be paired with only one ecosystem. So if you have paired it to an iPhone (Apple Home), you can't pair it with Home Assistant or go2rtc. And if you have paired it to go2rtc, you can't pair it with an iPhone
The HomeKit device should be on the same network as go2rtc, with working mDNS between them
go2rtc supports importing paired HomeKit devices from Home Assistant, so you can use a HomeKit camera with Hass and go2rtc simultaneously. If you use Hass, I recommend pairing devices with it; it will give you more options.
You can pair a device with go2rtc on the HomeKit page. If you can't see your devices, reload the page. Also try rebooting your HomeKit device (power off). If you still can't see it, you have a problem with mDNS.
If you see a device but it doesn't have a pair button, it is paired to some ecosystem (Apple Home, Home Assistant, HomeBridge, etc.). You need to delete the device from that ecosystem, and it will become available for pairing. If you cannot unpair the device, you will have to reset it.
Important:
HomeKit audio uses the very non-standard AAC-ELD codec with very non-standard params and a specification violation
Audio can't be played in VLC and probably any other player
Audio should be transcoded for use with MSE, WebRTC, etc.
Recommended settings for using HomeKit Camera with WebRTC, MSE, MP4, RTSP:
```yaml
streams:
  aqara_g3:
    - hass:Camera-Hub-G3-AB12
    - ffmpeg:aqara_g3#audio=aac#audio=opus
```
RTSP link with "normal" audio for any player: rtsp://192.168.1.123:8554/aqara_g3?video&audio=aac
This source is in active development! Tested only with Aqara Camera Hub G3 (both EU and CN versions).
New in v1.6.1
Other names: ESeeCloud, dvr163.
you can skip username, password, port, ch and stream if they are default
set up separate streams for different channels and streams
```yaml
streams:
  camera1: bubble://username:[email protected]:34567/bubble/live?ch=0&stream=0
```
New in v1.2.0
Other names: DVR-IP, NetSurveillance, Sofia protocol (NETsurveillance ActiveX plugin XMeye SDK).
you can skip username, password, port, channel and subtype if they are default
set up separate streams for different channels
use subtype=0 for the Main stream, and subtype=1 for the Extra1 stream
only the TCP protocol is supported
```yaml
streams:
  only_stream: dvrip://username:[email protected]:34567?channel=0&subtype=0
  only_tts: dvrip://username:[email protected]:34567?backchannel=1
  two_way_audio:
    - dvrip://username:[email protected]:34567?channel=0&subtype=0
    - dvrip://username:[email protected]:34567?backchannel=1
```
New in v1.2.0
TP-Link Tapo proprietary camera protocol with two way audio support.
stream quality is the same as the RTSP protocol
use the cloud password; this is not the RTSP password! you do not need to add a login!
you can also use an UPPERCASE MD5 hash of your cloud password with the admin username
some new camera firmwares require SHA256 instead of MD5
```yaml
streams:
  # cloud password without username
  camera1: tapo://[email protected]
  # admin username and UPPERCASE MD5 cloud-password hash
  camera2: tapo://admin:[email protected]
  # admin username and UPPERCASE SHA256 cloud-password hash
  camera3: tapo://admin:[email protected]
```
```shell
echo -n "cloud password" | md5 | awk '{print toupper($0)}'
echo -n "cloud password" | shasum -a 256 | awk '{print toupper($0)}'
```
New in v1.7.0
TP-Link Kasa non-standard protocol, more info.
username - urlsafe email, [email protected] -> alex%40gmail.com
password - base64 password, secret1 -> c2VjcmV0MQ==
```yaml
streams:
  kc401: kasa://username:[email protected]:19443/https/stream/mixed
```
Tested: KD110, KC200, KC401, KC420WS, EC71.
New in v1.8.3
Supports streaming from GoPro cameras connected via USB or Wi-Fi to Linux, Mac, Windows. Read more.
Supports public cameras from the Ivideon service.
```yaml
streams:
  quailcam: ivideon:100-tu5dkUPct39cTp9oNEN2B6/0
```
Supports importing camera links from Home Assistant config files:
Generic Camera, setup via GUI
HomeKit Camera
ONVIF
Roborock vacuums with camera
```yaml
hass:
  config: "/config"  # skip this setting if you are a Hass Add-on user

streams:
  generic_camera: hass:Camera1  # Settings > Integrations > Integration Name
  aqara_g3: hass:Camera-Hub-G3-AB12
```
WebRTC Cameras (from v1.6.0)
Any cameras in WebRTC format are supported. But at the moment, Home Assistant only supports some Nest cameras in this format.
Important. The Nest API only allows you to get a stream link for 5 minutes. Do not use this with Frigate! If the stream expires, Frigate will consume all available RAM on your machine within seconds. It's recommended to use the Nest source instead - it supports extending the stream.
```yaml
streams:
  # link to Home Assistant Supervised
  hass-webrtc1: hass://supervisor?entity_id=camera.nest_doorbell
  # link to external Hass with Long-Lived Access Tokens
  hass-webrtc2: hass://192.168.1.123:8123?entity_id=camera.nest_doorbell&token=eyXYZ...
```
RTSP Cameras
By default, the Home Assistant API does not allow you to get a dynamic RTSP link to a camera stream. So more cameras, like Tuya and possibly others, can also be imported using this method.
New in v1.3.0
This source type supports only backchannel audio for the Hikvision ISAPI protocol. So it should be used as a second source, in addition to the RTSP protocol.
```yaml
streams:
  hikvision1:
    - rtsp://admin:[email protected]:554/Streaming/Channels/101
    - isapi://admin:[email protected]:80/
```
New in v1.6.0
Currently, only WebRTC cameras are supported.
For simplicity, it is recommended to connect the Nest/WebRTC camera to Home Assistant. But if you can somehow get the parameters below, the Nest/WebRTC source will work without Hass.
```yaml
streams:
  nest-doorbell: nest:?client_id=***&client_secret=***&refresh_token=***&project_id=***&device_id=***
```
New in v1.3.0
This source type supports Roborock vacuums with cameras. Known working models:
Roborock S6 MaxV - only video (the vacuum has no microphone)
Roborock S7 MaxV - video and two way audio
Roborock Qrevo MaxV - video and two way audio
The source supports loading Roborock credentials from the Home Assistant custom integration or the core integration. Otherwise, you need to log in to your Roborock account (MiHome accounts are not supported). Go to: go2rtc WebUI > Add webpage. Copy the roborock://... source for your vacuum and paste it into your go2rtc.yaml config.
If you have a graphic pin for your vacuum, add it as a numeric pin (lines: 123, 456, 789) to the end of the roborock link.
New in v1.3.0
This source type supports several connection formats.
whep
WebRTC/WHEP - replaced by the WebRTC/WISH standard for WebRTC video/audio viewers, but it may already be supported in some third-party software. It is supported in go2rtc.
go2rtc
This format is only supported in go2rtc. Unlike WHEP, it supports asynchronous WebRTC connection and two way audio.
openipc (from v1.7.0)
Supports connection to OpenIPC cameras.
wyze (from v1.6.1)
Supports connection to Wyze cameras using the WebRTC protocol. You can use the docker-wyze-bridge project to get connection credentials.
kinesis (from v1.6.1)
Supports Amazon Kinesis Video Streams using the WebRTC protocol. You need to specify the signalling WebSocket URL with all credentials in the query params, a client_id, and an ice_servers list in JSON format.
```yaml
streams:
  webrtc-whep: webrtc:http://192.168.1.123:1984/api/webrtc?src=camera1
  webrtc-go2rtc: webrtc:ws://192.168.1.123:1984/api/ws?src=camera1
  webrtc-openipc: webrtc:ws://192.168.1.123/webrtc_ws#format=openipc#ice_servers=[{"urls":"stun:stun.kinesisvideo.eu-north-1.amazonaws.com:443"}]
  webrtc-wyze: webrtc:http://192.168.1.123:5000/signaling/camera1?kvs#format=wyze
  webrtc-kinesis: webrtc:wss://...amazonaws.com/?...#format=kinesis#client_id=...#ice_servers=[{...},{...}]
```
PS. For kinesis sources you can use echo to get connection params using bash/python or any other script language.
New in v1.3.0
This source can get a stream from another go2rtc via WebTorrent protocol.
```yaml
streams:
  webtorrent1: webtorrent:?share=huofssuxaty00izc&pwd=k3l2j9djeg8v8r7e
```
By default, go2rtc establishes a connection to the source when any client requests it, and drops the connection when it has no clients left.
go2rtc also accepts incoming sources in RTSP, RTMP, HTTP and WebRTC/WHIP formats
go2rtc won't stop such a source if it has no clients
You can push data only to an existing stream (create a stream with an empty source in the config)
You can push multiple incoming sources to the same stream
You can push data to a non-empty stream, so it will have additional codecs inside
Examples
RTSP with any codec:
```shell
ffmpeg -re -i BigBuckBunny.mp4 -c copy -rtsp_transport tcp -f rtsp rtsp://localhost:8554/camera1
```
HTTP-MJPEG with MJPEG codec:
```shell
ffmpeg -re -i BigBuckBunny.mp4 -c mjpeg -f mpjpeg http://localhost:1984/api/stream.mjpeg?dst=camera1
```
HTTP-FLV with H264, AAC codecs:
```shell
ffmpeg -re -i BigBuckBunny.mp4 -c copy -f flv http://localhost:1984/api/stream.flv?dst=camera1
```
MPEG-TS with H264 codec:
```shell
ffmpeg -re -i BigBuckBunny.mp4 -c copy -f mpegts http://localhost:1984/api/stream.ts?dst=camera1
```
New in v1.3.0
You can turn the browser of any PC or mobile into an IP camera with support for video and two way audio. Or even broadcast your PC screen:
Create an empty stream in go2rtc.yaml
Go to the go2rtc WebUI
Open the links page for your stream
Select the camera+microphone or display+speaker option
Open the webrtc local page (your go2rtc should work over HTTPS!) or share the link via WebTorrent technology (works over HTTPS by default)
New in v1.3.0
You can use OBS Studio or any other broadcast software with WHIP protocol support. This standard has not yet been approved, but you can download the OBS Studio dev version:
Settings > Stream > Service: WHIP > http://192.168.1.123:1984/api/webrtc?dst=camera1
New in v1.3.0
go2rtc supports playing audio files (ex. music or TTS) and live streams (ex. radio) on cameras with two way audio support (RTSP/ONVIF cameras, TP-Link Tapo, Hikvision ISAPI, Roborock vacuums, any browser).
API example:
POST http://localhost:1984/api/streams?dst=camera1&src=ffmpeg:http://example.com/song.mp3#audio=pcma#input=file
you can stream: local files, web files, live streams or any format supported by FFmpeg
you should use the ffmpeg source for transcoding audio to a codec that your camera supports
you can check camera codecs on the go2rtc WebUI info page when the stream is active
some cameras support only the low quality PCMA/8000 codec (ex. Tapo)
it is recommended to choose higher quality formats if your camera supports them (ex. PCMA/48000 for some Dahua cameras)
if you play files over an http link, you need to add the #input=file param for transcoding, so the file will be transcoded and played in real time
if you play live streams, you should skip the #input param, because they are already real time
you can stop active playback by calling the API with an empty src parameter
you will see one active producer and one active consumer on the go2rtc WebUI info page during streaming
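When calling the playback API from a script, the # characters in the src value should be percent-encoded, otherwise an HTTP client may treat them as a URL fragment. A sketch using Python's standard library, with the server address and values taken from the API example above (it assumes standard query-string decoding on the server side):

```python
import urllib.parse

base = "http://localhost:1984/api/streams"
params = {
    "dst": "camera1",
    "src": "ffmpeg:http://example.com/song.mp3#audio=pcma#input=file",
}
# urlencode percent-encodes '#' as '%23' so it survives as a query value
url = base + "?" + urllib.parse.urlencode(params)
print(url)
# then POST to this url, e.g. with urllib.request.Request(url, method="POST")
```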
New in v1.8.0
You can publish any stream to streaming services (YouTube, Telegram, etc.) via RTMP/RTMPS. Important:
Supported codecs: H264 for video and AAC for audio
AAC audio is required for YouTube; videos without audio will not work
You don't need to enable RTMP module listening for this task
You can use API:
POST http://localhost:1984/api/streams?src=camera1&dst=rtmps://...
Or config file:
```yaml
publish:
  # publish stream "video_audio_transcode" to Telegram
  video_audio_transcode:
    - rtmps://xxx-x.rtmp.t.me/s/xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxx
  # publish stream "audio_transcode" to Telegram and YouTube
  audio_transcode:
    - rtmps://xxx-x.rtmp.t.me/s/xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxx
    - rtmp://xxx.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx-xxxx

streams:
  video_audio_transcode:
    - ffmpeg:rtsp://user:[email protected]/stream1#video=h264#hardware#audio=aac
  audio_transcode:
    - ffmpeg:rtsp://user:[email protected]/stream1#video=copy#audio=aac
```
Telegram Desktop App > Any public or private channel or group (where you admin) > Live stream > Start with... > Start streaming.
YouTube > Create > Go live > Stream latency: Ultra low-latency > Copy: Stream URL + Stream key.
The HTTP API is the main way to interact with the application. Default address: http://localhost:1984/.
Important! go2rtc passes requests from localhost and from the unix socket without HTTP authorisation, even if you have it configured! It is your responsibility to set up secure external access to the API. If not properly configured, an attacker can gain access to your cameras and even your server.
API description.
Module config
you can disable the HTTP API with listen: "" and use, for example, only the RTSP client/server protocol
you can enable the HTTP API only on localhost with the listen: "127.0.0.1:1984" setting
you can change the API base_path and host go2rtc on your main app webserver suburl
all files from static_dir are hosted on the root path: /
you can use raw TLS cert/key content or a path to the files
```yaml
api:
  listen: ":1984"    # default ":1984", HTTP API port ("" - disabled)
  username: "admin"  # default "", Basic auth for WebUI
  password: "pass"   # default "", Basic auth for WebUI
  base_path: "/rtc"  # default "", API prefix for serve on suburl (/api => /rtc/api)
  static_dir: "www"  # default "", folder for static files (custom web interface)
  origin: "*"        # default "", allow CORS requests (only * supported)
  tls_listen: ":443" # default "", enable HTTPS server
  tls_cert: |        # default "", PEM-encoded fullchain certificate for HTTPS
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls_key: |         # default "", PEM-encoded private key for HTTPS
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  unix_listen: "/tmp/go2rtc.sock"  # default "", unix socket listener for API
```
PS:
MJPEG over WebSocket plays better than native MJPEG because of a Chrome bug
MP4 over WebSocket was created only for Apple iOS, because it doesn't support MSE and native MP4
You can get any stream as an RTSP stream: rtsp://192.168.1.123:8554/{stream_name}
You can enable external password protection for your RTSP streams. Password protection is always disabled for localhost calls (ex. FFmpeg or Hass on the same server).
```yaml
rtsp:
  listen: ":8554"    # RTSP Server TCP port, default - 8554
  username: "admin"  # optional, default - disabled
  password: "pass"   # optional, default - disabled
  default_query: "video&audio"
```