What is WebRTC?

WebRTC is a set of open source protocols that facilitate the sharing of audio and video in near real time. These protocols have been adopted by web browsers and applications, making WebRTC almost universally available for communication over the internet. Connections are peer-to-peer (P2P): media flows directly between devices rather than being sent to and aggregated by a central server.

Are there advantages to P2P communication?

Yes. A peer-to-peer connection means that video and audio are sent directly between the devices in the conversation, which keeps communication latency very low. However, as the number of attendees increases, the number of video/audio connections can quickly hit bandwidth limits.
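A quick way to see why bandwidth becomes the limit: in a full-mesh P2P call, every peer connects directly to every other peer. The sketch below (a hypothetical helper, not part of WebRTC itself) counts those links:

```javascript
// In a full-mesh call, each of the n peers connects to the other
// n - 1 peers, and each link is shared by two peers, giving
// n * (n - 1) / 2 media links in total.
function meshConnectionCount(n) {
  return (n * (n - 1)) / 2;
}

// A 4-person call needs 6 links; a 10-person call already needs 45 —
// and each peer must also upload its own stream n - 1 times.
```

This quadratic growth, plus the repeated uploads from every participant, is what caps the practical size of a pure P2P conversation.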

What does WebRTC stand for?

WebRTC is an abbreviation of "Web Real Time Communication."

What are the APIs in WebRTC?

WebRTC offers a number of APIs that make creating a P2P communication network easy. They fall into video/audio capture, transmission, data sharing, and statistics.

  1. getUserMedia is used by the browser to connect to cameras and microphones on the device. It can be used to set the size and bitrate of each feed being collected. This is the API being called when your browser asks for permission to use your camera and/or microphone. getUserMedia is one of the main APIs used in livestream.a.video and record.a.video.
  2. RTCPeerConnection is used to provide the P2P connectivity: security, bandwidth and codecs are all managed here to ensure that the connectivity between the peers is robust.
  3. RTCDataChannel is similar to a WebSocket for low-latency data transfer. RTCPeerConnection handles all the audio/video, and RTCDataChannel handles the rest - file transfers, chat messages, etc.
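The three APIs above can be sketched together in a few lines of browser JavaScript. This is a minimal illustration, not a complete implementation: the signaling step (getting the offer to the remote peer) is left to a hypothetical `sendToRemotePeer` callback you would supply, and the Google STUN server URL is just a common public example.

```javascript
// Requested capture settings for getUserMedia (here: 720p video + audio).
const constraints = {
  audio: true,
  video: { width: 1280, height: 720 },
};

// Browser-only sketch: capture media, open a peer connection,
// and attach a data channel. Signaling is not shown.
async function startCall(sendToRemotePeer) {
  // 1. getUserMedia: prompts the user for camera/microphone access.
  const stream = await navigator.mediaDevices.getUserMedia(constraints);

  // 2. RTCPeerConnection: manages codecs, encryption, and bandwidth
  //    for the P2P link. The STUN server helps peers discover their
  //    public addresses.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // 3. RTCDataChannel: low-latency channel for chat, file transfer, etc.
  const chat = pc.createDataChannel("chat");
  chat.onopen = () => chat.send("hello");

  // Create an offer and hand it to your own signaling layer,
  // which must deliver it to the remote peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer(pc.localDescription);
  return pc;
}
```

The remote peer would answer with its own description, and once both sides have exchanged descriptions (and ICE candidates), media and data flow directly between them.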

WebRTC at api.video

We also use getUserMedia to gather camera and microphone streams in the browser to power our live streaming demos (livestream.a.video and record.a.video are two examples).

We do not currently support the WebRTC protocol for transmitting video streams. Our live streaming endpoints use HLS. There are pros and cons to this approach: HLS latency is currently about 15 seconds, though it should drop to 3-5 seconds as we adopt LL-HLS, while WebRTC's latency is on the order of hundreds of milliseconds - clearly much more "real time." However, HLS (and LL-HLS) can broadcast to thousands (even hundreds of thousands) of viewers - a scale that is unreachable with WebRTC.

We do offer connectors to WebRTC providers, so your WebRTC communication can be aggregated and livestreamed. This is a great compromise: the active participants can converse in real time, while viewers can still watch the video "live."