React Native WebRTC

*React Native and WebRTC: Real-time communication on mobile

Hey, great to be here in beautiful Krakow, where the weather is definitely better than what I’m used to as a Finn. My name is Perttu Lähteenlahti. I’m originally from Helsinki, Finland, but nowadays I’ve spent time all around the world. I’m a software engineer focusing on web and mobile, with React Native being the technology I enjoy using the most. Before all this I was a designer with a degree in cognitive neuroscience, and before that I was a carpenter. Sometimes I still am, but only as a hobby.

So today I’m going to talk to you about real-time media on mobile devices. As mentioned, I’m a mobile developer, so this talk will be from the point of view of a mobile developer. Now, I know most of you are probably not mobile developers, so I aim to give you the least amount of mobile code that is needed, and most of the code is going to be there just to highlight either how complex, or how simple, something can be.

*What this talk is going to be about

I’m first going to talk about why real time, and why especially on mobile. After that we’re gonna dive right into the technology of making it work on mobile, in this case React Native, since our motivation for doing real time on mobile is of course to reach as many clients as possible. Because we are going to be using React Native, I’m going to focus on the single package that makes WebRTC possible in React Native: react-native-webrtc. After I’ve shown how little code is required to get started, I’m going to talk about the current limitations of doing it this way on mobile, what kinds of improvements are in the works, and some things you might need to consider if you plan on going outside the normal use case of video calling with React Native WebRTC.

So let’s kick things off. For starters, I know most of you here know what WebRTC stands for, but just for sanity let’s establish a few things. WebRTC is a free and open-source protocol and collection of APIs that provides web browsers and mobile apps with real-time communication. Depending on how you look at it, it’s been around since 2010. It’s well supported and keeps growing every year.

*Why WebRTC on mobile

And why do we want to do WebRTC on mobile? If we really think about it, real-time communication is rarely a business differentiator for companies anymore; rather, it’s something users and customers have come to expect. Even though everyone hates calling, we still want to be able to call our doctor when needed. It’s hard to find chat apps that don’t have video or audio functionality. Content creation is also becoming more and more real time, with streaming services being one example.

And that’s where mobile as a platform really shines. Mobile phones are the devices the majority of humans use to stay in contact with the world, and real-time communication has been one of the original use cases of mobile phones since they launched. Based on what I’ve just said, I think that mobile should be the first platform you target when you start building something that will fundamentally rely on real-time communication.

*React Native

WebRTC on the web is relatively easy to get started with. It’s a wonderful technology, and it doesn’t have to be hard on mobile either, thanks to React Native. In case you’re not familiar with React Native, the pessimistic view of it is this:

“Take one of the most divisive programming languages on the planet, meaning JavaScript, and use it to build native mobile apps.”

But I like React Native. I find it to be very quick when you want something in the app stores. Despite its weirdness, it’s also, in my opinion, somewhat pleasurable to work with. However, that probably doesn’t sell React Native to you, so instead I’m going to use the following key selling points:

  • It’s truly cross-platform, meaning you can build for iOS, Android, TVs, cars, and 3D headsets
  • You can take a lot of knowledge from the web context and translate it to work in the mobile context
  • It’s never far from the native code, unlike Flutter, which is very decoupled from the native APIs

So consider React Native for your next mobile project.

*How to get started

We bootstrap a project with Expo, a framework for React Native. If you’re unfamiliar with Expo, the simplest way to describe it is that it bundles a lot of good stuff so you can focus on solving the actual business case. For example, it makes it a lot easier to:

  • Use native APIs
  • Deploy the app
  • Build something fast

To bootstrap an Expo app you only need to run:

	npx create-expo-app@latest

On a Mac you can then run npx expo start and build your app for iOS and Android. Very basic stuff, so it’s good that we don’t have to spend much time on this.

Now there is one small thing we need to take into consideration: we need to run npx expo prebuild so that we can start using the native code parts as well.

Next we need to think about how to integrate WebRTC into our project. Thankfully we don’t have to look far, since in the React Native world it is usually enough to just search for “react native” plus the technology you’re interested in. So let’s search for react-native-webrtc and, lo and behold, we have the react-native-webrtc package!
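Installing it in an Expo project is a single command. As a sketch: the @config-plugins package below is the community config plugin commonly paired with react-native-webrtc in Expo projects, which wires up the native permission entries during prebuild; double-check the package name and setup against the current README:

```shell
# Install react-native-webrtc; the config plugin (assumed here) adds the
# required iOS/Android permission entries during `npx expo prebuild`.
npx expo install react-native-webrtc @config-plugins/react-native-webrtc
```

After installing, the plugin is typically listed in the plugins array of app.json so that prebuild can apply the native changes.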

*React Native WebRTC

Most of the functionality that react-native-webrtc provides is identical to the web APIs. So, for example, these imports are most likely familiar to you if you’ve worked with WebRTC before. If not, worry not, let’s look at some basic things:

import {
	ScreenCapturePickerView,
	RTCPeerConnection,
	RTCIceCandidate,
	RTCSessionDescription,
	RTCView,
	MediaStream,
	MediaStreamTrack,
	mediaDevices,
	registerGlobals
} from 'react-native-webrtc';
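To show just how closely this mirrors the browser API, here is a minimal peer-connection sketch. The STUN server is a public Google one, and sendToSignalingServer is a hypothetical stand-in for whatever signaling channel you use, since WebRTC does not prescribe one:

```javascript
import { RTCPeerConnection, RTCSessionDescription } from 'react-native-webrtc';

// A public STUN server; in production you would add your own TURN servers.
const peerConnection = new RTCPeerConnection({
	iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// sendToSignalingServer is a placeholder for your own signaling channel
// (WebSocket, HTTP, whatever) -- WebRTC leaves signaling up to you.
peerConnection.addEventListener('icecandidate', (event) => {
	if (event.candidate) {
		sendToSignalingServer({ candidate: event.candidate });
	}
});

async function startCall() {
	// Exactly the same offer/answer dance as in the browser.
	const offer = await peerConnection.createOffer();
	await peerConnection.setLocalDescription(offer);
	sendToSignalingServer({ offer });
}

async function handleAnswer(answer) {
	await peerConnection.setRemoteDescription(new RTCSessionDescription(answer));
}
```

The point is that if you have written this code for the browser, you have already written it for React Native.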

Getting the local stream: WebRTC provides standard APIs for accessing cameras and microphones on computers and smartphones, and the same goes for React Native WebRTC. We can access these devices through the mediaDevices API:

const mediaStream = await mediaDevices.getUserMedia({ audio: true, video: true });

This requests both audio and video. To take more control, for example over the frame rate and which camera to use, we can define more detailed media constraints:

let mediaConstraints = {
	audio: true,
	video: {
		frameRate: 30,
		facingMode: 'user'
	}
};

const mediaStream = await mediaDevices.getUserMedia( mediaConstraints );

If we want to share our screen instead, we can use:

const mediaStream = await mediaDevices.getDisplayMedia();

To actually display anything, we can make use of the <RTCView /> component to render both local and remote streams.
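A minimal sketch of that, assuming we already have a MediaStream from getUserMedia or from a remote peer:

```javascript
import React from 'react';
import { StyleSheet } from 'react-native';
import { RTCView } from 'react-native-webrtc';

// RTCView takes the stream's URL representation (stream.toURL()),
// not the MediaStream object itself.
export function VideoTile({ stream }) {
	return (
		<RTCView
			streamURL={stream.toURL()}
			objectFit="cover"
			mirror={true} // mirroring is typically wanted for the front camera
			style={styles.video}
		/>
	);
}

const styles = StyleSheet.create({
	video: { flex: 1 },
});
```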

And there we have it, a very quick, but working implementation of WebRTC on React Native.

*Take these if you’re building a video calling app

react-native-webrtc provides the basic functionality for handling everything inside the app, but you will also need these:

  • react-native-incall-manager: manages device events like the wired-headset plugged-in state
  • react-native-callkeep: integrates your app with the native call screens
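As a rough sketch of how react-native-incall-manager wraps a call, based on its documented start/stop API (the two handler function names are just illustrative):

```javascript
import InCallManager from 'react-native-incall-manager';

// When a call starts: keep the screen awake, route audio sensibly,
// and react to events like a wired headset being plugged in.
function onCallStarted() {
	InCallManager.start({ media: 'video' });
}

// When the call ends, restore normal audio routing and screen behavior.
function onCallEnded() {
	InCallManager.stop();
}
```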

*WebRTC on all the platforms

React Native WebRTC works very nicely on mobile. You hardly have to make any platform-specific changes to support both iOS and Android; basically only the camera and microphone permissions differ between the platforms. Performance is also good, with lower-end Androids causing some problems, but in the mobile world that can generally be considered an unsolvable problem.

But mobile isn’t everything. We can, for example, build a React Native app for Apple’s tvOS and add react-native-webrtc there.

However, I personally find the most interesting option to be using react-native-web together with react-native-webrtc-web-shim. React Native Web is an implementation of React Native components and APIs on the web. This technically allows you to start by building a mobile iOS app using React Native, then start supporting Android, and finally include the web as well, using the same code base, and the same WebRTC setup, for all three. In other words, building mobile first.
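One way to wire this up is React Native’s standard platform-specific file resolution: the bundler picks the .web.js file on web and the plain file on iOS and Android. Whether the shim mirrors every export one-to-one is worth verifying against its README, so treat the export lists below as an assumption:

```javascript
// webrtc/index.js -- resolved on iOS and Android
export { RTCPeerConnection, RTCView, mediaDevices } from 'react-native-webrtc';

// webrtc/index.web.js -- resolved when bundling with react-native-web
export { RTCPeerConnection, RTCView, mediaDevices } from 'react-native-webrtc-web-shim';

// The rest of the app imports from the local module and never needs to
// know which platform it is running on:
// import { RTCView, mediaDevices } from './webrtc';
```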

*React Native WebRTC’s known limitations

So I hope I’ve by now been able to convince you to use React Native WebRTC in your next mobile projects, and now I will shoot your big dreams down. Not fully, but a little bit.

First, react-native-webrtc is pretty much purely focused on being an audio and video calling library. In the project I worked on, we were using the package to show streaming content, for which it works very well as long as you’re willing to do a bit of extra work. Since calling is so tightly knit into the package, it will always ask you to grant microphone permissions, even if you’re just playing a remote stream. There are a lot of comments about this on GitHub, and most of them have been marked as being outside the scope of the project. In our case we had to maintain our own fork of react-native-webrtc, customised so that it would not ask for microphone permissions.

Second, react-native-webrtc does not yet have picture-in-picture support. On mobile you can easily build PiP support as long as you stay inside the app, but as soon as you jump out of it, things stop working. This is a pretty big limiting factor. However, there’s light on the horizon, as just last week a PR was opened for a picture-in-picture implementation on iOS.