Creating a camera preview with WebRTC is easy: it takes less than 50 lines of core code.

WebRTC dependency version

Use the official prebuilt version directly; there is no need to compile WebRTC yourself.


implementation 'org.webrtc:google-webrtc:1.0.30039'


This version will be used for testing later.

Requesting Camera Permission

Although WebRTC is powerful and its code is simple, it does not encapsulate a runtime permission request API; you need to request the camera permission yourself.
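For reference, a minimal sketch of requesting the camera permission at runtime from an Activity (the request code 100 and the startPreview() helper are illustrative; android.permission.CAMERA must also be declared in the manifest):

// Needs androidx.core.app.ActivityCompat and androidx.core.content.ContextCompat
private fun requestCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
        // Ask the user for the camera permission; 100 is an arbitrary request code
        ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA), 100)
    } else {
        startPreview() // hypothetical helper that runs the WebRTC setup below
    }
}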

The camera preview

There's an old joke about putting an elephant in the fridge in three steps: open the fridge, put the elephant in, close the fridge.

Creating a camera preview with WebRTC follows the same three steps: open the camera, set up the receiver, and enable the preview.

As for the tedious details in the middle, such as the internal implementation of camera creation and preview drawing, there is nothing to worry about: just call the right interfaces and set the parameters.

Creating a Camera Instance

In WebRTC, camera instances implement the VideoCapturer interface, whether they are based on Camera1 or Camera2.


public interface VideoCapturer {
    // Bind the capturer to a SurfaceTextureHelper, a Context and a frame observer
    void initialize(SurfaceTextureHelper var1, Context var2, CapturerObserver var3);

    // Start capturing at the given width, height and frame rate
    void startCapture(int var1, int var2, int var3);

    void stopCapture() throws InterruptedException;

    void changeCaptureFormat(int var1, int var2, int var3);

    void dispose();

    // Whether this capturer is a screen capturer rather than a camera
    boolean isScreencast();
}


The interface is relatively simple, requiring the camera instance to provide only basic capture capabilities.

The code for creating the camera instance is as follows:


private fun createVideoCapture(): VideoCapturer? {
    val enumerator = Camera1Enumerator(false)
    val deviceNames = enumerator.deviceNames
    // Pick the first available front-facing camera
    for (deviceName in deviceNames) {
        if (enumerator.isFrontFacing(deviceName)) {
            val videoCapture = enumerator.createCapturer(deviceName, null)
            if (videoCapture != null) {
                return videoCapture
            }
        }
    }
    return null
}


Camera1Enumerator is used to enumerate the cameras on the device. Generally there are only two: front and rear. You can also use Camera2Enumerator to access the cameras through the Camera2 API.

deviceNames corresponds to the getDeviceNames method, shortened by Kotlin's property syntax, and represents the collection of cameras on the device. The enumerator interface already hides the different implementations of camera retrieval in Camera1 and Camera2.

When the conditions are met (here, a front-facing camera is found), createCapturer creates the camera instance.
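If you want to prefer the Camera2 implementation when the device supports it, here is a minimal sketch (Camera2Enumerator.isSupported is part of the WebRTC Android API; createCameraEnumerator is an illustrative helper name):

// Prefer Camera2 when supported, fall back to Camera1 otherwise
private fun createCameraEnumerator(context: Context): CameraEnumerator =
    if (Camera2Enumerator.isSupported(context)) {
        Camera2Enumerator(context)
    } else {
        Camera1Enumerator(false) // false: deliver frames as byte buffers rather than textures
    }

The enumerator returned here could then be used in createVideoCapture above in place of the hard-coded Camera1Enumerator.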

Camera preview reception

Corresponding components are required to receive the camera’s output and display it on the screen.

The control that appears on screen is not a plain SurfaceView or TextureView, but a control that WebRTC wraps itself: the SurfaceViewRenderer.

It inherits from SurfaceView and holds a SurfaceEglRenderer internally, which draws each VideoFrame passed in from outside onto the screen.


<org.webrtc.SurfaceViewRenderer
    android:id="@+id/localView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

// SurfaceViewRenderer forwards incoming frames to its internal renderer
public void onFrame(VideoFrame frame) {
    this.eglRenderer.onFrame(frame);
}


The SurfaceEglRenderer also draws the preview with OpenGL, and you can decide whether its OpenGL environment is created with a shared EGL context.


val eglBaseContext = EglBase.create().eglBaseContext

localView.init(eglBaseContext, null)


The component that actually receives the camera preview stream is a SurfaceTexture, but WebRTC wraps it in a SurfaceTextureHelper.

To create a SurfaceTextureHelper, do the following:


val eglBaseContext = EglBase.create().eglBaseContext

val surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext)


SurfaceTextureHelper creates a capture thread internally, and an EGLContext can be passed in from outside to decide whether it shares a context.
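Note that the two snippets above each call EglBase.create(); in the complete example below, a single EglBase is created and its context is shared by both the capture thread and the renderer, so the camera texture can be drawn directly. A minimal sketch:

// Create one EglBase and share its context between capture and rendering
val eglBase = EglBase.create()
val surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBase.eglBaseContext)
localView.init(eglBase.eglBaseContext, null)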

With a camera instance (VideoCapturer) and the component that receives the preview (SurfaceTextureHelper), the two can be associated:

videoCapture?.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture?.startCapture(480, 640, 30)


videoCapture calls the initialize method to connect the two, while the startCapture method sets the capture width, height and frame rate.

Enable Camera Preview

To actually start the camera preview, WebRTC's core abstractions come into play.

WebRTC itself is designed for real-time communication. It abstracts audio and video streams into tracks (MediaStreamTrack); there are audio tracks (AudioTrack) and video tracks (VideoTrack).

The content source of a track corresponds to a MediaSource; there are audio sources (AudioSource) and video sources (VideoSource).

The camera output corresponds to a VideoSource.

In the code above, the initialize method actually establishes the association.

videoSource = videoCapture?.isScreencast?.let { factory.createVideoSource(it) }
videoCapture?.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)

The last parameter of the initialize method is the callback, a CapturerObserver, which comes from the VideoSource.

The videoSource is created by the factory (PeerConnectionFactory), which corresponds to an end-to-end connection in instant communication; videoTrack and audioTrack are the contents carried over that connection.

The code for creating a factory is fairly fixed:


val options = PeerConnectionFactory.InitializationOptions.builder(this).createInitializationOptions()
PeerConnectionFactory.initialize(options)

factory = PeerConnectionFactory.builder().createPeerConnectionFactory()

To create the VideoTrack, associate the video source with it:


videoTrack = factory.createVideoTrack("101", videoSource)


With all the creation and association done, you can start the preview. The video track content needs to be displayed on the screen, which is what the SurfaceViewRenderer control above is for:

videoTrack?.addSink(localView)


The complete code example:


class CameraActivity : AppCompatActivity() {

    private lateinit var factory: PeerConnectionFactory
    private var videoCapture: VideoCapturer? = null
    private var videoSource: VideoSource? = null
    private var videoTrack: VideoTrack? = null
    private lateinit var localView: SurfaceViewRenderer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_camera)
        localView = findViewById(R.id.localView)

        // Initialize WebRTC and create the PeerConnectionFactory
        val options = PeerConnectionFactory.InitializationOptions.builder(this).createInitializationOptions()
        PeerConnectionFactory.initialize(options)
        factory = PeerConnectionFactory.builder().createPeerConnectionFactory()

        // One EGL context, shared by the capture thread and the renderer
        val eglBaseContext = EglBase.create().eglBaseContext
        val surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext)

        // Open the camera and connect it to the video source
        videoCapture = createVideoCapture()
        videoSource = videoCapture?.isScreencast?.let { factory.createVideoSource(it) }
        videoCapture?.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
        videoCapture?.startCapture(480, 640, 30)

        localView.setMirror(true)
        localView.init(eglBaseContext, null)

        // Put the video source on a track and render it on screen
        videoTrack = factory.createVideoTrack("101", videoSource)
        videoTrack?.addSink(localView)
    }

    private fun createVideoCapture(): VideoCapturer? {
        val enumerator = Camera1Enumerator(false)
        val deviceNames = enumerator.deviceNames
        for (deviceName in deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {
                val videoCapture = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }
        return null
    }
}


The camera preview is completed in less than 50 lines of core code; the GitHub repository address will be given later.
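One thing the example above does not show is releasing the resources when leaving the page. A minimal sketch, assuming the fields from the example (stopCapture and dispose come from the VideoCapturer interface shown earlier; SurfaceViewRenderer provides release):

override fun onDestroy() {
    // Stop the camera, then release the capturer and the renderer
    videoCapture?.stopCapture()
    videoCapture?.dispose()
    localView.release()
    super.onDestroy()
}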

That's all for this article; the series is still being updated.

Collection of articles

WebRTC series:

  1. WebRTC & Android development learning environment construction ~