Earlier, in WebRTC Server Setup, we set up the server environment that WebRTC requires: mainly a room server, a signaling server, and a TURN server for NAT traversal.

We will learn step by step how to implement audio and video calls with WebRTC. Today we will learn how to preview camera data using WebRTC.

To be clear, most of the practice in this series is based on the official WebRTC library for Android, so most of the code is Java or Kotlin; JNI-related code will not be involved for the time being, so the barrier to entry is low.

Good Good study, day day up. So easy…

Introducing dependency libraries

Add the official WebRTC library to your Android Studio project:

```groovy
implementation 'org.webrtc:google-webrtc:1.0.+'
```

Dynamic permissions

The CAMERA permission is required first, and RECORD_AUDIO is also required if audio data is needed. Both must be declared in AndroidManifest.xml and, since Android 6.0, requested at runtime.

If you have any background in Android development, runtime permissions will be nothing new, and GitHub has many related open-source libraries, so I won't introduce them here.
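For reference, here is a minimal sketch using the AndroidX Activity Result API (the class and where you launch the request are my own assumptions, not part of the original sample; any permissions library works just as well):

```kotlin
import android.Manifest
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class PermissionExampleActivity : AppCompatActivity() {

    // Requests camera and microphone in one shot; the callback receives one result per permission
    private val permissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestMultiplePermissions()) { results ->
            if (results.values.all { it }) {
                // All granted: safe to create the capturer and start the preview
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        permissionLauncher.launch(
            arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO)
        )
    }
}
```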

Preview camera

As a complete solution for point-to-point communication, WebRTC fully encapsulates camera data capture and preview. Developers can call the relevant APIs directly, without writing OpenGL texture rendering or other related logic themselves.

If you are interested in using OpenGL to render camera data, please refer to my previous article: Preview CameraX Camera Data with OpenGL.

There are several steps to preview camera data using WebRTC:

1. Create EglBase and SurfaceViewRenderer

An important function of EglBase is to provide the EGL rendering context and EGL version compatibility.

SurfaceViewRenderer is a rendering View that inherits from SurfaceView and provides OpenGL rendering of image data.

```kotlin
rootEglBase = EglBase.create()
camera_preview.init(rootEglBase?.eglBaseContext, null)
// Hardware acceleration
camera_preview.setEnableHardwareScaler(true)
camera_preview.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FILL)
```

2. Create VideoCapturer

VideoCapturer is the capture abstraction for the camera. Through it you tell the camera what is expected of it: the resolution and the frame rate.

WebRTC is compatible with both Camera1 and Camera2 through the CameraEnumerator interface:

```kotlin
private fun createVideoCapture(): VideoCapturer? {
    return if (Camera2Enumerator.isSupported(this)) {
        createCameraCapture(Camera2Enumerator(this))
    } else {
        createCameraCapture(Camera1Enumerator(true))
    }
}

private fun createCameraCapture(enumerator: CameraEnumerator): VideoCapturer? {
    val deviceNames = enumerator.deviceNames
    // First, try to find front facing camera
    for (deviceName in deviceNames) {
        if (enumerator.isFrontFacing(deviceName)) {
            val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
            if (videoCapture != null) {
                return videoCapture
            }
        }
    }
    for (deviceName in deviceNames) {
        if (!enumerator.isFrontFacing(deviceName)) {
            val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
            if (videoCapture != null) {
                return videoCapture
            }
        }
    }
    return null
}
```

3. Create VideoSource

VideoSource is the video source: it is created by the PeerConnectionFactory, and the VideoCapturer hands the frames it captures to the source through a SurfaceTextureHelper, which supplies the capture thread and the OpenGL texture.

```kotlin
// Initialize
mSurfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", rootEglBase?.eglBaseContext)
// Create VideoSource
val videoSource = mPeerConnectionFactory!!.createVideoSource(false)
mVideoCapture?.initialize(
    mSurfaceTextureHelper,
    applicationContext,
    videoSource.capturerObserver
)

/**
 * Create PeerConnectionFactory
 */
private fun createPeerConnectionFactory(context: Context?): PeerConnectionFactory? {
    val encoderFactory: VideoEncoderFactory
    val decoderFactory: VideoDecoderFactory
    encoderFactory = DefaultVideoEncoderFactory(
        rootEglBase?.eglBaseContext,
        false /* enableIntelVp8Encoder */,
        true
    )
    decoderFactory = DefaultVideoDecoderFactory(rootEglBase?.eglBaseContext)
    PeerConnectionFactory.initialize(
        PeerConnectionFactory.InitializationOptions.builder(context)
            .setEnableInternalTracer(true)
            .createInitializationOptions()
    )
    val builder = PeerConnectionFactory.builder()
        .setVideoEncoderFactory(encoderFactory)
        .setVideoDecoderFactory(decoderFactory)
    builder.setOptions(null)
    return builder.createPeerConnectionFactory()
}
```

4. Create VideoTrack

VideoTrack is the video track that carries the source; it too is created by the PeerConnectionFactory. Adding a sink binds the track to a render View, which is what actually displays the preview.

```kotlin
val VIDEO_TRACK_ID = "1" //"ARDAMSv0"

// Create VideoTrack
mVideoTrack = mPeerConnectionFactory!!.createVideoTrack(VIDEO_TRACK_ID, videoSource)
mVideoTrack?.setEnabled(true)
// Bind the render View
mVideoTrack?.addSink(camera_preview)
```

5. Use VideoCapturer to start rendering

Start the preview in the Activity's corresponding lifecycle callback:

```kotlin
override fun onResume() {
    super.onResume()
    // Start capturing
    mVideoCapture?.startCapture(
        VIDEO_RESOLUTION_WIDTH,
        VIDEO_RESOLUTION_HEIGHT,
        VIDEO_FPS
    )
}
```
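The steps above start capture but never stop it. As a sketch of my own (not part of the original steps), a symmetric teardown like the following keeps the camera from staying open in the background; all of the methods below are standard WebRTC release APIs:

```kotlin
override fun onPause() {
    super.onPause()
    // Stop frame delivery while the Activity is not visible
    mVideoCapture?.stopCapture()
}

override fun onDestroy() {
    super.onDestroy()
    // Release in roughly the reverse order of creation
    mVideoTrack?.dispose()
    camera_preview.release()
    mVideoCapture?.dispose()
    mSurfaceTextureHelper?.dispose()
    mPeerConnectionFactory?.dispose()
    rootEglBase?.release()
}
```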

The complete code

CapturePreviewActivity.kt:

```kotlin
package com.fly.webrtcandroid

import android.content.Context
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import org.webrtc.*

/**
 * Camera Preview
 */
class CapturePreviewActivity : AppCompatActivity() {

    val VIDEO_TRACK_ID = "1" //"ARDAMSv0"

    private val VIDEO_RESOLUTION_WIDTH = 1280
    private val VIDEO_RESOLUTION_HEIGHT = 720
    private val VIDEO_FPS = 30

    private var rootEglBase: EglBase? = null
    private var mVideoTrack: VideoTrack? = null
    private var mPeerConnectionFactory: PeerConnectionFactory? = null

    // Texture renderer
    private var mSurfaceTextureHelper: SurfaceTextureHelper? = null
    private var mVideoCapture: VideoCapturer? = null

    private val camera_preview by lazy {
        findViewById<SurfaceViewRenderer>(R.id.camera_preview)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_capture_preview)

        rootEglBase = EglBase.create()
        camera_preview.init(rootEglBase?.eglBaseContext, null)
        // Float on top
        camera_preview.setZOrderMediaOverlay(true)
        // Hardware acceleration
        camera_preview.setEnableHardwareScaler(true)
        camera_preview.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FILL)

        mPeerConnectionFactory = createPeerConnectionFactory(this)
        mVideoCapture = createVideoCapture()

        // Initialize
        mSurfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", rootEglBase?.eglBaseContext)
        // Create VideoSource
        val videoSource = mPeerConnectionFactory!!.createVideoSource(false)
        mVideoCapture?.initialize(
            mSurfaceTextureHelper,
            applicationContext,
            videoSource.capturerObserver
        )

        mVideoTrack = mPeerConnectionFactory!!.createVideoTrack(VIDEO_TRACK_ID, videoSource)
        mVideoTrack?.setEnabled(true)
        mVideoTrack?.addSink(camera_preview)
    }

    /**
     * Create PeerConnectionFactory
     */
    private fun createPeerConnectionFactory(context: Context?): PeerConnectionFactory? {
        val encoderFactory: VideoEncoderFactory
        val decoderFactory: VideoDecoderFactory
        encoderFactory = DefaultVideoEncoderFactory(
            rootEglBase?.eglBaseContext,
            false /* enableIntelVp8Encoder */,
            true
        )
        decoderFactory = DefaultVideoDecoderFactory(rootEglBase?.eglBaseContext)
        PeerConnectionFactory.initialize(
            PeerConnectionFactory.InitializationOptions.builder(context)
                .setEnableInternalTracer(true)
                .createInitializationOptions()
        )
        val builder = PeerConnectionFactory.builder()
            .setVideoEncoderFactory(encoderFactory)
            .setVideoDecoderFactory(decoderFactory)
        builder.setOptions(null)
        return builder.createPeerConnectionFactory()
    }

    private fun createVideoCapture(): VideoCapturer? {
        return if (Camera2Enumerator.isSupported(this)) {
            createCameraCapture(Camera2Enumerator(this))
        } else {
            createCameraCapture(Camera1Enumerator(true))
        }
    }

    private fun createCameraCapture(enumerator: CameraEnumerator): VideoCapturer? {
        val deviceNames = enumerator.deviceNames
        // First, try to find front facing camera
        for (deviceName in deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {
                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }
        for (deviceName in deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }
        return null
    }

    override fun onResume() {
        super.onResume()
        // Start camera preview
        mVideoCapture?.startCapture(
            VIDEO_RESOLUTION_WIDTH,
            VIDEO_RESOLUTION_HEIGHT,
            VIDEO_FPS
        )
    }
}
```

Layout file activity_capture_preview.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<org.webrtc.SurfaceViewRenderer xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/camera_preview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".CapturePreviewActivity">

</org.webrtc.SurfaceViewRenderer>
```

Follow me, let's make progress together. There is more to life than coding!!