Super simple: adding a beauty filter to Android WebRTC

  • Analysis of the video capture and rendering flow

Before adding filters, you need to have some understanding of the WebRTC video capture process.

WebRTC defines the VideoCapturer interface, which declares operations such as initialization, starting capture, stopping capture, and disposal.

The camera implementation is the abstract CameraCapturer class, with Camera1Capturer and Camera2Capturer as its two concrete subclasses; there is also ScreenCapturerAndroid for screen sharing.

Starting video capture with WebRTC is very simple:

val videoCapture = createVideoCapture()
videoSource = videoCapture.isScreencast.let { factory.createVideoSource(it) }
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)
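The createVideoCapture() helper above is the app's own code, not a WebRTC API. A minimal sketch of what it might look like, assuming the Camera2 API is available and a front camera is preferred (the helper name and fallback behaviour are assumptions):

import org.webrtc.Camera2Enumerator
import org.webrtc.VideoCapturer

// Sketch of the createVideoCapture() helper: enumerate cameras with Camera2Enumerator
// and prefer a front-facing one, falling back to the first available device.
private fun createVideoCapture(): VideoCapturer {
    val enumerator = Camera2Enumerator(applicationContext)
    val deviceName = enumerator.deviceNames.firstOrNull { enumerator.isFrontFacing(it) }
        ?: enumerator.deviceNames.first()
    return enumerator.createCapturer(deviceName, null)
}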

Here we’ll focus on the VideoSource class and the capturerObserver.

The following method is found in VideoSource (inside its internal CapturerObserver):

@Override
public void onFrameCaptured(VideoFrame frame) {
  final VideoProcessor.FrameAdaptationParameters parameters =
      nativeAndroidVideoTrackSource.adaptFrame(frame);
  synchronized (videoProcessorLock) {
    if (videoProcessor != null) {
      videoProcessor.onFrameCaptured(frame, parameters);
      return;
    }
  }
  VideoFrame adaptedFrame = VideoProcessor.applyFrameAdaptationParameters(frame, parameters);
  if (adaptedFrame != null) {
    nativeAndroidVideoTrackSource.onFrameCaptured(adaptedFrame);
    adaptedFrame.release();
  }
}

Captured video frames are delivered to this onFrameCaptured callback, where cropping and scaling are applied, and the frame is then passed down to the native layer through nativeAndroidVideoTrackSource.

The focus is the VideoProcessor object, which was added to WebRTC in February 2019. VideoSource provides a setVideoProcessor method for installing one. From the method above we can see that if a VideoProcessor is set, video frames go through its onFrameCaptured; otherwise they go straight to the native layer.

It’s very convenient to use VideoProcessor to process video frames before sending them, so let’s take a look at the VideoProcessor class.

public interface VideoProcessor extends CapturerObserver {
  public static class FrameAdaptationParameters {
    ...
    public FrameAdaptationParameters(int cropX, int cropY, int cropWidth, int cropHeight,
        int scaleWidth, int scaleHeight, long timestampNs, boolean drop) {
      ...
    }
  }

  default void onFrameCaptured(VideoFrame frame, FrameAdaptationParameters parameters) {
    VideoFrame adaptedFrame = applyFrameAdaptationParameters(frame, parameters);
    if (adaptedFrame != null) {
      onFrameCaptured(adaptedFrame);
      adaptedFrame.release();
    }
  }
  ...
}

The onFrameCaptured(frame, parameters) called in VideoSource is this default method, not the single-argument onFrameCaptured inherited from CapturerObserver, so the native layer is not involved at this point yet. The default implementation simply crops and scales the VideoFrame and then forwards it onwards.

So this is where we can apply beauty-filter processing to the video frames.

class FilterProcessor : VideoProcessor {

    private var videoSink: VideoSink? = null

    override fun onCapturerStarted(success: Boolean) {}

    override fun onCapturerStopped() {}

    override fun onFrameCaptured(frame: VideoFrame?) {
        val newFrame = frame // TODO: apply the beauty filter to this VideoFrame
        videoSink?.onFrame(newFrame)
    }

    override fun setSink(sink: VideoSink?) {
        // The sink renders the frame and passes it on to the native layer
        videoSink = sink
    }
}

val videoCapture = createVideoCapture()
videoSource = videoCapture.isScreencast.let { factory.createVideoSource(it) }
videoSource.setVideoProcessor(FilterProcessor()) // Set the processor
videoCapture.initialize(surfaceTextureHelper, applicationContext, videoSource?.capturerObserver)
videoCapture.startCapture(480, 640, 30)

For the beauty processing itself, you can use GPUImage or a commercial SDK.
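To show what the TODO in FilterProcessor has to produce, here is a minimal sketch that copies the frame into a new I420 buffer and simply brightens the Y plane as a stand-in effect. A real implementation would hand the pixel data to GPUImage or a commercial beauty SDK instead, but the way the result is wrapped back into a VideoFrame stays the same; applyFakeBeauty and copyPlane are made-up helper names for illustration.

import org.webrtc.JavaI420Buffer
import org.webrtc.VideoFrame
import java.nio.ByteBuffer

// Stand-in "beauty" effect: copy the frame into a new I420 buffer and brighten the Y plane.
// A real filter would process the pixels with GPUImage or a commercial SDK instead.
fun applyFakeBeauty(frame: VideoFrame): VideoFrame {
    val src = frame.buffer.toI420()              // CPU-accessible I420 copy of the frame
    val width = src.width
    val height = src.height
    val dst = JavaI420Buffer.allocate(width, height)

    // Brighten the luma (Y) plane, honouring each buffer's stride.
    for (row in 0 until height) {
        for (col in 0 until width) {
            val y = src.dataY.get(row * src.strideY + col).toInt() and 0xFF
            dst.dataY.put(row * dst.strideY + col, minOf(y + 16, 255).toByte())
        }
    }
    // Copy the chroma (U/V) planes unchanged.
    val chromaWidth = (width + 1) / 2
    val chromaHeight = (height + 1) / 2
    copyPlane(src.dataU, src.strideU, dst.dataU, dst.strideU, chromaWidth, chromaHeight)
    copyPlane(src.dataV, src.strideV, dst.dataV, dst.strideV, chromaWidth, chromaHeight)

    src.release()                                // release the intermediate I420 copy
    return VideoFrame(dst, frame.rotation, frame.timestampNs)
}

private fun copyPlane(src: ByteBuffer, srcStride: Int,
                      dst: ByteBuffer, dstStride: Int,
                      width: Int, height: Int) {
    for (row in 0 until height) {
        for (col in 0 until width) {
            dst.put(row * dstStride + col, src.get(row * srcStride + col))
        }
    }
}

FilterProcessor.onFrameCaptured would then pass applyFakeBeauty(frame) to the sink instead of the original frame.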

The above is the application-layer implementation, using nothing but WebRTC's own classes. The same idea applies to NDK development.

Create a proxy class, CapturerObserverProxy, that implements CapturerObserver and is given the real nativeCapturerObserver. The captured video frames are delivered to CapturerObserverProxy's onFrameCaptured, the beauty processing is done there, and the processed VideoFrame is then passed down to the underlying code through nativeCapturerObserver.

public class CapturerObserverProxy implements CapturerObserver {
    public static final String TAG = CapturerObserverProxy.class.getSimpleName();

    private CapturerObserver originalObserver;
    private RTCVideoEffector videoEffector;

    public CapturerObserverProxy(final SurfaceTextureHelper surfaceTextureHelper,
                                 CapturerObserver observer,
                                 RTCVideoEffector effector) {

        this.originalObserver = observer;
        this.videoEffector = effector;

        final Handler handler = surfaceTextureHelper.getHandler();
        ThreadUtils.invokeAtFrontUninterruptibly(handler, () ->
                videoEffector.init(surfaceTextureHelper)
        );
    }

    @Override
    public void onCapturerStarted(boolean success) {
        this.originalObserver.onCapturerStarted(success);
    }

    @Override
    public void onCapturerStopped() {
        this.originalObserver.onCapturerStopped();
    }

    @Override
    public void onFrameCaptured(VideoFrame frame) {
        if (this.videoEffector.needToProcessFrame()) {
            VideoFrame.I420Buffer originalI420Buffer = frame.getBuffer().toI420();
            VideoFrame.I420Buffer effectedI420Buffer =
                    this.videoEffector.processByteBufferFrame(
                            originalI420Buffer, frame.getRotation(), frame.getTimestampNs());

            VideoFrame effectedVideoFrame = new VideoFrame(
                    effectedI420Buffer, frame.getRotation(), frame.getTimestampNs());
            originalI420Buffer.release();
            this.originalObserver.onFrameCaptured(effectedVideoFrame);
        } else {
            this.originalObserver.onFrameCaptured(frame);
        }
    }
}

 videoCapturer.initialize(videoCapturerSurfaceTextureHelper, context, observerProxy);
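A minimal wiring sketch for the proxy, assuming the videoSource created earlier and an RTCVideoEffector that can be constructed with no arguments (that constructor is an assumption; use whatever your effector actually needs):

// Wrap the real observer that feeds the native layer with the proxy,
// then hand the proxy to the capturer instead of videoSource.capturerObserver.
val observerProxy = CapturerObserverProxy(
    videoCapturerSurfaceTextureHelper,
    videoSource.capturerObserver,  // the original native-facing observer
    RTCVideoEffector()             // assumed no-arg constructor, for illustration only
)
videoCapturer.initialize(videoCapturerSurfaceTextureHelper, context, observerProxy)
videoCapturer.startCapture(480, 640, 30)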

That is all it takes to add a beauty filter to WebRTC ~