What is Face Anonymization?

With the popularization of face recognition technology, the privacy of face data has attracted more and more attention, and research on privacy protection keeps emerging. At present, this research falls roughly into three directions:

  1. Tampering with the images fed into face recognition systems.
  2. Using generative adversarial networks (GANs) to anonymize someone’s photo or video.
  3. Directly blurring the detected faces in the image.

This article focuses on the third direction: using a mobile face-landmark algorithm to implement face anonymization. The method has low device requirements, the code is simple and easy to understand, and with small modifications it can go straight into production.

The image below shows the final result.

What are Face Landmarks?

Face landmark detection is a key component of face-related algorithms. It is a prerequisite for face recognition, expression analysis, 3D face reconstruction, expression-driven 3D animation, and a series of other face-related problems.

We will use TengineKit to implement face anonymization.

TengineKit

TengineKit is a free SDK for real-time detection of 212 face landmarks on mobile devices. It is an easy-to-integrate face detection and face landmark SDK, and it runs with very low latency on a wide range of phones.

Github.com/OAID/Tengin…

TengineKit rendering

Implementation

Configure Gradle

In the project-level build.gradle:

    repositories {
        ...
        mavenCentral()
        ...
    }

    allprojects {
        repositories {
            ...
            mavenCentral()
            ...
        }
    }

In the main module's build.gradle, add:

    dependencies {
        ...
        implementation 'com.tengine.android:tenginekit:1.0.3'
    }

Configure the Manifest

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.INTERNET"/>

    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.READ_PHONE_STATE"/>

    <uses-permission android:name="android.permission.CAMERA"/>
    <uses-permission android:name="android.permission.FLASHLIGHT" />

    <uses-feature android:name = "android.hardware.camera" android:required="true"/>
    <uses-feature android:name = "android.hardware.camera.autofocus" />
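On Android 6.0 (API 23) and above, the camera and storage permissions must also be requested at runtime; the manifest entries alone are not enough. A minimal sketch of such a request, assuming AndroidX (the helper class and request code here are illustrative, not part of the demo project):

    import android.Manifest;
    import android.app.Activity;
    import android.content.pm.PackageManager;

    import androidx.core.app.ActivityCompat;
    import androidx.core.content.ContextCompat;

    public class PermissionHelper {
        // Arbitrary request code, echoed back in onRequestPermissionsResult()
        private static final int REQUEST_CODE = 1;

        // Ask at runtime for the dangerous permissions declared in the manifest above
        public static void requestIfNeeded(Activity activity) {
            if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(activity,
                        new String[]{
                                Manifest.permission.CAMERA,
                                Manifest.permission.WRITE_EXTERNAL_STORAGE
                        },
                        REQUEST_CODE);
            }
        }
    }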

Initialize the Android Camera

The steps to create a custom camera interface for the App are as follows:

  1. Detect and access the Camera
  2. Create preview TextureView
  3. Build preview TextureView layout
  4. Bind Camera to TextureView
  5. Start the preview

We start by creating a TextureView.SurfaceTextureListener and do the camera's initial configuration inside it. When the TextureView becomes available, onSurfaceTextureAvailable is called:

    private final TextureView.SurfaceTextureListener surfaceTextureListener = new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(final SurfaceTexture texture, final int width, final int height) {
            // Open the camera chosen by getCameraId()
            int index = getCameraId();
            camera = Camera.open(index);

            try {
                Camera.Parameters parameters = camera.getParameters();
                // Prefer continuous autofocus if the device supports it
                List<String> focusModes = parameters.getSupportedFocusModes();
                if (focusModes != null && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
                    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
                }
                // Pick the supported preview size closest to the desired size
                List<Camera.Size> cameraSizes = parameters.getSupportedPreviewSizes();
                Size[] sizes = new Size[cameraSizes.size()];
                int i = 0;
                for (Camera.Size size : cameraSizes) {
                    sizes[i++] = new Size(size.width, size.height);
                }
                Size previewSize = CameraConnectionFragment.chooseOptimalSize(
                        sizes, desiredSize.getWidth(), desiredSize.getHeight());
                parameters.setPreviewSize(previewSize.getWidth(), previewSize.getHeight());
                camera.setDisplayOrientation(90);
                camera.setParameters(parameters);
                camera.setPreviewTexture(texture);
            } catch (IOException exception) {
                camera.release();
            }

            // Receive preview frames through a reusable buffer sized for one NV21 frame
            camera.setPreviewCallbackWithBuffer(imageListener);
            Camera.Size s = camera.getParameters().getPreviewSize();
            camera.addCallbackBuffer(new byte[ImageUtils.getYUVByteSize(s.height, s.width)]);

            textureView.setAspectRatio(s.height, s.width);
            camera.startPreview();
        }

        @Override
        public void onSurfaceTextureSizeChanged(final SurfaceTexture texture, final int width, final int height) {}

        @Override
        public boolean onSurfaceTextureDestroyed(final SurfaceTexture texture) {
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(final SurfaceTexture texture) {}
    };
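getCameraId() and ImageUtils.getYUVByteSize() are helpers from the demo project that the post does not reproduce. A minimal sketch of what they might look like, assuming getCameraId() prefers the front camera (the front-camera stream is what later feeds TengineKit) and getYUVByteSize() returns the size of one NV21 frame:

    // Hypothetical helper: id of the first front-facing camera, falling
    // back to the default camera 0 if none is found
    private int getCameraId() {
        Camera.CameraInfo info = new Camera.CameraInfo();
        for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
            Camera.getCameraInfo(i, info);
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                return i;
            }
        }
        return 0;
    }

    // Hypothetical helper: bytes in one NV21 frame, i.e. a full-resolution
    // Y plane plus a 2x2-subsampled, interleaved VU plane
    public static int getYUVByteSize(int height, int width) {
        int ySize = width * height;
        int uvSize = ((width + 1) / 2) * ((height + 1) / 2) * 2;
        return ySize + uvSize;
    }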

Here we attach the listener, associating textureView with the camera:

    textureView.setSurfaceTextureListener(surfaceTextureListener);

When the camera starts previewing, the textureView gets its real size. We take the width and height of the camera's output video stream and of the preview textureView, and save them for later use.

    textureView.setRealSizeListener(new AutoFitTextureView.RealSizeListener() {
        @Override
        public void onRealSizeMeasure(int w, int h) {
            if (!isReady) {
                isReady = true;
                Camera.Size s = camera.getParameters().getPreviewSize();
                cameraReadyListener.onCameraReady(s.width, s.height, w, h);
            }
        }
    });
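A minimal sketch of what onCameraReady() might do with these four values; the field names are assumptions, chosen to match the variables used in the TengineKit snippets below:

    private int previewWidth, previewHeight; // camera output stream size
    private int outputWidth, outputHeight;   // preview textureView size

    @Override
    public void onCameraReady(int cameraWidth, int cameraHeight, int viewWidth, int viewHeight) {
        // Save both sizes; they are passed to TengineKit's init call below
        previewWidth = cameraWidth;
        previewHeight = cameraHeight;
        outputWidth = viewWidth;
        outputHeight = viewHeight;
    }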

Processing the video stream from the Camera

First we initialize TengineKit:

  1. Select the camera processing mode.
  2. Enable the face detection and face landmark functions.
  3. Set the video stream format to YUV_NV21 (the default Android Camera format).
  4. Set the width and height of the input video stream, here the camera's preview width and height.
  5. Set the width and height of the output video stream, here the width and height of the textureView.
  6. Set the input video stream to come from the front camera.

    com.tenginekit.Face.init(getBaseContext(),
            AndroidConfig.create()
                    .setCameraMode()
                    .openFunc(AndroidConfig.Func.Detect)
                    .openFunc(AndroidConfig.Func.Landmark)
                    .setInputImageFormat(AndroidConfig.ImageFormat.YUV_NV21)
                    .setInputImageSize(previewWidth, previewHeight)
                    .setOutputImageSize(outputWidth, outputHeight)
    );
    com.tenginekit.Face.Camera.switchCamera(false);

Process the data

  1. Get the phone's rotation angle and pass it to TengineKit.
  2. Run detection. When the number of detected faces is greater than 0, call faceDetect.landmark2d() to get the array of face landmarks.

    int degree = CameraEngine.getInstance().getCameraOrientation(sensorEventUtil.orientation);

    com.tenginekit.Face.Camera.setRotation(degree - 90, false,
            outputWidth, outputHeight);

    com.tenginekit.Face.FaceDetect faceDetect = Face.detect(data);
    faceLandmarks = null;
    if (faceDetect.getFaceCount() > 0) {
        faceLandmarks = faceDetect.landmark2d();
    }

Gaussian blur and draw

Here we implement the effect with Android Bitmap operations. This approach is crude and performs poorly, but it is easy to understand; readers who care about performance can implement the same function with OpenGL ES.

  1. Convert the YUV data from the camera to a Bitmap with TengineKit's image helper function.
  2. Crop the bitmap with the bounding box of each face's landmarks to get an array of face bitmaps.
  3. Apply Gaussian blur to each face bitmap.

    // Recycle the previous frame's bitmap before converting the new frame
    if (testBitmap != null) {
        testBitmap.recycle();
    }
    testBitmap = Face.Image.convertCameraYUVData(
            data,
            previewWidth, previewHeight,
            outputWidth, outputHeight,
            -90, true);

    // Recycle last frame's face crops
    for (Bitmap bitmap : testFaceBitmaps) {
        bitmap.recycle();
    }
    testFaceBitmaps.clear();

    // Crop each detected face by its bounding box and blur it
    if (testBitmap != null && faceDetect.getFaceCount() > 0) {
        if (faceLandmarks != null) {
            for (int i = 0; i < faceLandmarks.size(); i++) {
                Bitmap face = BitmapUtils.getDstArea(testBitmap, faceLandmarks.get(i).getBoundingBox());
                face = BitmapUtils.blurByGauss(face, 50);
                testFaceBitmaps.add(face);
            }
        }
    }

    runInBackground(new Runnable() {
        @Override
        public void run() {
            trackingOverlay.postInvalidate();
        }
    });
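BitmapUtils.getDstArea() and BitmapUtils.blurByGauss() are helpers from the demo project that the post does not show. A minimal sketch under two assumptions: the blur uses RenderScript's ScriptIntrinsicBlur, whose radius is capped at 25, so a radius of 50 is approximated by blurring repeatedly; and the blur helper here takes an extra Context parameter that the demo presumably keeps elsewhere:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Rect;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicBlur;

    public class BitmapUtils {
        // Crop the face bounding box out of the full frame, clamped to the
        // bitmap bounds so a box that sticks out of the frame cannot crash
        public static Bitmap getDstArea(Bitmap src, Rect box) {
            int left = Math.max(0, box.left);
            int top = Math.max(0, box.top);
            int right = Math.min(src.getWidth(), box.right);
            int bottom = Math.min(src.getHeight(), box.bottom);
            return Bitmap.createBitmap(src, left, top, right - left, bottom - top);
        }

        // Gaussian blur via RenderScript; radii above 25 are handled by
        // applying the intrinsic several times in steps of at most 25
        public static Bitmap blurByGauss(Context context, Bitmap src, float radius) {
            Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
            RenderScript rs = RenderScript.create(context);
            ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
            while (radius > 0f) {
                float step = Math.min(radius, 25f);
                Allocation in = Allocation.createFromBitmap(rs, out);
                Allocation outAlloc = Allocation.createTyped(rs, in.getType());
                blur.setRadius(step);
                blur.setInput(in);
                blur.forEach(outAlloc);
                outAlloc.copyTo(out);
                in.destroy();
                outAlloc.destroy();
                radius -= step;
            }
            rs.destroy();
            return out;
        }
    }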

TrackingOverlay is a custom View that exposes its canvas for drawing the bitmaps:

    trackingOverlay.addCallback(new OverlayView.DrawCallback() {
        @Override
        public void drawCallback(final Canvas canvas) {
            // Draw the full camera frame first
            if (testBitmap != null) {
                canvas.drawBitmap(testBitmap, 0, 0, circlePaint);
            }
            // Then paste each blurred face crop back over its bounding box
            if (faceLandmarks != null) {
                for (int i = 0; i < faceLandmarks.size(); i++) {
                    Rect r = faceLandmarks.get(i).getBoundingBox();
                    canvas.drawRect(r, circlePaint);
                    canvas.drawBitmap(testFaceBitmaps.get(i), r.left, r.top, circlePaint);
                }
            }
        }
    });
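OverlayView itself is also not shown in the post. A minimal sketch of such a view, assuming it simply hands its canvas to registered callbacks (the demo's actual implementation may differ):

    import android.content.Context;
    import android.graphics.Canvas;
    import android.util.AttributeSet;
    import android.view.View;

    import java.util.LinkedList;
    import java.util.List;

    public class OverlayView extends View {
        public interface DrawCallback {
            void drawCallback(final Canvas canvas);
        }

        private final List<DrawCallback> callbacks = new LinkedList<>();

        public OverlayView(final Context context, final AttributeSet attrs) {
            super(context, attrs);
        }

        public void addCallback(final DrawCallback callback) {
            callbacks.add(callback);
        }

        @Override
        protected synchronized void onDraw(final Canvas canvas) {
            // postInvalidate() from the processing thread schedules this on
            // the UI thread; every registered callback gets a chance to draw
            for (final DrawCallback callback : callbacks) {
                callback.drawCallback(canvas);
            }
        }
    }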

Before-and-after comparison

Demo

Source code

Github.com/jiangzhongb…

References

Github.com/OAID/Tengin…

Zhihu: zhuanlan.zhihu.com/p/161038093