Witness comparison (comparing a live face against an ID photo) can be seen everywhere in today's society: high-speed rail stations, airports, hotel check-in, and even the entrances of scenic spots all run some kind of witness application, and face recognition SDKs have mushroomed accordingly, from Baidu and SenseTime to Face++ and ArcSoft. After trying various SDKs, the one I like most is ArcSoft's, and the most direct reason is that ArcSoft promises it will be free forever. I have been using it since version 2.0, and the measured results are genuinely good. Last month I heard that ArcFace had been updated to 3.0, and as a lover of free stuff I was not going to miss this update.

  • Feature comparison supports model selection: a life-photo comparison model and a witness (ID photo) comparison model
  • Added a 64-bit SDK for the Android platform
  • Added a new image data input method

This article introduces using ArcFace 3.0 in witness scenarios, covering the following points:

  • Gains and losses of the ArcFace 3.0 SDK interface changes
  • How to upgrade the Witness 2.0 demo to the ArcFace 3.0 SDK
  • How to modify the ArcFace 3.0 demo directly into a witness program

I. Gains and losses of the ArcFace 3.0 SDK interface changes

Advantages of the interface changes:

1. Greater business freedom

Take Witness 2.0 as an example: we could only feed data in and get results out, and intermediate products such as facial feature data could not be obtained at all. With ArcFace 3.0 the fixed pipeline is gone, and detection, comparison, feature extraction and the other steps can all be controlled by yourself.

2. Life-photo comparison and witness comparison can live in the same project

The old Witness SDK conflicted with the ArcFace SDK, so the two could not be used at the same time. If we wanted both witness and life-photo comparison, we had to write two projects whose flows differed somewhat. Now, simply selecting a model on the compare interface switches between the two, so witness and life-photo comparison can be integrated into one project, as the sketch below shows.
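A minimal sketch of that model switch, assuming an initialized engine and two already-extracted features. CompareModel.ID_CARD appears later in this article; CompareModel.LIFE_PHOTO is the life-photo counterpart as I recall it from the 3.0 package, so verify the enum name against your SDK:

    FaceSimilar similar = new FaceSimilar();
    // Life-photo comparison
    faceEngine.compareFaceFeature(feature1, feature2, CompareModel.LIFE_PHOTO, similar);
    float lifePhotoScore = similar.getScore();
    // Witness comparison: the same call, only the model argument differs
    faceEngine.compareFaceFeature(idFeature, cameraFeature, CompareModel.ID_CARD, similar);
    float witnessScore = similar.getScore();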

3. Code reusability

In ArcFace 3.0, the only difference between life-photo comparison and witness comparison is the model selected in the compare interface; everything else is exactly the same. Most of the code can therefore be reused, which greatly improves development efficiency.

Disadvantages of the interface changes:

1. Every interface changes

You gain some, you lose some: because ArcFace 3.0 no longer encapsulates the witness part, every interface has to be changed during the upgrade, which is something no programmer wants to see.

2. Implementation becomes harder

Likewise, because ArcFace 3.0 no longer encapsulates the witness flow, some of the processes and callbacks the old interface provided have to be implemented by yourself, which is not very friendly to beginners.

Summary:

Although I listed disadvantages above, I am still in favor of this upgrade. Every product innovation brings some disruption, but weighed against that disruption, the unified interfaces and recognition flow improve the applicability of programs and the freedom of business logic. For Witness 2.0, I believe this drastic cut is worthwhile in the long run.

II. Integrating the ArcFace 3.0 SDK into the Witness 2.0 demo

We saw above that, because of the interface changes, every interface in a Witness 2.0 program needs to be modified. Next I will take the Witness 2.0 demo as an example and explain how I upgraded it with the ArcFace 3.0 SDK.

1. Witness 2.0 demo project configuration

Considering that some readers may not be familiar with the Witness 2.0 demo, here is a brief introduction to configuring and using the official demo.

Both the demo and the SDK can be obtained from the ArcSoft open platform.

2. ArcFace 3.0 SDK replacement

First we need to obtain the ArcFace 3.0 SDK, which is also available on the open platform. Replace the original SDK libraries with the new ones; the replaced project layout is sketched below.
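After the replacement the project should contain the new libraries roughly along these lines (a sketch based on the ArcFace 3.0 Android SDK package; exact file names and ABI directories may differ in your download):

    app/
      libs/
        arcsoft_face.jar
      src/main/jniLibs/armeabi-v7a/
        libarcsoft_face.so
        libarcsoft_face_engine.so
        libarcsoft_image_util.so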

3. Replacing the interfaces with ArcFace 3.0

As mentioned above, 3.0 is an overhaul and every interface has changed, so we need to replace all of the 2.0 interfaces with their 3.0 counterparts.

3.1 Engine Activation:

The activation interface parameters are unchanged.

Witness 2.0:

IdCardVerifyManager.getInstance().active(Context context, String appId, String sdkKey);

ArcFace 3.0:

FaceEngine.active(Context context, String appId, String sdkKey);
3.2 Engine Initialization:

Starting with initialization, the Witness 2.0 and ArcFace 3.0 interfaces differ greatly: Witness 2.0 had to register listeners for ID card information and camera information, while 3.0 drops this listening mechanism. I will not go through the interface parameters one by one; the official documentation describes them in detail, so please refer to it.

Witness 2.0:

IdCardVerifyManager.getInstance().init(Context context, IdCardVerifyListener listener) 

ArcFace 3.0:

FaceEngine.init(Context context, DetectMode detectMode, DetectFaceOrientPriority detectFaceOrientPriority, int detectFaceScaleVal, int detectFaceMaxNum, int combinedMask)
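That said, a commented sketch of a typical init call may help orient you. The constant values follow the demo used in this article, and the parameter explanations are my paraphrase of the demo's usage, so verify the details against the official documentation:

    int code = faceEngine.init(
            context,
            DetectMode.ASF_DETECT_MODE_VIDEO,         // VIDEO for camera preview, IMAGE for stills
            DetectFaceOrientPriority.ASF_OP_ALL_OUT,  // which face orientations to detect
            16,                                       // detectFaceScaleVal: minimum detectable face size relative to the image
            1,                                        // detectFaceMaxNum: detect at most one face
            FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION); // features to enable
    if (code != ErrorInfo.MOK) {
        // handle the error code
    }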
3.3 Activation and initialization demo:

Here is my code before and after the 2.0 replacement, for reference:

Witness 2.0:

    private void initEngine() {
        int result = IdCardVerifyManager.getInstance().init(this, idCardVerifyListener);
        LogUtils.dTag(TAG, "initResult: " + result);
        if (result == IdCardVerifyError.MERR_ASF_NOT_ACTIVATED) {
            Executors.newSingleThreadExecutor().execute(new Runnable() {
                @Override
                public void run() {
                    int activeResult = IdCardVerifyManager.getInstance().active(
                            MainActivity.this, APP_ID, SDK_KEY);
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            LogUtils.dTag(TAG, "activeResult: " + activeResult);
                            if (activeResult == IdCardVerifyError.OK) {
                                int initResult = IdCardVerifyManager.getInstance().init(
                                        MainActivity.this, idCardVerifyListener);
                                LogUtils.dTag(TAG, "initResult: " + initResult);
                                if (initResult != IdCardVerifyError.OK) {
                                    toast("Witness engine initialization failed, error code: " + initResult);
                                }
                            } else {
                                toast("Witness engine activation failed, error code: " + activeResult);
                            }
                        }
                    });
                }
            });
        } else if (result != IdCardVerifyError.OK) {
            toast("Witness engine initialization failed, error code: " + result);
        }
    }

ArcFace 3.0:

    private void initEngine() {
        int result = faceEngine.init(this, DetectMode.ASF_DETECT_MODE_VIDEO, DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
                FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
        LogUtils.dTag(TAG, "initResult: " + result);
        if (result == ErrorInfo.MERR_ASF_NOT_ACTIVATED) {
            Executors.newSingleThreadExecutor().execute(() -> {
                int activeResult = FaceEngine.active(
                        MainActivity.this, Constants.APP_ID, Constants.SDK_KEY);
                runOnUiThread(() -> {
                    LogUtils.dTag(TAG, "activeResult: " + activeResult);
                    if (activeResult == ErrorInfo.MOK) {
                        int initResult = faceEngine.init(this, DetectMode.ASF_DETECT_MODE_VIDEO, DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
                                FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
                        LogUtils.dTag(TAG, "initResult: " + initResult);
                        if (initResult != ErrorInfo.MOK) {
                            toast("Witness engine initialization failed, error code: " + initResult);
                        }
                    } else {
                        toast("Witness engine activation failed, error code: " + activeResult);
                    }
                });
            });
        } else if (result != ErrorInfo.MOK) {
            toast("Witness engine initialization failed, error code: " + result);
        }
    }
3.4 Detection and feature extraction of the ID photo

For the ID photo, the image processing done by the old 2.0 engine is replaced by the ArcSoftImageUtil methods in the 3.0 package. Also, the callback fired after successful feature extraction has been removed from the engine, so we have to write that callback ourselves. Here I saved myself some work: a single FaceListener registered on faceHelper serves as the extraction callback for both the camera data and the ID data (a wiring sketch follows the 3.0 code below).

Witness 2.0:

    private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        int width = bmp.getWidth();
        int height = bmp.getHeight();

        // Image clipping
        boolean needAdjust = false;
        while (width % 4 != 0) {
            width--;
            needAdjust = true;
        }
        if (height % 2 != 0) {
            height--;
            needAdjust = true;
        }
        if (needAdjust) {
            bmp = ImageUtils.imageCrop(bmp, new Rect(0, 0, width, height));
        }
        // Convert to NV21 data format
        byte[] nv21Data = ImageUtils.getNV21(width, height, bmp);
        // Input the ID image data
        DetectFaceResult result = IdCardVerifyManager.getInstance().inputIdCardData(
                nv21Data, width, height);
        LogUtils.dTag(TAG, "inputIdCardData result: " + result.getErrCode());
    }

ArcFace 3.0:

    private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        // Crop the image to 4-byte alignment
        bmp = ArcSoftImageUtil.getAlignedBitmap(bmp, true);
        int width = bmp.getWidth();
        int height = bmp.getHeight();
        // Convert to BGR24 format
        byte[] bgrData = ArcSoftImageUtil.createImageData(bmp.getWidth(), bmp.getHeight(), ArcSoftImageFormat.BGR24);
        int translateResult = ArcSoftImageUtil.bitmapToImageData(bmp, bgrData, ArcSoftImageFormat.BGR24);
        if (translateResult == ArcSoftImageUtilError.CODE_SUCCESS) {
            // The conversion succeeded
            List<FaceInfo> faceInfoList = new ArrayList<>();
            // VIDEO mode is not suitable for static image detection, so idFaceEngine is a second engine
            // created with the same parameters as faceEngine except that its detect mode is IMAGE
            int detectResult = idFaceEngine.detectFaces(bgrData, width, height, FaceEngine.CP_PAF_BGR24, faceInfoList);
            if (detectResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
                // -2 is used as the trackId: the camera and the ID photo share one faceHelper,
                // and the trackId tells us which side the data came from
                faceHelper.requestFaceFeature(bgrData, faceInfoList.get(0), width, height, FaceEngine.CP_PAF_BGR24, -2);
            }
        } else {
            LogUtils.dTag(TAG, "translate Error result: " + translateResult);
        }
    }
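The block above references two objects the demo creates elsewhere. Below is a minimal sketch of how they might be set up: the idFaceEngine follows the comment above (same parameters as faceEngine, but IMAGE detect mode), while FaceHelper is a helper class from the official demo rather than the SDK itself, so treat its builder calls as assumptions to check against your demo version:

    // Separate engine for static ID photos: IMAGE detect mode instead of VIDEO
    idFaceEngine = new FaceEngine();
    int idInitCode = idFaceEngine.init(this, DetectMode.ASF_DETECT_MODE_IMAGE,
            DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
            FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
    LogUtils.dTag(TAG, "idFaceEngine init: " + idInitCode);

    // One faceHelper shared by camera and ID data; faceListener is defined in section 3.6 below
    faceHelper = new FaceHelper.Builder()
            .frEngine(faceEngine)          // engine used for feature extraction
            .previewSize(previewSize)
            .faceListener(faceListener)
            .build();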
3.5 Camera recognition and feature extraction

The onPreviewData interface of Witness 2.0 actually has built-in feature-extraction protection: a new extraction cannot start before the previous one has completed. 3.0 exposes no such encapsulation, so we have to control feature extraction ourselves. The basic strategy is keyed on trackId: for each trackId, extraction runs only if that trackId has not been extracted yet or its previous extraction failed.

Witness 2.0:

    public void onPreview(byte[] nv21, Camera camera) {
        if (faceRectView != null) {
            faceRectView.clearFaceInfo();
        }
        if (nv21 == null) {
            return;
        }
        // Pass the preview data to the engine
        DetectFaceResult result = IdCardVerifyManager.getInstance().onPreviewData(nv21,
                previewSize.width, previewSize.height, true);
        Rect rect = result.getFaceRect();

        if (faceRectView != null && drawHelper != null && rect != null) {
            // Draw the real-time face frame
            drawHelper.draw(faceRectView, new DrawInfo(drawHelper.adjustRect(rect), "", Color.YELLOW));
        }
    }

ArcFace 3.0:

    public void onPreview(byte[] nv21, Camera camera) {
        if (faceRectView != null) {
            faceRectView.clearFaceInfo();
        }
        if (nv21 == null) {
            return;
        }
        List<FaceInfo> faceInfoList = new ArrayList<>();
        int ftResult = faceEngine.detectFaces(nv21, previewSize.width, previewSize.height, FaceEngine.CP_PAF_NV21, faceInfoList);
        // In this scenario only the largest face matters, so take the first face directly;
        // adjust this for other scenarios as needed
        if (ftResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
            Rect rect = faceInfoList.get(0).getRect();
            if (faceRectView != null && drawHelper != null && rect != null) {
                drawHelper.draw(faceRectView, new DrawInfo(drawHelper.adjustRect(rect), "", Color.YELLOW));
            }
            // Once the ID data is ready, start feature extraction on the camera data,
            // using the trackId to prevent repeated extraction
            int trackId = faceInfoList.get(0).getFaceId();
            if (isIdCardReady && requestFeatureStatusMap != null && requestFeatureStatusMap.containsKey(trackId)) {
                // If this face has not been extracted yet, or its extraction failed, (re)try it
                if (requestFeatureStatusMap.get(trackId) == null || requestFeatureStatusMap.get(trackId) == RequestFeatureStatus.FAILED) {
                    requestFeatureStatusMap.put(trackId, RequestFeatureStatus.SEARCHING);
                    faceHelper.requestFaceFeature(nv21, faceInfoList.get(0), previewSize.width, previewSize.height, FaceEngine.CP_PAF_NV21, faceInfoList.get(0).getFaceId());
                }
            }
        }
    }
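The preview code above relies on a few fields the demo declares elsewhere; here is a minimal sketch of plausible declarations (the names follow this article, but the exact types are assumptions inferred from how they are used):

    // Extraction status per trackId (RequestFeatureStatus is a constants class from the demo);
    // a ConcurrentHashMap because it is touched from the preview and extraction callbacks
    private final Map<Integer, Integer> requestFeatureStatusMap = new ConcurrentHashMap<>();
    // Set once the ID-photo feature has been extracted successfully
    private volatile boolean isIdCardReady = false;
    // Features recorded from the two data sources, compared in compare()
    private FaceFeature idFaceFeature;
    private FaceFeature faceFeature;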
3.6 Camera and ID card data callbacks

As mentioned above, the Witness 2.0 engine has two input interfaces, one for camera data and one for ID card data, plus two callbacks for processing them. In ArcFace 3.0 not only are those callbacks gone, the camera data and the ID card data also share a single detect and extractFaceFeature, so we use the trackId to tell them apart. And because the engine no longer stores feature values, we have to record the features obtained from the two data sources ourselves.

Witness 2.0:

    private IdCardVerifyListener idCardVerifyListener = new IdCardVerifyListener() {
        @Override
        public void onPreviewResult(DetectFaceResult detectFaceResult, byte[] bytes, int i, int i1) {
            runOnUiThread(() -> {
                // The preview face feature was extracted successfully
                if (detectFaceResult.getErrCode() == IdCardVerifyError.OK) {
                    isCurrentReady = true;
                    compare();
                }
            });
        }

        @Override
        public void onIdCardResult(DetectFaceResult detectFaceResult, byte[] bytes, int i, int i1) {
            LogUtils.dTag(TAG, "onIdCardResult: " + detectFaceResult.getErrCode());
            runOnUiThread(() -> {
                // The ID card face feature was extracted successfully
                if (detectFaceResult.getErrCode() == IdCardVerifyError.OK) {
                    isIdCardReady = true;
                    restartHandler.removeCallbacks(restartRunnable);
                    readHandler.postDelayed(readRunnable, READ_DELAY);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    bmp.compress(Bitmap.CompressFormat.PNG, 80, baos);
                    byte[] bmpBytes = baos.toByteArray();
                    Glide.with(MainActivity.this).load(bmpBytes).into(ivIdCard);
                    compare();
                }
            });
        }
    };

ArcFace 3.0:

    FaceListener faceListener = new FaceListener() {
        @Override
        public void onFail(Exception e) {
        }

        @Override
        public void onFaceFeatureInfoGet(@Nullable FaceFeature faceFeature, Integer requestId, Integer errorCode, long frTime, byte[] nv21) {
            // Feature extraction failed: mark the comparison status as failed
            if (ErrorInfo.MOK != errorCode) {
                requestFeatureStatusMap.put(requestId, RequestFeatureStatus.FAILED);
                return;
            }
            // A requestId of -2 means this is the ID data
            if (requestId == -2) {
                isIdCardReady = true;
                // After the interface change the feature can no longer be stored in the engine,
                // so it is kept in a global variable
                idFaceFeature = faceFeature;
                restartHandler.removeCallbacks(restartRunnable);
                readHandler.postDelayed(readRunnable, 5000);
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                bmp.compress(Bitmap.CompressFormat.PNG, 100, baos);
                runOnUiThread(() -> {
                    Glide.with(MainActivity.this).load(bmp).into(ivIdCard);
                    compare();
                });
            } else {
                // Likewise, keep the camera-side feature in a global variable
                MainActivity.this.faceFeature = faceFeature;
                isCurrentReady = true;
                runOnUiThread(() -> compare());
            }
        }
    };
3.7 The compare interface

The compare interface changes far less than the ones above. Just note that the comparison model is set to ID_CARD, and that the 0.82 threshold is now checked in your own code rather than passed to the engine.

Witness 2.0:

    private void compare() {
        // ...
        // Compare the witness features with this interface
        CompareResult compareResult = IdCardVerifyManager.getInstance().compareFeature(THRESHOLD);
        LogUtils.dTag(TAG, "compareResult: result " + compareResult.getResult() + ", isSuccess "
                + compareResult.isSuccess() + ", errCode " + compareResult.getErrCode());
        if (compareResult.isSuccess()) {
            playSound(R.raw.compare_success);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_success);
            tvCompareTip.setText(name);
        } else {
            playSound(R.raw.compare_fail);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_fail);
            tvCompareTip.setText(R.string.tip_retry);
        }
        // ...
    }

ArcFace 3.0:

    private void compare() {
        // ...
        // Compare the witness features with this interface
        FaceSimilar compareResult = new FaceSimilar();
        faceEngine.compareFaceFeature(idFaceFeature, faceFeature, CompareModel.ID_CARD, compareResult);
        // The threshold for witness comparison is 0.82
        if (compareResult.getScore() > 0.82) {
            playSound(R.raw.compare_success);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_success);
            tvCompareTip.setText(name);
        } else {
            playSound(R.raw.compare_fail);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_fail);
            tvCompareTip.setText(R.string.tip_retry);
        }
        // ...
    }
3.8 Result Display

At this point, once the now-useless code has been deleted from the Witness 2.0 demo, we have successfully upgraded 2.0 to 3.0. Here is a screenshot of the device after a successful run.

III. Modifying the ArcFace 3.0 demo into a witness program

Compared with upgrading Witness 2.0 to ArcFace 3.0, modifying the ArcFace 3.0 demo directly is much easier; after all, there is no need to change every interface again. All we need to add is the ID-side data input, the ID-side callback, and the comparison logic. So I highly recommend starting directly from ArcFace 3.0: unless you have some special reason, it is much faster than starting from Witness 2.0.

Choosing the interface to modify

First we need to choose an Activity in the demo as the template for our modification. Looking through them, RegisterAndRecognizeActivity seems the most suitable to me, because its camera comparison flow is already complete. We only need to do two things:

  • Add the ID card data input source

For the ID card data input source we simulate ID card information input the same way the demo does, so the inputIdCard method from section II can be applied as-is.

    public void onClickIdCard(View view) {
        // Simulated ID card name, modifiable
        FileInputStream fis;
        // The ID image data
        bmp = null;
        try {
            // Simulated ID card image data source, modifiable
            fis = new FileInputStream(SAMPLE_FACE);
            bmp = BitmapFactory.decodeStream(fis);
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        inputIdCard();
    }

    private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        // Crop the image to 4-byte alignment
        bmp = ArcSoftImageUtil.getAlignedBitmap(bmp, true);
        int width = bmp.getWidth();
        int height = bmp.getHeight();
        // Convert to BGR24 format
        byte[] bgrData = ArcSoftImageUtil.createImageData(bmp.getWidth(), bmp.getHeight(), ArcSoftImageFormat.BGR24);
        int translateResult = ArcSoftImageUtil.bitmapToImageData(bmp, bgrData, ArcSoftImageFormat.BGR24);
        if (translateResult == ArcSoftImageUtilError.CODE_SUCCESS) {
            // The conversion succeeded
            List<FaceInfo> faceInfoList = new ArrayList<>();
            // VIDEO mode is not suitable for static image detection; here frEngine detects the ID photo,
            // with FaceEngine.ASF_FACE_DETECT added when it was initialized
            int detectResult = frEngine.detectFaces(bgrData, width, height, FaceEngine.CP_PAF_BGR24, faceInfoList);
            if (detectResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
                // -2 is used as the trackId: the camera and the ID photo share one faceHelper,
                // and the trackId tells us which side the data came from
                faceHelper.requestFaceFeature(bgrData, faceInfoList.get(0), width, height, FaceEngine.CP_PAF_BGR24, -2);
            }
        } else {
            LogUtils.dTag(TAG, "translate Error result: " + translateResult);
        }
    }
  • Modify the comparison logic

Since the witness comparison is 1:1 in most scenarios, we adjust this in the onFaceFeatureInfoGet callback. First, the trackId of -2 that we planted in inputIdCard above serves to identify ID card data. Second, we need to record the ID card feature so that it can be compared with the face feature from the camera; here global variables are used. Finally, because the two features arrive in no fixed order, status flags record whether each side is ready (you could also just check whether both feature variables hold data and keep them in sync); once both sides are ready, the comparison can run.

            public void onFaceFeatureInfoGet(@Nullable final FaceFeature faceFeature, final Integer requestId, final Integer errorCode) {
                // FR succeeded
                if (faceFeature != null) {
                    // ID data received
                    if (requestId == -2) {
                        isIdCardReady = true;
                        // The feature is kept in a global variable
                        idFaceFeature = faceFeature;
                        compare();
                        return;
                    }
//                  Log.i(TAG, "onPreview: fr end = " + System.currentTimeMillis() + " trackId = " + requestId);
                    Integer liveness = livenessMap.get(requestId);
                    // With liveness detection off, compare directly
                    if (!livenessDetect) {
                        isCurrentReady = true;
                        // Keep the camera-side feature for compare()
                        RegisterAndRecognizeActivity.this.faceFeature = faceFeature;
                        // Prevent extracting features of the same face repeatedly
                        requestFeatureStatusMap.put(requestId, RequestFeatureStatus.SUCCEED);
                        compare();
//                      searchFace(faceFeature, requestId);
                    }
                    // Liveness detection passed: compare the features
                    else if (liveness != null && liveness == LivenessInfo.ALIVE) {
                        isCurrentReady = true;
                        // Keep the camera-side feature for compare()
                        RegisterAndRecognizeActivity.this.faceFeature = faceFeature;
                        // Prevent extracting features of the same face repeatedly
                        requestFeatureStatusMap.put(requestId, RequestFeatureStatus.SUCCEED);
                        compare();
//                      searchFace(faceFeature, requestId);
                    }
                    // No liveness result yet, or not a live face: retry this step later
                    else {
                        // ...
                    }
                }
                // Feature extraction failed
                else {
                    // ...
                }
            }

            @Override
            public void onFaceLivenessInfoGet(@Nullable LivenessInfo livenessInfo, final Integer requestId, Integer errorCode) {
                // ...
            }
  • The compare function:
    private void compare() {
        if (isCurrentReady && isIdCardReady) {
            FaceSimilar similar = new FaceSimilar();
            int compareResult = frEngine.compareFaceFeature(idFaceFeature, faceFeature, CompareModel.ID_CARD, similar);
            if (compareResult == ErrorInfo.MOK && similar.getScore() > 0.82) {
                Log.i(TAG, "compare: success");
            } else {
                Log.i(TAG, "compare: fail");
            }
            // Reset the comparison status once the comparison is done
            isIdCardReady = false;
            isCurrentReady = false;
            // To allow trying the same face again after a comparison, clear its extraction status
            requestFeatureStatusMap.clear();
        }
    }
Summary

With ArcFace 3.0 as the starting point, the modification clearly goes much more smoothly. On top of the existing code we only need to take care of the ID card data input and the logic before and after the comparison; the comparison itself is almost trivial, just a simple interface call. What I wrote here is also kept simple: some business logic is omitted, such as a validity period for the ID card data or enforcing the order in which the two sides' data must arrive, and the UI side of the result display is not done, the comparison result is just printed. This article only offers ideas for reference; you still need to add your own business logic. Finally, here is the log of a successful comparison after the modification.
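As one example of the omitted business logic, here is a hedged sketch, entirely my own rather than from the demo, of the ID card data validity period mentioned above: after a fixed window the ID-side data is invalidated so a stale ID photo cannot be compared against a new face.

    private static final long ID_VALID_MILLIS = 10_000L; // assumed 10-second validity window
    private final Handler idExpireHandler = new Handler(Looper.getMainLooper());
    private final Runnable idExpireRunnable = () -> {
        // Invalidate the ID-side data once the window elapses
        isIdCardReady = false;
        idFaceFeature = null;
    };

    // Call this wherever the ID feature is stored (e.g. the requestId == -2 branch above)
    private void markIdCardReady(FaceFeature feature) {
        idFaceFeature = feature;
        isIdCardReady = true;
        idExpireHandler.removeCallbacks(idExpireRunnable);
        idExpireHandler.postDelayed(idExpireRunnable, ID_VALID_MILLIS);
    }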

Conclusion

Overall, ArcSoft's witness SDK is very good. In terms of recognition quality: my ID card photo was taken some 7 or 8 years ago, and I have since gained enough weight to "look like a different person", yet the recognition accuracy is still very high. "Free forever" is genuinely attractive from a development-cost point of view, so a thumbs-up for what ArcSoft calls a conscientious business. As for the secondary-development interfaces, among the SDKs I have used I personally find ArcSoft's by far the easiest, and the interface documentation is very detailed: every interface is described in depth, and there are standalone demos for individual interface calls as well as an overall flow demo, which is great for beginners. I also ran into some problems along the way, and ArcSoft's support staff answered my questions promptly and accurately. I look forward to ArcSoft's future development and hope they bring us even better products!

Appendix

1. Witness 2.0 demo integrating the ArcFace 3.0 SDK: github.com/1244975831/…

2. ArcFace 3.0 demo modified into a witness program: github.com/1244975831/…

If my demo is helpful to you, please remember to star my project.