Preface

Recently things have been slow at work, so I spent some time writing a personal face recognition app. It can tell you your gender, age, beauty score, mood, and other information, using the Face++ face recognition API. The project adopts the MVP architecture, and Retrofit, RxJava, Dagger, EventBus, and other frameworks are used for development and decoupling; the UI layout follows Material Design. The main flow is: take a photo, send it to the Face++ server for face recognition, receive the returned information and process it, then mark the faces in the photo and display the results. Without further ado, let's take a look at the app in action (Daniel Wu is as handsome as ever, haha).


(Screenshot: face recognition main interface)

(Screenshot: face recognition details interface)

(Screenshot: multi-face recognition)

reggie1996 – FaceDetect

Process

The whole flow of the project boils down to three simple steps: take a photo, upload it and get the data back, then process and display the data.

Taking a photo

Taking photos requires system permissions, so I encapsulated a method that checks whether the app holds the camera-related permissions. If not, it requests them dynamically and returns false; if it does, it returns true.

public static boolean checkAndRequestPermission(Context context, int requestCode) {
    if (context.checkSelfPermission(Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED
            || context.checkSelfPermission(Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED
            || context.checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        // Some permission is missing: fire the system permission dialog
        ((Activity) context).requestPermissions(new String[]{
                Manifest.permission.WRITE_EXTERNAL_STORAGE,
                Manifest.permission.READ_EXTERNAL_STORAGE,
                Manifest.permission.CAMERA}, requestCode);
        return false;
    } else {
        return true;
    }
}
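For context, here is a minimal sketch of how the helper might be called from an Activity; the request code, the Utils class, and dispatchTakePictureIntent() are placeholder names, not necessarily the ones in the repo:

private static final int REQUEST_TAKE_PHOTO = 1;

private void onTakePhotoClicked() {
    // If everything is already granted, go straight to the camera;
    // otherwise the helper has just shown the permission dialog.
    if (Utils.checkAndRequestPermission(this, REQUEST_TAKE_PHOTO)) {
        dispatchTakePictureIntent();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Retry once the user grants the permissions (checking only the first result for brevity)
    if (requestCode == REQUEST_TAKE_PHOTO && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        dispatchTakePictureIntent();
    }
}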

After obtaining the permissions you can take the photo, but the photo file has to be accessed through a FileProvider, because from Android 7.0 onward an app can no longer hand a plain file:// URI to the camera app. I won't cover FileProvider in detail here; the manifest declaration looks like this:

        <provider
            android:name="android.support.v4.content.FileProvider"
            android:authorities="com.chaochaowu.facedetect.provider"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/file_paths" />
        </provider>
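As a rough sketch of how the photo is then taken through that provider (the file location is my assumption, file_paths.xml must whitelist the chosen directory, and the request code reuses the one from the earlier sketch):

// Create a file for the camera app to write into
File photoFile = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES), "photo.jpg");
// Turn it into a content:// URI via the FileProvider declared above
Uri photoUri = FileProvider.getUriForFile(this,
        "com.chaochaowu.facedetect.provider", photoFile);
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, photoUri);
startActivityForResult(intent, REQUEST_TAKE_PHOTO);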

After taking the photo and reading it back from the file, we get a Bitmap object. There is a big pitfall here: on Samsung phones, the photo read from the file comes back rotated by 90°!! It took me a long time to track this down; at first I thought my phone was broken. After searching online and asking more experienced colleagues, it turned out that Samsung phones generally behave this way, so the photo read from the file has to be corrected.

/**
 * Read the rotation angle recorded in the photo's EXIF information.
 *
 * @param path path of the photo file
 * @return the angle the photo was rotated by (0 if none)
 */
public static int getBitmapDegree(String path) {
    int degree = 0;
    try {
        ExifInterface exifInterface = new ExifInterface(path);
        // Read the orientation tag of the image
        int orientation = exifInterface.getAttributeInt(
                ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
        switch (orientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                degree = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                degree = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                degree = 270;
                break;
            default:
                degree = 0;
                break;
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return degree;
}

/**
 * Rotate a bitmap back by the given angle.
 *
 * @param bm     the bitmap to rotate
 * @param degree the angle to rotate by
 * @return the rotated bitmap
 */
public static Bitmap rotateBitmapByDegree(Bitmap bm, int degree) {
    Bitmap returnBm = null;
    // Build the rotation matrix
    Matrix matrix = new Matrix();
    matrix.postRotate(degree);
    try {
        // Create a new bitmap rotated by the given angle
        returnBm = Bitmap.createBitmap(bm, 0, 0, bm.getWidth(), bm.getHeight(), matrix, true);
    } catch (OutOfMemoryError | Exception e) {
        e.printStackTrace();
    }
    if (returnBm == null) {
        returnBm = bm;
    }
    if (bm != returnBm) {
        bm.recycle();
    }
    return returnBm;
}

These two methods, called one after the other, solve the Samsung photo problem. Their main job is to read the angle by which the image was rotated and then rotate it back to recover the original image. Since not every phone rotates the photos it saves, you first check whether the photo was rotated at all before deciding whether it needs rotating back. With that, you finally have a correctly oriented Bitmap and can move on to the next step.
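A minimal usage sketch, assuming the two helpers live in the project's Utils class and photoPath points at the saved photo file:

Bitmap bitmap = BitmapFactory.decodeFile(photoPath);
int degree = Utils.getBitmapDegree(photoPath); // 0 means the photo was not rotated
if (degree != 0) {
    bitmap = Utils.rotateBitmapByDegree(bitmap, degree);
}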

Uploading the photo and getting the data

Uploading the photo and fetching the result is mainly encapsulated with Retrofit and RxJava. The request parameters can be found in the official Face++ documentation.

/**
 * Retrofit service for the Face++ detect API.
 *
 * @author Chaochaowu
 */
public interface FaceppService {

    /**
     * @param apiKey           the api_key assigned by Face++
     * @param apiSecret        the api_secret assigned by Face++
     * @param imageBase64      the photo, Base64-encoded
     * @param returnLandmark   whether to return face landmarks
     * @param returnAttributes which attributes to return
     * @return an Observable emitting the parsed response
     */
    @POST("facepp/v3/detect")
    @FormUrlEncoded
    Observable<FaceppBean> getFaceInfo(@Field("api_key") String apiKey,
                                       @Field("api_secret") String apiSecret,
                                       @Field("image_base64") String imageBase64,
                                       @Field("return_landmark") int returnLandmark,
                                       @Field("return_attributes") String returnAttributes);
}
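For completeness, this is roughly how such a service could be built; the base URL is my assumption (check the Face++ docs for the endpoint of your region):

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api-us.faceplusplus.com/") // assumed region endpoint
        .addConverterFactory(GsonConverterFactory.create())
        .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
        .build();
FaceppService faceppService = retrofit.create(FaceppService.class);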

The photo has to be Base64-encoded before being uploaded to the server, so I encapsulated a transcoding method for it.

 public static String base64(Bitmap bitmap){
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos);
        byte[] bytes = baos.toByteArray();
        return Base64.encodeToString(bytes, Base64.DEFAULT);
    }

After the processing is complete, a network request can be made to retrieve the data.

@Override
public void getDetectResultFromServer(final Bitmap photo) {
    String s = Utils.base64(photo);
    faceppService.getFaceInfo(BuildConfig.API_KEY, BuildConfig.API_SECRET, s, 1,
            "gender,age,smiling,emotion,beauty")
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(new Observer<FaceppBean>() {
                @Override
                public void onSubscribe(Disposable d) {
                    mView.showProgress();
                }

                @Override
                public void onNext(FaceppBean faceppBean) {
                    handleDetectResult(photo, faceppBean);
                }

                @Override
                public void onError(Throwable e) {
                    mView.hideProgress();
                }

                @Override
                public void onComplete() {
                    mView.hideProgress();
                }
            });
}

The Face++ server processes the photo we upload, analyzes the face information in it, and returns the result as JSON, which is deserialized into the bean class we define.

/**
 * @author Chaochaowu
 */
public class FaceppBean {
    /**
     * Sample response:
     * image_id : Dd2xUw9S/7yjr0oDHHSL/Q==
     * request_id : 1470472868,dacf2ff1-ea45-4842-9c07-6e8418cea78b
     * time_used : 752
     * faces : [{"landmark":{"mouth_upper_lip_left_contour2":{"y":185,"x":146},
     *   "contour_chin":{"y":231,"x":137},"right_eye_pupil":{"y":146,"x":205},
     *   "mouth_upper_lip_bottom":{"y":195,"x":159}},
     *   "attributes":{"gender":{"value":"Female"},"age":{"value":21},
     *   "glass":{"value":"None"},
     *   "headpose":{"yaw_angle":26.625063,"pitch_angle":12.921974,"roll_angle":22.814377},
     *   "smile":{"threshold":30.1,"value":2.566890001296997}},
     *   "face_rectangle":{"width":140,"top":89,"left":104,"height":141},
     *   "face_token":"ed319e807e039ae669a4d1af0922a0c8"}]
     */
    private String image_id;
    private String request_id;
    private int time_used;
    private List<FacesBean> faces;

    // ... getters, setters, and the nested FacesBean classes are omitted here
}

The bean class contains the gender, age, beauty score, emotion, and other attributes recognized for each face, as well as the coordinates of each face within the photo. The next step is to process this data.

Processing the returned data

The data processing mainly does two things. The first is displaying the data as text, which is simple and needs no introduction. The second is marking the faces in the photo: we have to process the Bitmap, using the face coordinates from the returned data to draw a box around each face.

private Bitmap markFacesInThePhoto(Bitmap bitmap, List<FaceppBean.FacesBean> faces) {
        Bitmap tempBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
        Canvas canvas = new Canvas(tempBitmap);
        Paint paint = new Paint();
        paint.setColor(Color.RED);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(10);

        for (FaceppBean.FacesBean face : faces) {
            FaceppBean.FacesBean.FaceRectangleBean faceRectangle = face.getFace_rectangle();
            int top = faceRectangle.getTop();
            int left = faceRectangle.getLeft();
            int height = faceRectangle.getHeight();
            int width = faceRectangle.getWidth();
            canvas.drawRect(left, top, left + width, top + height, paint);
        }
        return tempBitmap;
    }

This encapsulated method uses a Canvas to draw on the photo. Since there may be more than one face in the photo, it loops over all of them with a for loop, reads each face's coordinates, and draws a rectangle starting from the face's top-left corner using the face's width and height.
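Putting the pieces together, the view side might handle the detect result roughly like this (the view and adapter names are placeholders, not the exact code from the repo):

private void showDetectResult(Bitmap photo, FaceppBean result) {
    // Draw a red box over every detected face, then display the marked photo
    Bitmap marked = markFacesInThePhoto(photo, result.getFaces());
    photoImageView.setImageBitmap(marked);
    // The per-face attributes go to the list adapter as text
    faceAdapter.setFaces(result.getFaces());
}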

I used a RecyclerView to display the rest of the information; swipe left and right to browse each face. Each RecyclerView item shows brief information, and tapping an item opens the details page with the full recognition result. The RecyclerView and details screens are very basic operations, so I won't walk through them; I just used a shared element transition to make the jump feel more comfortable. See the code on GitHub for details.
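For reference, a minimal sketch of that shared element jump; the activity, view id, and transition name are placeholders:

Intent intent = new Intent(this, DetailActivity.class);
// The matching view in DetailActivity needs the same android:transitionName
ActivityOptionsCompat options = ActivityOptionsCompat.makeSceneTransitionAnimation(
        this, itemView.findViewById(R.id.face_image), "face_image");
startActivity(intent, options.toBundle());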

That's about everything; finally, take a look at the project architecture. Because various frameworks are used for decoupling, the number of code files grows, but the code in each individual file becomes shorter, clearer, and easier to read, which is the point of decoupling, and it also makes later maintenance easier.

See the GitHub code for more details.

Finally

After finishing this app, I kept thinking about one question: the app gives Daniel Wu a beauty score of over 80, so what does a 100-point face look like? If you're interested, download the code and play with it, and test your own or a friend's beauty score, hehe. GitHub address: reggie1996 – FaceDetect. Finally, I wish everyone a happy life ~