

Author: Liu Xiaolong

Preface

A lot of people think face recognition is very hard to implement: the name alone sounds intimidating, and after a nervous online search, the N-page tutorials make them give up immediately. That used to include me. But face recognition isn't that hard if you only need it to work, not to understand its inner principles. Today we'll see how to implement face recognition easily in under 40 lines of code.

First, a distinction

For most people, distinguishing face detection from face recognition is not a problem at all. But many online tutorials, intentionally or not, call face detection "face recognition," misleading readers into thinking the two are the same. In fact, face detection answers whether there is a face in an image, while face recognition answers whose face it is. Face detection can be seen as the preliminary step of face recognition.

Today we’re going to do face recognition.

The tools used

  • Anaconda 2 (Python 2)
  • Dlib
  • scikit-image

Dlib

It's worth saying a few words about the main tool we'll use today. Dlib is a cross-platform, general-purpose library written in modern C++, and its author is diligent about keeping it up to date. Dlib covers machine learning, image processing, numerical algorithms, data compression, and more. More importantly, Dlib is well documented and full of examples. Like many libraries, Dlib provides a Python interface that is easy to install with pip:

pip install dlib

scikit-image is installed the same way:

pip install scikit-image
  • Note: If `pip install dlib` fails, installing Dlib is more troublesome, but the error messages are very detailed; just follow them step by step and you'll be fine.

Face recognition

We use Dlib for face recognition because it already does most of the work for us; we just need to call it. Dlib ships with a face detector, a trained face keypoint predictor, and a trained face recognition model. Today our main goal is implementation, not principle. Interested readers can visit the official website to study the source code and the papers it references.

Since today’s example is no more than 40 lines of code, it shouldn’t be too difficult. Difficult things are in the source code and papers.

Let’s first look at what we need to use today through the file tree:


We prepare images of six candidates and place them in the candidate-faces folder; test.jpg is the face image to be recognized. Our job is to detect the face in test.jpg and determine which candidate it is.

girl-face-rec.py is our Python script. shape_predictor_68_face_landmarks.dat is a trained face keypoint predictor, and dlib_face_recognition_resnet_model_v1.dat is a trained ResNet face recognition model. ResNet is the deep residual network proposed by Kaiming He's team at Microsoft, which won the ImageNet 2015 championship. By letting the network learn residuals, ResNet can go deeper and achieve higher accuracy than plain convolutional networks.

1. Prepare

Both shape_predictor_68_face_landmarks.dat and dlib_face_recognition_resnet_model_v1.dat can be found here. If you can't click the hyperlink, enter the following URL directly: dlib.net/files/.
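Assuming a Unix-like environment with wget available, fetching the two models might look like this (the files hosted at dlib.net/files are bzip2-compressed, so they need extracting after download):

```shell
# Download the trained keypoint predictor and face recognition model,
# then decompress them into the working directory
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
wget http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2
bunzip2 dlib_face_recognition_resnet_model_v1.dat.bz2
```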

Then prepare pictures of several people's faces as the candidate faces, preferably frontal shots, and put them in the candidate-faces folder.

Here are six images, as follows:


They are


Then prepare four images of faces to be recognized. In fact, one is enough. Here is just to see the different situations:


As you can see, the first two look quite different from their photos in the candidate folder, the third is the original image of one candidate, and the fourth is slightly turned sideways with a shadow on the right.

2. Recognition process

With the data ready, it’s time for the code. The general process of identification is as follows:

  • First, perform face detection, keypoint extraction, and descriptor generation on each candidate image, and save the candidate descriptors.
  • Then perform face detection, keypoint extraction, and descriptor generation on the test face.
  • Finally, compute the Euclidean distance between the test face's descriptor and each candidate's descriptor; the candidate with the smallest distance is judged to be the same person.
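The final step is plain numpy. Here is a minimal sketch with made-up 128-D vectors (real descriptors come from Dlib's compute_face_descriptor); Dlib's documentation suggests that for this model a distance below about 0.6 usually means the same person:

```python
import numpy

# Two hypothetical 128-D face descriptors; the values are made up
# purely to illustrate the distance computation.
a = numpy.zeros(128)
b = numpy.ones(128) * 0.05

# Euclidean distance between descriptors: smaller means more similar
dist = numpy.linalg.norm(a - b)
print(dist)  # → about 0.566, i.e. 0.05 * sqrt(128)
```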

3. Code

The code needs little explanation because it is well commented. Below is girl-face-rec.py:

```python
# -*- coding: UTF-8 -*-
import sys, os, dlib, glob, numpy
from skimage import io

if len(sys.argv) != 5:
    print "please check the parameters are correct"
    exit()

# 1. Keypoint predictor path
predictor_path = sys.argv[1]
# 2. Face recognition model path
face_rec_model_path = sys.argv[2]
# 3. Candidate face folder
faces_folder_path = sys.argv[3]
# 4. Face image to be recognized
img_path = sys.argv[4]

# 1. Load the face detector
detector = dlib.get_frontal_face_detector()
# 2. Load the keypoint predictor
sp = dlib.shape_predictor(predictor_path)
# 3. Load the face recognition model
facerec = dlib.face_recognition_model_v1(face_rec_model_path)

# win = dlib.image_window()

# List of candidate face descriptors
descriptors = []

# For each candidate face in the folder:
# 1. face detection, 2. keypoint extraction, 3. descriptor generation
for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")):
    print("Processing file: {}".format(f))
    img = io.imread(f)
    # win.clear_overlay()
    # win.set_image(img)

    # 1. Face detection
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))

    for k, d in enumerate(dets):
        # 2. Keypoint extraction
        shape = sp(img, d)
        # win.clear_overlay()
        # win.add_overlay(d)
        # win.add_overlay(shape)

        # 3. Descriptor extraction, a 128-D vector
        face_descriptor = facerec.compute_face_descriptor(img, shape)
        # Convert to a numpy array
        v = numpy.array(face_descriptor)
        descriptors.append(v)

# Process the face to be recognized the same way
img = io.imread(img_path)
dets = detector(img, 1)

dist = []
for k, d in enumerate(dets):
    shape = sp(img, d)
    face_descriptor = facerec.compute_face_descriptor(img, shape)
    d_test = numpy.array(face_descriptor)

    # Compute the Euclidean distance to each candidate descriptor
    for i in descriptors:
        dist_ = numpy.linalg.norm(i - d_test)
        dist.append(dist_)

# List of candidate names
candidate = ['Unknown1', 'Unknown2', 'Shishi', 'Unknown4', 'Bingbing', 'Feifei']

# Pair candidates with distances and sort by distance
c_d = dict(zip(candidate, dist))
cd_sorted = sorted(c_d.iteritems(), key=lambda d: d[1])
print "\nThe person is: ", cd_sorted[0][0]
dlib.hit_enter_to_continue()
```

4. Running result

Open a command line in the folder containing the .py file and run the following command:

python girl-face-rec.py 1.dat 2.dat ./candidate-faces test1.jpg

Since the names shape_predictor_68_face_landmarks.dat and dlib_face_recognition_resnet_model_v1.dat are too long, I renamed them 1.dat and 2.dat.

The running results are as follows:

The person is Bingbing.

For those of you with poor memory, check out test1.jpg. If you’re interested, try running all four test images.

Note that the output for the first three images is very good, but the fourth test image is recognized as candidate 4. Comparing the two pictures makes the cause of the confusion easy to see.

After all, a machine is not a person, and machine intelligence still needs people to improve it.

Interested readers can go on to research how to improve recognition accuracy, for example by taking multiple candidate images per person and comparing the average distance to each person. The rest is up to you.
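That averaging idea can be sketched as follows. The candidate names and descriptor values here are made up for illustration; in practice each descriptor would come from compute_face_descriptor:

```python
import numpy

# Hypothetical setup: several descriptors per candidate instead of one
candidates = {
    'Bingbing': [numpy.full(128, 0.10), numpy.full(128, 0.12)],
    'Shishi':   [numpy.full(128, 0.30), numpy.full(128, 0.28)],
}
d_test = numpy.full(128, 0.11)

# Average Euclidean distance from the test descriptor to each
# candidate's set of images
avg = {
    name: numpy.mean([numpy.linalg.norm(v - d_test) for v in vs])
    for name, vs in candidates.items()
}

# The candidate with the smallest average distance wins
best = min(avg, key=avg.get)
print(best)  # → Bingbing
```

Averaging over several photos per person makes the result less sensitive to a single unusual pose or lighting condition, which is exactly what tripped up the fourth test image.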

Related recommendations

Laravel integrates universal image management capabilities to build an efficient image processing service


This article has been authorized by the author for publication by the Tencent Cloud community. Please indicate the source when reproducing. Original link: www.qcloud.com/community/a… For more practical technical articles from Tencent, welcome to the Tencent Cloud technology community.