Git address: github.com/chenlinzhon…

This article describes in detail the face detection and recognition methods used in the system. Based on a Python 2.7.10 / OpenCV2 / TensorFlow 1.7.0 environment, the system reads video from a camera, detects faces, and recognizes them. Because the model files are too large to upload to Git, the whole project is on Baidu cloud disk, address: pan.baidu.com/s/1TaalpwQw…

Face recognition is a hot topic in computer vision. In laboratory conditions, many face recognition systems have caught up with (or exceeded) human recognition accuracy (accuracy: 0.9427~0.9920), for example Face++, DeepID3, and FaceNet (for details, see the survey of face recognition technology based on deep learning). However, because of lighting, angle, expression, age, and other factors, face recognition technology still cannot be widely applied in real life. This article builds a real-time face detection and recognition system in a Python/OpenCV/TensorFlow environment, using FaceNet (LFW: 0.9963) as the foundation, and explores the difficulties of putting a face recognition system into practical use. The main contents are as follows:

  1. Use the HTML5 video tag to open the camera and capture frames, and use the jquery.facedetection component for rough client-side face detection
  2. Upload the face image to the server and detect the face with MTCNN
  3. Align the face with an OpenCV affine transform and save the aligned face
  4. Run the aligned face through a pre-trained FaceNet model to embed it into a 512-dimension feature vector
  5. Build an efficient Annoy index over the embedding features for face retrieval

Face collection

The HTML5 video tag makes it convenient to read video frames from the camera. The code below reads frames from the camera; the face-detection component then crops the image once a face is found and uploads it to the server. Add video and canvas tags to the HTML file:

<div class="booth">
    <video id="video" width="400" height="300" muted class="abs" ></video>
    <canvas id="canvas" width="400" height="300"></canvas>
  </div>

Turn on the webcam

var video = document.getElementById('video');
var vendorUrl = window.URL || window.webkitURL;
// media object
navigator.getMedia = navigator.getUserMedia ||
                     navigator.webkitGetUserMedia ||
                     navigator.mozGetUserMedia ||
                     navigator.msGetUserMedia;
navigator.getMedia({
    video: true,  // use the camera
    audio: false  // do not use audio
}, function (stream) {
    video.src = vendorUrl.createObjectURL(stream);
    video.play();
});

Face detection using the jquery.facedetection component

$('#canvas').faceDetection()

Take a screenshot of the detected face and convert the image to base64 format for easy upload:

context.drawImage(video, 0, 0, video.width, video.height);
var base64 = canvas.toDataURL('image/png');

Upload the base64 image to the server

// upload the face image to the server
function upload(base64) {
    $.ajax({
        "type": "POST",
        "url": "/upload.php",
        "data": {'img': base64},
        'dataType': 'json',
        beforeSend: function() {},
        success: function(result) {
            console.log(result);
            img_path = result.data.file_path;
        }
    });
}

The server-side code that receives the image is implemented in PHP:

function base64_image_content($base64_image_content, $path) {
    if (preg_match('/^(data:\s*image\/(\w+);base64,)/', $base64_image_content, $result)) {
        $type = $result[2];
        $new_file = $path."/";
        if (!file_exists($new_file)) {
            // create the folder if it does not exist
            mkdir($new_file, 0700, true);
        }
        $new_file = $new_file.time().".{$type}";
        if (file_put_contents($new_file, base64_decode(str_replace($result[1], '', $base64_image_content)))) {
            return $new_file;
        } else {
            return false;
        }
    } else {
        return false;
    }
}
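For reference, if the upload endpoint were handled by the Python face_server instead, the same decoding could be sketched as follows. This is a hypothetical helper, not part of the project; the file-naming scheme simply mirrors the PHP code above:

```python
import base64
import os
import re
import time


def save_base64_image(data_url, path):
    """Decode a data-URL image (as produced by canvas.toDataURL) and save it."""
    m = re.match(r'^data:\s*image/(\w+);base64,', data_url)
    if m is None:
        return None
    ext = m.group(1)
    raw = base64.b64decode(data_url[m.end():])
    if not os.path.exists(path):
        os.makedirs(path)  # create the target folder if missing
    filename = os.path.join(path, "%d.%s" % (int(time.time()), ext))
    with open(filename, "wb") as f:
        f.write(raw)
    return filename
```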

Face detection

There are many face detection methods, such as OpenCV's built-in Haar feature cascade classifier and dlib's face detector. OpenCV's method has the advantage of being simple and fast; the problem is poor detection quality. It handles frontal, upright, well-lit faces, but fails on profile, tilted, or badly lit ones, so it is not suitable for field use. dlib's detector works better than OpenCV's, but its robustness still falls short of field-application standards. This article uses the deep-learning-based MTCNN face detector (MTCNN: Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Neural Networks). MTCNN is more robust to changes in lighting, angle, and expression in natural environments, detects faces more reliably, and has a modest memory footprint, so it can run in real time. The MTCNN used in this article is based on a Python and TensorFlow implementation (code from davidsandberg; for a caffe implementation see kpzhang93).

model = os.path.abspath(face_comm.get_conf('mtcnn', 'model'))

class Detect:
    def __init__(self):
        self.detector = MtcnnDetector(model_folder=model, ctx=mx.cpu(0),
                                      num_worker=4, accurate_landmark=False)

    def detect_face(self, image):
        img = cv2.imread(image)
        results = self.detector.detect_face(img)
        boxes = []
        key_points = []
        if results is not None:
            boxes = results[0]
            points = results[1]
            for p in points:
                faceKeyPoint = []
                for i in range(5):
                    faceKeyPoint.append([p[i], p[i + 5]])
                key_points.append(faceKeyPoint)
        return {"boxes": boxes, "face_key_point": key_points}

See fcce_detect.py for the code

Face alignment

Sometimes the captured face is tilted. To improve detection quality, we need to warp the face to a standard position, which we define ourselves. Assume the standard head pose we set looks like this:

Assume that the coordinates of the two eyes and the nose are a(10,30), b(20,30), c(15,45). For details, see the alignment item in the config.ini file.

OpenCV's affine transform is used to align the face; first compute the affine transformation matrix:

dst_point = [a, b, c]
tranform = cv2.getAffineTransform(source_point, dst_point)

Affine transformation:

img_new = cv2.warpAffine(img, tranform, imagesize)

Refer to the face_align.py file for details
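The alignment math can be checked without OpenCV: cv2.getAffineTransform solves a 2x3 matrix M from three point pairs such that M applied to [x, y, 1] lands each source landmark on its target. A pure-numpy sketch using the standard positions a(10,30), b(20,30), c(15,45) from the text (the example source landmarks are made up for illustration):

```python
import numpy as np

# Target landmark positions from the text: a(10,30), b(20,30), c(15,45)
dst = np.float32([[10, 30], [20, 30], [15, 45]])


def affine_from_points(src, dst):
    """Solve the 2x3 affine matrix M with M @ [x, y, 1] = dst point,
    the same system cv2.getAffineTransform solves from three point pairs."""
    A = np.hstack([src, np.ones((3, 1))])  # 3x3 homogeneous source points
    return np.linalg.solve(A, dst).T       # 2x3 affine matrix


# Example: landmarks detected 5px down-right of the standard positions
src = np.float32([[15, 35], [25, 35], [20, 50]])
M = affine_from_points(src, dst)

# Applying M to a source landmark recovers the standard position
p = M @ np.array([15, 35, 1.0])
```

The resulting M is what you would pass to cv2.warpAffine to warp the whole image.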

Feature extraction

The aligned faces are fed into the pre-trained FaceNet model, which embeds each detected face into a 512-dimension feature vector; the vectors are stored in an LMDB file as (id, vector) pairs.

 facenet.load_model(facenet_model_checkpoint)
 images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
 embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
 phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")

 face=self.dectection.find_faces(image)
 prewhiten_face = facenet.prewhiten(face.image)
 # Run forward pass to calculate embeddings
 feed_dict = {images_placeholder: [prewhiten_face], phase_train_placeholder: False}
 return self.sess.run(embeddings, feed_dict=feed_dict)[0]

See face_encoder.py for the code
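The facenet.prewhiten call above normalizes the pixel values before they enter the network. A numpy sketch of what it does, based on the davidsandberg implementation (treat the exact formula as an assumption):

```python
import numpy as np


def prewhiten(x):
    """Normalize an image array to zero mean and unit standard deviation."""
    mean = x.mean()
    std = x.std()
    # guard against near-constant images (division by ~0)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    return (x - mean) / std_adj
```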

Face feature indexing

Face recognition cannot compare every face against every other face; that is too slow. Features extracted from the same person are similar, so a KNN classifier could be used for identification. Here we use the more efficient Annoy algorithm to index the face features. The index rests on one assumption: each face feature can be seen as a point in a high-dimensional space. If two points are very close (the same person), most random hyperplanes will not separate them; in other words, when the space is split by hyperplanes, similar points tend to land on the same side, in the same region (see: github.com/spotify/ann…
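For intuition, here is the brute-force linear scan that the Annoy index approximates; it is exact but O(N) per query (pure numpy, for illustration only):

```python
import numpy as np


def linear_search(query, ids, vectors, n=1):
    """Return the n ids whose vectors are closest to query (Euclidean)."""
    dists = np.linalg.norm(vectors - np.asarray(query), axis=1)
    order = np.argsort(dists)[:n]
    return [ids[i] for i in order], dists[order].tolist()
```

Annoy trades a little accuracy for much faster queries by only searching the tree leaves the query falls into.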

lmdb_file = self.lmdb_file
if os.path.isdir(lmdb_file):
    evn = lmdb.open(lmdb_file)
    wfp = evn.begin()
    annoy = AnnoyIndex(self.f)
    for key, value in wfp.cursor():
        key = int(key)
        value = face_comm.str_to_embed(value)
        annoy.add_item(key, value)
    annoy.build(self.num_trees)
    annoy.save(self.annoy_index_path)

See face_pierced.py for the code

Face recognition

After the three steps above we have the face features. Query the index for the nearest points by Euclidean distance; if the distance is less than 0.6 (set the threshold according to your actual data), the two faces are considered the same person, and the person's information is then looked up in the database by id.
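The distance test can be written as a small helper (0.6 is the threshold quoted above; tune it for your data):

```python
import numpy as np


def is_same_person(emb1, emb2, threshold=0.6):
    """Treat two embeddings as the same person if their
    Euclidean distance falls below the threshold."""
    dist = float(np.linalg.norm(np.asarray(emb1) - np.asarray(emb2)))
    return dist < threshold
```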

# Find similar faces based on facial features
def query_vector(self,face_vector):
    n = int(face_comm.get_conf('annoy', 'num_nn_nearst'))
    return self.annoy.get_nns_by_vector(face_vector,n,include_distances=True)

See face_pierced.py for the code

Installation and deployment

The system consists of two modules:

  • face_web: provides user registration and login and face collection; implemented in PHP
  • face_server: provides face detection, cropping, alignment, and recognition; implemented in Python

The modules communicate with each other over sockets. The message format is length + content.
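A minimal sketch of such length-prefixed framing (the 4-byte big-endian header is an assumption for illustration; the article does not specify the exact header layout):

```python
import struct


def pack_message(payload):
    """Prefix the content bytes with a 4-byte big-endian length header."""
    return struct.pack('>I', len(payload)) + payload


def unpack_message(data):
    """Read the length header and return exactly that many content bytes."""
    (length,) = struct.unpack('>I', data[:4])
    return data[4:4 + length]
```

The length prefix lets the receiver know how many bytes to read from the socket before parsing the content.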

The face_server configuration is in the config.ini file

1. Docker images used

  • face_server docker image: shareclz/python2.7.10-face-image
  • face_web image: skiychan/nginx-php7

Assume that the project path is /data1/face-login

2. Install face_server container

docker run -it --name=face_server --net=host -v /data1:/data1 shareclz/python2.7.10-face-image /bin/bash
cd /data1/face-login
python face_server.py

3. Install the face_web container

docker run -it --name=face_web --net=host  -v /data1:/data1  skiychan/nginx-php7 /bin/bash
cd /data1/face-login
php -S 0.0.0.0:9988 -t ./web/

End result:

After loading the MTCNN and FaceNet models, face_server waits for face requests

Recognition fails for an unregistered face

Face registered

The login is successful

References

zhuanlan.zhihu.com/p/25025596

github.com/spotify/ann…

blog.csdn.net/just_sort/a…

blog.csdn.net/oTengYue/ar…