This article shows how to use Python for face recognition: detecting and identifying a person in real-time video.

In this deep learning project, we’ll learn how to use Python to recognize faces in live video. We will build this project using Dlib’s face recognition network. Dlib is a general-purpose software library; using the Dlib toolkit, we can build real-world machine learning applications.

In this project, we will first learn how a face recognizer works, and then we will build face recognition in Python.

Face recognition using Python, OpenCV and deep learning

About Dlib face recognition:

Python provides the face_recognition API, which is built on top of dlib’s face recognition algorithm. The face_recognition API allows us to implement face detection, real-time face tracking, and face recognition applications.

Project Preparation:

First, you need to install the Dlib library and face_recognition API from PyPI:

pip3 install dlib 
pip3 install face_recognition

Download the source code:

Face recognition source code

Face recognition in Python

We will build this Python project in two parts. First, build two different Python files for these two parts:

  • **embedding.py:** In this step, we take images of a person as input and create face embeddings from them.
  • **recognise.py:** In this step, we recognize that particular person in the camera frames.

1. embedding.py:

First, create a file embedding.py in your working directory. In this file, we will create face embeddings for a specific face. The face_recognition.face_encodings method is used to create a face embedding. These face embeddings are 128-dimensional vectors. In this vector space, the vectors for different images of the same person are close to each other. After creating the embeddings, we store them in a pickle file.
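To make the "close in vector space" idea concrete, here is a minimal sketch using NumPy with random stand-in vectors (not real face data): the Euclidean distance between two embeddings of the same person is much smaller than the distance between embeddings of different people.

```python
import numpy as np

# Hypothetical 128-dimensional embeddings, random stand-ins for the
# output of face_recognition.face_encodings.
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
person_a_again = person_a + rng.normal(scale=0.05, size=128)  # same face, new photo
person_b = rng.normal(size=128)

def embedding_distance(e1, e2):
    """Euclidean distance between two face embeddings."""
    return np.linalg.norm(e1 - e2)

same = embedding_distance(person_a, person_a_again)
different = embedding_distance(person_a, person_b)
print(same < different)  # embeddings of the same person are closer
```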

Paste the following code into this embedding.py file.

  • Import related libraries:

    import sys
    import cv2
    import face_recognition
    import pickle

  • To identify the person in the pickle file, use their name and unique ID as input:

    name = input("enter name")
    ref_id = input("enter id")

  • Create a pickle file and a dictionary to store the face encodings:

    try:
        f = open("ref_name.pkl", "rb")
        ref_dictt = pickle.load(f)
        f.close()
    except:
        ref_dictt = {}

    ref_dictt[ref_id] = name

    f = open("ref_name.pkl", "wb")
    pickle.dump(ref_dictt, f)
    f.close()

    try:
        f = open("ref_embed.pkl", "rb")
        embed_dictt = pickle.load(f)
        f.close()
    except:
        embed_dictt = {}

  • Open the webcam, take five photos of the person as input, and create their embeddings:

Here, we store the embeddings of a specific person in the embed_dictt dictionary, which we created in the previous step. In this dictionary, we use that person’s ref_id as the key.

To capture an image, press “s” five times. To stop the camera, press “q”:

for i in range(5):
    key = cv2.waitKey(1)
    webcam = cv2.VideoCapture(0)
    while True:
        check, frame = webcam.read()
        cv2.imshow("Capturing", frame)
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]
        key = cv2.waitKey(1)
        if key == ord('s'):
            face_locations = face_recognition.face_locations(rgb_small_frame)
            if face_locations != []:
                face_encoding = face_recognition.face_encodings(frame)[0]
                if ref_id in embed_dictt:
                    embed_dictt[ref_id] += [face_encoding]
                else:
                    embed_dictt[ref_id] = [face_encoding]
                webcam.release()
                cv2.waitKey(1)
                cv2.destroyAllWindows()
                break
        elif key == ord('q'):
            print("Turning off camera.")
            webcam.release()
            print("Camera off.")
            print("Program ended.")
            cv2.destroyAllWindows()
            break
  • Update pickle files with face embedding.

Here, we store embed_dictt in the pickle file. So, to identify that person later, we can load the embeddings directly from this file:

f=open("ref_embed.pkl","wb")
pickle.dump(embed_dictt,f)
f.close()
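As a quick illustration of this pickle round-trip (using dummy lists in place of real 128-float encodings, and a temporary file path rather than the project’s ref_embed.pkl), storing and reloading the dictionary looks like:

```python
import os
import pickle
import tempfile

# Dummy stand-ins for face encodings, keyed by ref_id.
embed_dictt = {"id01": [[0.1] * 4, [0.2] * 4], "id02": [[0.9] * 4]}

path = os.path.join(tempfile.gettempdir(), "ref_embed_demo.pkl")
with open(path, "wb") as f:   # write the dictionary to disk
    pickle.dump(embed_dictt, f)

with open(path, "rb") as f:   # later: load it back for recognition
    loaded = pickle.load(f)

print(loaded == embed_dictt)  # the round-trip preserves the data
```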
  • Now it’s time to execute the first part of the Python project.

Run the Python file and get five image inputs using the name and its ref_id:

python3 embedding.py

2. recognise.py:

Here we will again create face embeddings, this time from the camera frames. We then match each new embedding against the ones stored in the pickle file. A new embedding of the same person will lie close to their stored embeddings in the vector space, so we will be able to identify the person.
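Under the hood, this matching boils down to a nearest-neighbour search with a distance threshold (the face_recognition library uses a default tolerance of 0.6). A simplified sketch with toy 2-D vectors in place of real 128-dimensional embeddings:

```python
import numpy as np

# Toy stored "embeddings" and their ref_ids (real ones are 128-dimensional).
known_face_encodings = np.array([[0.0, 0.0], [1.0, 1.0]])
known_face_names = ["id01", "id02"]

def identify(face_encoding, tolerance=0.6):
    """Return the ref_id of the closest stored embedding, or 'Unknown'."""
    distances = np.linalg.norm(known_face_encodings - face_encoding, axis=1)
    best = int(np.argmin(distances))
    return known_face_names[best] if distances[best] <= tolerance else "Unknown"

print(identify(np.array([0.1, 0.1])))  # close to the first stored vector
print(identify(np.array([5.0, 5.0])))  # far from everything
```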

Now create a new Python file named recognise.py and paste the following code:

  • Import libraries:

    import face_recognition
    import cv2
    import numpy as np
    import glob
    import pickle

  • To load stored pickle files:

    f = open("ref_name.pkl", "rb")
    ref_dictt = pickle.load(f)
    f.close()

    f = open("ref_embed.pkl", "rb")
    embed_dictt = pickle.load(f)
    f.close()

  • Create two lists, one for the ref_ids and one for the embeddings:

    known_face_encodings = []
    known_face_names = []

    for ref_id, embed_list in embed_dictt.items():
        for my_embed in embed_list:
            known_face_encodings += [my_embed]
            known_face_names += [ref_id]

  • Activate the webcam to identify the person:

    video_capture = cv2.VideoCapture(0)

    face_locations = []
    face_encodings = []
    face_names = []
    process_this_frame = True

    while True:
        ret, frame = video_capture.read()
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]
        if process_this_frame:
            face_locations = face_recognition.face_locations(rgb_small_frame)
            face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
            face_names = []
            for face_encoding in face_encodings:
                matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
                name = "Unknown"
                face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
                best_match_index = np.argmin(face_distances)
                if matches[best_match_index]:
                    name = known_face_names[best_match_index]
                face_names.append(name)
        process_this_frame = not process_this_frame
        for (top_s, right, bottom, left), name in zip(face_locations, face_names):
            # Scale back up, since the frame was resized to 1/4 size
            top_s *= 4
            right *= 4
            bottom *= 4
            left *= 4
            cv2.rectangle(frame, (left, top_s), (right, bottom), (0, 0, 255), 2)
            cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
            font = cv2.FONT_HERSHEY_DUPLEX
            # Fall back to the raw label when the face is "Unknown"
            cv2.putText(frame, ref_dictt.get(name, name), (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
        cv2.imshow('Video', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video_capture.release()
    cv2.destroyAllWindows()

Now run the second part of the project to identify this person:

python3 recognise.py

Summary:

In this deep learning project, we developed a face recognition application using the Python library Dlib, the face_recognition API, and OpenCV. We implemented the Python project in two parts:

  • In the first part, we saw how to capture the structure of a face as a face embedding, and how to store these embeddings in a pickle file.
  • In the second part, we saw how to identify a person by comparing new face embeddings with the stored ones.

Application of face recognition technology

At present, face recognition technology in China is applied mainly in three areas: attendance and access control, security, and finance. Specific uses include security monitoring, video face detection, face recognition, and visitor-flow statistics, widely deployed in community and building smart access control, detection of suspicious loitering around perimeters, visitor counting at scenic spots, and so on.

Building on many years of experience in video technology, TSINGSEE Black Rhino Video integrates AI detection and intelligent recognition into a variety of application scenarios. A typical example is the EasyCVR video fusion cloud service, which offers AI face recognition, license plate recognition, voice intercom, sound and light alarms, monitoring video intercom, PTZ control, and data analysis and summarization.