Source: matthewearl.github.io/2015/07/28/…

Translation: Little Marco

Editor: Captain


Remember? Last winter, deepfakes surfaced on Reddit: neural networks that swap human faces, used to graft Hollywood actresses into porn videos.

The project spawned FakeApp, a desktop application that can put anyone’s face on any actor, which is how Nicolas Cage ended up “starring” in just about every movie. We’ve covered these projects in detail before:

Stunning! Someone is making fake adult videos with AI!

AI has decided that he will be Best Actor at every Oscars to come.

So, are you impressed by the face-swapping effect? Even without neural networks, we can swap faces in still images using Python and a few libraries, and that alone is enough to show off Python’s “magic”.


Here’s how to do a face swap in Python.

In this article, we’ll show how a short Python script (about 200 lines) can automatically replace the facial features in one image with those from another. That is, to achieve the following effect:

The specific process is divided into four steps:

  • Detect facial landmarks;
  • Rotate, scale, and translate image 2 to fit image 1;
  • Adjust the colour balance of image 2 to match image 1;
  • Blend the features of image 2 into image 1;

A link to the complete script is at the bottom of this article.
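Before diving in, here is a rough sketch of how the four steps chain together at the top level. This is only an outline: the file names are placeholders, each helper function is developed in the sections below, and the real script aligns on a subset of the 68 landmarks rather than all of them, a detail omitted here for brevity.

import cv2
import numpy

im1 = cv2.imread("head.jpg", cv2.IMREAD_COLOR)    # image 1: the base image (placeholder name)
im2 = cv2.imread("face.jpg", cv2.IMREAD_COLOR)    # image 2: supplies the face (placeholder name)

landmarks1 = get_landmarks(im1)                   # step 1: detect facial landmarks
landmarks2 = get_landmarks(im2)

M = transformation_from_points(landmarks1, landmarks2)   # step 2: align image 2 with image 1
warped_im2 = warp_im(im2, M, im1.shape)

warped_corrected_im2 = correct_colours(im1, warped_im2, landmarks1)   # step 3: match colours

mask = get_face_mask(im2, landmarks2)             # step 4: blend with a feathered mask
warped_mask = warp_im(mask, M, im1.shape)
combined_mask = numpy.max([get_face_mask(im1, landmarks1), warped_mask], axis=0)

output_im = im1 * (1.0 - combined_mask) + warped_corrected_im2 * combined_mask
cv2.imwrite("output.jpg", output_im)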

Extracting facial landmarks with dlib

This script uses dlib’s Python bindings to extract the facial landmarks:

Dlib implements the algorithm from Vahid Kazemi and Josephine Sullivan’s paper One Millisecond Face Alignment with an Ensemble of Regression Trees. The algorithm itself is quite complex, but using it through the dlib interface is quite simple:

import cv2
import dlib
import numpy

PREDICTOR_PATH = "/home/matt/dlib-18.16/shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

class TooManyFaces(Exception):
    pass

class NoFaces(Exception):
    pass

def get_landmarks(im):
    rects = detector(im, 1)

    if len(rects) > 1:
        raise TooManyFaces
    if len(rects) == 0:
        raise NoFaces

    return numpy.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])

The get_landmarks() function takes an image as a NumPy array and returns a 68×2 matrix. Each row of the matrix corresponds to the x and y coordinates of one feature point in the input image.

The feature extractor requires a rough bounding box as input to the algorithm. This is provided by a traditional face detector, which returns a list of rectangles, each corresponding to a face in the image.

Generating the predictor requires a pre-trained model. The model is available for download from the dlib SourceForge repository.

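With the model file in place, usage is straightforward; here is a short sketch (the image file name is just an example):

im = cv2.imread("face.jpg", cv2.IMREAD_COLOR)  # example file name
landmarks = get_landmarks(im)

print(landmarks.shape)   # (68, 2)
print(landmarks[30])     # x, y coordinates of the tip of the nose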

Aligning faces with Procrustes analysis

We now have two landmark matrices, each row of which contains the coordinates of one facial feature (for example, row 30 gives the coordinates of the tip of the nose). We just have to work out how to rotate, translate, and scale the points of the first matrix so that they fit the points of the second as closely as possible. The same transformation can then be used to overlay the second image on the first.

To put this mathematically, we look for s, R, and T that minimize

$$\sum_{i=1}^{68} \left\lVert s R p_i^T + T - q_i^T \right\rVert^2$$

where R is a 2×2 orthogonal matrix, s is a scalar, T is a two-dimensional vector, and p_i and q_i are the rows of the facial landmark matrices calculated earlier.

It turns out that Ordinary Procrustes Analysis can solve these problems:

def transformation_from_points(points1, points2):
   points1 = points1.astype(numpy.float64)
   points2 = points2.astype(numpy.float64)

   c1 = numpy.mean(points1, axis=0)
   c2 = numpy.mean(points2, axis=0)
   points1 -= c1
   points2 -= c2

   s1 = numpy.std(points1)
   s2 = numpy.std(points2)
   points1 /= s1
   points2 /= s2

   U, S, Vt = numpy.linalg.svd(points1.T * points2)
   R = (U * Vt).T

   return numpy.vstack([numpy.hstack(((s2 / s1) * R,
                                      c2.T - (s2 / s1) * R * c1.T)),
                        numpy.matrix([0., 0., 1.])])

Let’s walk through the code step by step:

1. Convert the input matrices to floating point. This is required for the operations that follow.

2. Subtract the centroid from each point set. Once an optimal scaling and rotation has been found for the resulting point sets, the centroids c1 and c2 can be used to find the full solution.

3. Similarly, divide each point set by its standard deviation. This removes the scaling component of the problem.

4. Calculate the rotation portion using Singular Value Decomposition. See the Wikipedia article on the Orthogonal Procrustes Problem for details of how this works.

5. Return the complete transformation as an affine transformation matrix.
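As a quick sanity check (not part of the original script), we can apply a known similarity transform to a random point set and verify that transformation_from_points() recovers it:

import numpy

# Build a random point set and a known similarity transform:
# rotate by 0.3 radians, scale by 2, translate by (10, -5).
numpy.random.seed(0)
points1 = numpy.matrix(numpy.random.randn(68, 2))

theta = 0.3
R_true = numpy.matrix([[numpy.cos(theta), -numpy.sin(theta)],
                       [numpy.sin(theta),  numpy.cos(theta)]])
points2 = 2.0 * points1 * R_true.T + numpy.matrix([[10.0, -5.0]])

M = numpy.matrix(transformation_from_points(points1, points2))

# Apply the recovered affine transform to points1 and compare with points2.
mapped = (M[:2, :2] * points1.T).T + M[:2, 2].T
print(numpy.abs(mapped - points2).max())   # should print a value very close to zero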

The result can then be inserted into OpenCV’s cv2.warpAffine function to map the second image to the first:

def warp_im(im, M, dshape):
   output_im = numpy.zeros(dshape, dtype=im.dtype)
   cv2.warpAffine(im,
                  M[:2],
                  (dshape[1], dshape[0]),
                  dst=output_im,
                  borderMode=cv2.BORDER_TRANSPARENT,
                  flags=cv2.WARP_INVERSE_MAP)
   return output_im
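For example, assuming im1, im2, landmarks1, and landmarks2 were obtained with get_landmarks() as above, image 2 can be mapped into image 1’s coordinate space like so (note that the full script computes the transformation from a subset of the landmarks rather than all 68; that detail is glossed over here):

M = transformation_from_points(landmarks1, landmarks2)
warped_im2 = warp_im(im2, M, im1.shape)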

Correcting the colour of the second image

If we try to overlay facial features directly at this point, we quickly discover a problem:

The difference in skin tone and lighting between the two images creates a discontinuity around the edges of the overlaid region. So we try to fix that:

COLOUR_CORRECT_BLUR_FRAC = 0.6
LEFT_EYE_POINTS = list(range(42, 48))
RIGHT_EYE_POINTS = list(range(36, 42))

def correct_colours(im1, im2, landmarks1):
    blur_amount = COLOUR_CORRECT_BLUR_FRAC * numpy.linalg.norm(
        numpy.mean(landmarks1[LEFT_EYE_POINTS], axis=0) -
        numpy.mean(landmarks1[RIGHT_EYE_POINTS], axis=0))
    blur_amount = int(blur_amount)
    if blur_amount % 2 == 0:
        blur_amount += 1
    im1_blur = cv2.GaussianBlur(im1, (blur_amount, blur_amount), 0)
    im2_blur = cv2.GaussianBlur(im2, (blur_amount, blur_amount), 0)

    # Avoid divide-by-zero errors.
    im2_blur += 128 * (im2_blur <= 1.0)

    return (im2.astype(numpy.float64) * im1_blur.astype(numpy.float64) /
            im2_blur.astype(numpy.float64))

What’s the effect now? Let’s see:

This function attempts to change the colouring of image 2 to match that of image 1. It does so by dividing im2 by a Gaussian blur of im2, then multiplying by a Gaussian blur of im1. The idea here is RGB scaling colour correction, but instead of a constant scale factor across the whole image, each pixel gets its own, locally computed scale factor.
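In equation form, writing G for the Gaussian blur with the kernel size discussed below, each output pixel x is computed per RGB channel as:

$$\text{output}(x) = \text{im2}(x) \cdot \frac{G(\text{im1})(x)}{G(\text{im2})(x)}$$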

This approach can only correct lighting differences between the two images to an extent. For example, if image 1 is lit from one side but image 2 is lit uniformly, then one side of image 2 will still look darker after colour correction.

That said, this is a fairly crude solution, and the key is an appropriately sized Gaussian kernel. Too small, and facial features from image 1 will show up in image 2. Too large, and the kernel strays outside the face area onto the pixels being overlaid, causing discolouration. The kernel used here is 0.6 times the pupillary distance.
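Applying the correction to the warped copy of image 2 from the previous section produces the colour-matched face; this is the warped_corrected_im2 used in the final blend at the end of the article (assuming im1, warped_im2, and landmarks1 from the steps above):

warped_corrected_im2 = correct_colours(im1, warped_im2, landmarks1)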

Blending the features of image 2 into image 1

A mask is used to select which parts of image 2 and which parts of image 1 should show in the final image:

Regions with a value of 1 (white) are where image 2 should show, and regions with a value of 0 (black) are where image 1 should show. Values between 0 and 1 blend image 1 and image 2 together.

Here is the code that generates the above:

LEFT_EYE_POINTS = list(range(42, 48))
RIGHT_EYE_POINTS = list(range(36, 42))
LEFT_BROW_POINTS = list(range(22, 27))
RIGHT_BROW_POINTS = list(range(17, 22))
NOSE_POINTS = list(range(27, 35))
MOUTH_POINTS = list(range(48, 61))
OVERLAY_POINTS = [
   LEFT_EYE_POINTS + RIGHT_EYE_POINTS + LEFT_BROW_POINTS + RIGHT_BROW_POINTS,
   NOSE_POINTS + MOUTH_POINTS,
]
FEATHER_AMOUNT = 11

def draw_convex_hull(im, points, color):
   points = cv2.convexHull(points)
   cv2.fillConvexPoly(im, points, color=color)

def get_face_mask(im, landmarks):
    im = numpy.zeros(im.shape[:2], dtype=numpy.float64)

    for group in OVERLAY_POINTS:
        draw_convex_hull(im,
                         landmarks[group],
                         color=1)

    im = numpy.array([im, im, im]).transpose((1, 2, 0))

    im = (cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0) > 0) * 1.0
    im = cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0)

    return im

mask = get_face_mask(im2, landmarks2)
warped_mask = warp_im(mask, M, im1.shape)
combined_mask = numpy.max([get_face_mask(im1, landmarks1), warped_mask],
                         axis=0)

Let’s break it down:

  • A routine, get_face_mask(), is defined to generate a mask for an image and a landmark matrix. The mask draws two white convex polygons: one around the eye area and one around the nose and mouth area. It then feathers the edges of the mask outward by 11 pixels, which helps hide any remaining discontinuities.
  • A face mask is generated for both images. Using the transformation from step 2, the mask for image 2 is converted into image 1’s coordinate space.
  • The two masks are then combined by taking the element-wise maximum. This ensures that the features of image 1 are covered up and the features of image 2 show through.

Finally, apply the mask to produce the final image:

output_im = im1 * (1.0 - combined_mask) + warped_corrected_im2 * combined_mask

And there it is: face swap complete!

Attached: the complete code for this project is on GitHub.