Before I start this ARKit tutorial, let me quickly explain the different parts of the TrueDepth camera system. Like most iPhone/iPad front-facing camera assemblies, it includes a built-in microphone, a 7-megapixel camera, an ambient light sensor, a proximity sensor, and a speaker. What makes the TrueDepth camera unique are its dot projector, flood illuminator, and infrared camera.

The dot projector projects more than 30,000 invisible dots onto the user’s face to create a face map (more on this later). The infrared camera reads the dot pattern, captures an infrared image, and sends the data to the Apple A12 Bionic processor for comparison. Finally, the flood illuminator uses infrared light so face recognition works even in dark environments.

Together, these sensors enable magical experiences like Animoji and Memoji. With the TrueDepth camera, we can use 3D models of the user’s face and head to create even more special effects in our apps.


Demo Project

I believe the most important thing for developers is to learn how to use the TrueDepth camera and face tracking to create amazing face-recognition experiences for users. In this tutorial, I will explain how to use ARKit’s ARFaceTrackingConfiguration to work with those 30,000+ mapped points and identify different facial movements.

The end result will look like this:

Let’s get started!

You’ll need to run this project on an iPhone X, XS, or XR, or an iPad Pro (3rd generation), as only these models have the TrueDepth camera. In this article, we will use Swift 5 and Xcode 10.2.

**Editor’s note:** If you are not familiar with ARKit, please refer to our other ARKit tutorials.

Build an ARKit Project to Track Facial Movements

First, open Xcode and create a new Xcode project. Under Templates, make sure you select the Augmented Reality App for iOS.

Next, name your project. I named the project True Depth. Please make sure you set Language to Swift and Content Technology to SceneKit.

Go to Main.storyboard. There should be a single view controller, and the ARSCNView in the middle should already be connected to an outlet in the code.

What we need to do here is very simple: add a UIView and place a UILabel inside it. This label will tell users what facial expression they are making.

Drag a UIView into the ARSCNView, and then set its constraints: Width to 240pt, Height to 120pt, and the Left and Bottom margins to 20pt.

For aesthetics, let’s set the view’s Alpha to 0.8. Now drag a UILabel into the view you just added, and pin all of its edges at 8pt.

Finally, set the label’s text alignment to centered. Once set up, your storyboard should look something like this:

Now, let’s set up IBOutlets connected to ViewController.swift. Switch to the Assistant Editor, then Control-drag the UIView and the UILabel into ViewController.swift to create the outlets.

You should end up with two outlets: faceLabel and labelView.
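For reference, here is a minimal sketch of how those declarations might look in ViewController.swift once the connections are made. The sceneView outlet is the one the template already provides; whether Xcode marks the new outlets as weak depends on how you drag them:

import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    // Outlet provided by the Augmented Reality App template
    @IBOutlet var sceneView: ARSCNView!

    // The two outlets we just connected: the rounded container view
    // and the label that will describe the detected facial expression
    @IBOutlet weak var labelView: UIView!
    @IBOutlet weak var faceLabel: UILabel!
}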

Create a Face Mesh

Since we chose the Augmented Reality App template, it contains some code we don’t need, so let’s clean it up first. Change the viewDidLoad method to look like this:

override func viewDidLoad() {
    super.viewDidLoad()
    // 1
    labelView.layer.cornerRadius = 10
    sceneView.delegate = self
    sceneView.showsStatistics = true
    // 2
    guard ARFaceTrackingConfiguration.isSupported else {
        fatalError("Face tracking is not supported on this device")
    }
}

The original template loads a 3D scene, but we don’t need it, so we delete that code. We can also delete the art.scnassets folder in the Project Navigator. Finally, we add two pieces of code to the viewDidLoad method:

  1. First, we round the corners of labelView. This is purely a design preference.
  2. Next, we check whether the device supports ARFaceTrackingConfiguration. This is the AR tracking configuration we use to create the face mesh. Without this check, the app would crash on an unsupported device with no explanation; with the guard in place, it stops with a clear error message instead.

Then we change one line of code inside the viewWillAppear method: change the configuration constant to ARFaceTrackingConfiguration(). Your code should look something like this:
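In case it helps, here is a minimal sketch of the updated method, assuming the rest of viewWillAppear is unchanged from the template:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Use face tracking instead of the template's world tracking configuration
    let configuration = ARFaceTrackingConfiguration()

    // Run the view's AR session
    sceneView.session.run(configuration)
}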

Next, we need to add an ARSCNViewDelegate method. Add the following code under the // MARK: - ARSCNViewDelegate comment:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
    let node = SCNNode(geometry: faceMesh)
    node.geometry?.firstMaterial?.fillMode = .lines
    return node
}

This code runs when the ARSCNView needs a node for the newly detected face anchor. First, we create a face geometry for sceneView and assign it to the constant faceMesh. We then attach that geometry to an SCNNode and set the node’s material. For 3D objects, the material is usually the color or texture of the object.

To render the face mesh, you can use either a fill material or a lines (wireframe) material. I prefer the wireframe look, so I set fillMode = .lines in the code, but you can pick whichever you like. Your code should now look like this:

If you run the App, you should see something like this:
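As an aside, if you would rather see a solid mask than a wireframe, here is a hedged variation of the renderer(_:nodeFor:) method above. The .fill mode and the transparency property are standard SceneKit material settings, but the white tint and 0.6 transparency are just illustrative choices:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
    let node = SCNNode(geometry: faceMesh)

    // Solid variant: fill the mesh instead of drawing wireframe lines,
    // and make it semi-transparent so the camera feed shows through
    node.geometry?.firstMaterial?.fillMode = .fill
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.white
    node.geometry?.firstMaterial?.transparency = 0.6

    return node
}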

Update the Face Mesh

You may notice that the face mesh doesn’t update with your facial expressions, such as blinking, smiling, or yawning. That’s because we still need to add a renderer(_:didUpdate:for:) method below renderer(_:nodeFor:).

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor, let faceGeometry = node.geometry as? ARSCNFaceGeometry {
        faceGeometry.update(from: faceAnchor.geometry)
    }
}

This code is executed every time sceneView updates. First, we define faceAnchor as the anchor of the face detected in sceneView; this anchor holds the pose, topology, and expression of the face detected by the face-tracking AR session. We also define a constant called faceGeometry, which is the topology of the detected face. Using these two constants, we update faceGeometry on every frame.

Run the code again, and the face mesh now updates at 60 fps whenever you change your facial expression.

Analyze Facial Expressions

First, create a variable at the top of the file:

var analysis = ""

Next, create the following function at the bottom of the file:

func expression(anchor: ARFaceAnchor) {
    // 1
    let smileLeft = anchor.blendShapes[.mouthSmileLeft]
    let smileRight = anchor.blendShapes[.mouthSmileRight]
    let cheekPuff = anchor.blendShapes[.cheekPuff]
    let tongue = anchor.blendShapes[.tongueOut]
    self.analysis = ""

    // 2
    if ((smileLeft?.decimalValue ?? 0.0) + (smileRight?.decimalValue ?? 0.0)) > 0.9 {
        self.analysis += "You are smiling. "
    }
    if (cheekPuff?.decimalValue ?? 0.0) > 0.1 {
        self.analysis += "Your cheeks are puffed. "
    }
    if (tongue?.decimalValue ?? 0.0) > 0.1 {
        self.analysis += "Don't stick your tongue out! "
    }
}

The above function takes an ARFaceAnchor as a parameter.

  1. blendShapes is a dictionary of named coefficients that describe the detected facial expression in terms of specific facial features. Apple provides more than 50 coefficients for detecting different expressions. For our needs, we only use four: mouthSmileLeft, mouthSmileRight, cheekPuff, and tongueOut.
  2. We use the coefficients to determine the probability that the face is performing each expression. To measure a smile, we add the probabilities for the left and right sides of the mouth. I found that a threshold of 0.9 for smiles and 0.1 for the cheeks and tongue works best.

Based on those probability values, we append the matching text to the analysis string.
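The same pattern works for any of the other coefficients. As a purely illustrative extension (the eyeBlinkLeft, eyeBlinkRight, and jawOpen keys are standard blend shape locations, but this helper and its 0.5 thresholds are my own addition, not part of the original project), you could detect blinking or an open mouth like this:

// Hypothetical helper, written in the same style as expression(anchor:).
// The 0.5 thresholds are rough guesses and may need tuning.
func extraExpressions(anchor: ARFaceAnchor) {
    let blinkLeft = anchor.blendShapes[.eyeBlinkLeft]
    let blinkRight = anchor.blendShapes[.eyeBlinkRight]
    let jawOpen = anchor.blendShapes[.jawOpen]

    if (blinkLeft?.decimalValue ?? 0.0) > 0.5 && (blinkRight?.decimalValue ?? 0.0) > 0.5 {
        self.analysis += "You are blinking. "
    }
    if (jawOpen?.decimalValue ?? 0.0) > 0.5 {
        self.analysis += "Your mouth is open. "
    }
}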

Now that we have the method set up, let’s update the renderer(_:didUpdate:for:) method!

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let faceAnchor = anchor as? ARFaceAnchor, let faceGeometry = node.geometry as? ARSCNFaceGeometry {
        faceGeometry.update(from: faceAnchor.geometry)
        expression(anchor: faceAnchor)
        DispatchQueue.main.async {
            self.faceLabel.text = self.analysis
        }
    }
}

Now, whenever sceneView updates, the expression method runs and sets the analysis string. Because the renderer callbacks run on a background thread, we hop back to the main queue with DispatchQueue.main.async before setting faceLabel’s text to the analysis string.

We’re done with all the code! Run the code and you should get what you saw at the beginning of this article.

Conclusion

There are many possibilities for face-based user experiences built with ARKit. Games and apps can use the TrueDepth camera for a variety of purposes. One of my favorite apps is Hawkeye Access, a browser that lets users control it with their eyes.

If you want to learn more about the TrueDepth camera, check out Apple’s official video, Face Tracking with ARKit. You can also download the project on GitHub.
