1. Animoji

Animoji is an animated emoji feature on the iPhone X, introduced with the 10th-anniversary special edition of the iPhone. It uses the facial recognition sensors to track changes in the user's facial expression and generates cute 3D animated emoji, which are currently only available in iMessage. But someone hacked Animoji and extracted the corresponding header files, so anyone can use it in their own product.


2. FaceTime

FaceTime is a video-calling app built into Apple's iOS and Mac OS X that connects two FaceTime-equipped devices over Wi-Fi or cellular data. In layman's terms, it is video calling.


While trying Animoji out, I thought it would be fun to combine it with live video chat, so I spent about half a day putting this together.


First, Animoji:

I found a library on GitHub that wraps the extracted Animoji headers: github.com/efremidze/A… After digging into it, I found it had no interface for capturing a single Animoji frame, so I extended the library to expose one that captures each frame. My fork is at github.com/notedit/Ani… , and the changes have also been submitted as a PR to the original project. The general idea of per-frame capture is sketched below.
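To get per-frame data, the simplest idea is to snapshot the Animoji view on every screen refresh and pass each result on as a CIImage. Here is a rough Swift sketch of that idea only: the AnimojiFrameGrabber class and onFrame callback are made up for illustration, the snapshot(with:) call is the one used later in this post, and the fork linked above exposes the actual capture interface.

import UIKit
import CoreImage

// Illustrative sketch: poll the Animoji view once per display refresh
// and hand each frame to a callback as a CIImage.
final class AnimojiFrameGrabber: NSObject {
    var onFrame: ((CIImage) -> Void)?
    private var displayLink: CADisplayLink?
    private let snapshot: () -> UIImage?   // e.g. { animoji.snapshot(with: animoji.frame.size) }

    init(snapshot: @escaping () -> UIImage?) {
        self.snapshot = snapshot
        super.init()
    }

    func start() {
        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick() {
        guard let image = snapshot(), let ciimage = CIImage(image: image) else { return }
        onFrame?(ciimage)
    }
}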


Frame data processing:

For video transmission later we need a CVPixelBuffer, but what we get from Animoji is only a CIImage. We also need to scale the data down once so the image does not get too large. This is mostly some CoreImage and CVPixelBuffer processing; straight to the code:

- (CVPixelBufferRef)processCIImage:(CIImage *)image {
    // `filter` and `ciContext` are instance variables configured elsewhere:
    // a CIFilter used to scale the frame down, and a CIContext used for rendering.
    @autoreleasepool {
        [filter setValue:image forKey:kCIInputImageKey];
        CIImage *outimage = [filter outputImage];
        NSLog(@"outimage widthxheight %dx%d", (int)outimage.extent.size.width, (int)outimage.extent.size.height);

        // Create a YUV pixel buffer the size of the scaled image.
        CVPixelBufferRef outPixelBuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                              (int)outimage.extent.size.width,
                                              (int)outimage.extent.size.height,
                                              kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                              (__bridge CFDictionaryRef)@{(__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{}},
                                              &outPixelBuffer);
        if (status != 0) {
            NSLog(@"CVPixelBufferCreate error %d", (int)status);
            if (outPixelBuffer) {
                CFRelease(outPixelBuffer);
            }
            return NULL;
        }

        // Render the scaled CIImage into the pixel buffer; the caller owns the +1 reference.
        [ciContext render:outimage toCVPixelBuffer:outPixelBuffer bounds:outimage.extent colorSpace:nil];
        return outPixelBuffer;
    }
}
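The setup of the filter and ciContext instance variables is not shown in the post. As an assumption only, they could be a Lanczos scale filter and a default CIContext; a minimal Swift sketch of that setup (the filter choice and the makeScaleFilter helper are mine, not from the original code):

import CoreImage

// Assumed setup, not from the post: a reusable CIContext plus a
// CILanczosScaleTransform filter that downscales each frame to a target width.
let ciContext = CIContext(options: nil)

func makeScaleFilter(for image: CIImage, targetWidth: CGFloat) -> CIFilter? {
    let filter = CIFilter(name: "CILanczosScaleTransform")
    filter?.setValue(targetWidth / image.extent.width, forKey: kCIInputScaleKey)
    filter?.setValue(1.0, forKey: kCIInputAspectRatioKey)
    return filter
}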

At this point you can happily get Animoji frame data.


Video call:

To keep things quick, I used our own audio and video calling SDK, dotEngine, which supports feeding in custom YUV data, so the frames from the previous step can be pushed straight into the SDK.

The general process is as follows:

dotEngine = DotEngine.sharedInstance(with: self)
localStream = DotStream(audio: true, video: true, videoProfile: DotEngineVideoProfile.DotEngine_VideoProfile_360P, delegate: self)
videoCapturer = DotVideoCapturer()
localStream.videoCaptuer = videoCapturer
// some other code

let image = self.animoji.snapshot(with: self.animoji.frame.size)
let ciimage = CIImage(image: image!)
let pixelBuffer = self.convert.processCIImage(ciimage)
if pixelBuffer != nil {
    self.videoCapturer?.send(pixelBuffer!.takeUnretainedValue(), rotation: VideoRotation.roation_0)
    pixelBuffer?.release()
}
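Putting the pieces together, each captured frame can be converted and pushed into the capturer. A small sketch of the glue, assuming the hypothetical grabber from the earlier snippet (the convert and send calls are the ones shown above):

grabber.onFrame = { [weak self] ciimage in
    guard let self = self else { return }
    let pixelBuffer = self.convert.processCIImage(ciimage)
    if pixelBuffer != nil {
        self.videoCapturer?.send(pixelBuffer!.takeUnretainedValue(), rotation: VideoRotation.roation_0)
        // Balance the +1 reference from CVPixelBufferCreate inside processCIImage.
        pixelBuffer?.release()
    }
}
grabber.start()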

The result in action looks like this:





The complete code is available at github.com/dotEngine/a…

Our audio and video communication SDK dotEngine: github.com/dotEngine