I. Introduction to AVFoundation

AVFoundation is Apple's framework for capturing, processing, and playing audio and video on iOS.

The most powerful feature of AVFoundation is photo and video capture. The short-video and live-streaming features found in many apps, for example, can be built on AVFoundation capture.

II. Common AVFoundation classes

2.1. Capture sessions

Capture sessions are managed by the AVCaptureSession class. A session works like a power strip: the various capture devices must be plugged into (associated with) the session.

2.2. Capture device

AVCaptureDevice provides access to the phone's hardware capture devices, such as the microphone, the front and rear cameras, and the flash.

2.3. Capture device input

The input of the device can be captured using AVCaptureDeviceInput.

In AVFoundation, a capture device cannot be added to a session directly; you must first wrap the capture device in an AVCaptureDeviceInput and add that to the session.

2.4. Capture device output

Where there is input, there is output. Since iOS 10.0, AVCapturePhotoOutput is used for still-image output and AVCaptureMovieFileOutput for movie-file output; AVCaptureAudioDataOutput, AVCaptureVideoDataOutput, and others provide access to raw sample buffers.

2.5. Capture connections

AVCaptureConnection represents the connection between an input and an output, established based on the type of media being captured.
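
Once outputs are attached to a session (as in section 3.1 below), the connection for a media type can be fetched from an output. A minimal sketch, assuming the movieOutput configured later in this article:

// Fetch the video connection from the movie file output and,
// where supported, fix its orientation
AVCaptureConnection *connection = [self.movieOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.isVideoOrientationSupported) {
    connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}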

2.6. Capture previews

AVCaptureVideoPreviewLayer is a CALayer subclass used to display the camera's real-time capture content on screen.
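
A minimal sketch of wiring a preview layer to a session, assuming a view controller holding the captureSession from section 3.1:

// Bind a preview layer to the session and fill the view with it
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];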

III. Simple use of AVFoundation

You need to declare the app's privacy usage for the camera, microphone, and photo library in Info.plist (NSCameraUsageDescription, NSMicrophoneUsageDescription, and a photo-library usage description such as NSPhotoLibraryAddUsageDescription).
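
Besides the Info.plist keys, it is worth requesting camera access explicitly before building the session. A minimal sketch using AVCaptureDevice's authorization API (the same call with AVMediaTypeAudio covers the microphone):

// Ask for camera permission up front
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Camera access was denied");
    }
}];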

3.1. Configuring sessions

  1. Create the session
  2. Set the resolution
  3. Create the capture device
  4. Wrap the capture device in a capture device input
  5. Add the capture device input to the session (check with canAddInput: before adding)
  6. Configure the capture device output
  7. Add the capture device output to the session (check with canAddOutput: before adding)
#pragma mark - Session setup
/// Configure the session
/// @param error Error out-parameter
- (BOOL)setupSession:(NSError **)error {
    /** Add the video input device */
    // Initialize the session
    self.captureSession = [[AVCaptureSession alloc] init];
    // Set the resolution
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    
    // Get the default video capture device: on iOS this is the rear camera
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    
    // Be sure to convert the capture device to AVCaptureDeviceInput
    // Note: to add a capture device to the session, it must be wrapped in an AVCaptureDeviceInput object
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    
    if (videoInput) {
        // The camera does not belong to any one app; it is a shared device, so check whether it can be added
        if ([self.captureSession canAddInput:videoInput]) {
            [self.captureSession addInput:videoInput];
            // There are front and rear cameras, so save the active input for switching later
            self.activeVideoInput = videoInput;
        }
    } else {
        return NO;
    }
    
    /** Add the audio input device */
    // Add the audio input device: the microphone
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
    if (audioInput) {
        if ([self.captureSession canAddInput:audioInput]) {
            [self.captureSession addInput:audioInput];
            // The only audio input is the microphone, so there is no need to save it
        }
    } else {
        return NO;
    }
    
    /** Set up the outputs (photo / movie file) */
    // Still images
    self.imageOutput = [[AVCapturePhotoOutput alloc] init];
    
    if ([self.captureSession canAddOutput:self.imageOutput]) {
        [self.captureSession addOutput:self.imageOutput];
    }

    // Movies: an AVCaptureMovieFileOutput instance writes QuickTime files
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];
    if ([self.captureSession canAddOutput:self.movieOutput]) {
        [self.captureSession addOutput:self.movieOutput];
    }
    
    // Video queue
    self.videoQueue = dispatch_queue_create("glen.videoQueue", NULL);
    
    return YES;
}


After the capture session is configured, external button taps tell AVFoundation to start or stop the session. Since startRunning and stopRunning are blocking calls, they are dispatched onto the background videoQueue.

// Start the capture session
- (void)startSession {
    if (![self.captureSession isRunning]) {
        dispatch_async(self.videoQueue, ^{
            [self.captureSession startRunning];
        });
    }
}

/// Stop the capture session
- (void)stopSession {
    if ([self.captureSession isRunning]) {
        dispatch_async(self.videoQueue, ^{
            [self.captureSession stopRunning];
        });
    }
}

3.2. Camera switching

Get the cameras available on the current device, and fetch the one at the requested position:

/// Find the camera at the specified position
/// @param position The desired camera position (front or back)
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    
    AVCaptureDeviceDiscoverySession *captureDeviceDiscoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                      mediaType:AVMediaTypeVideo
                                      position:AVCaptureDevicePositionUnspecified];
    // Get all matching devices
    NSArray *captureDevices = [captureDeviceDiscoverySession devices];
    // Walk through the devices
    for (AVCaptureDevice *device in captureDevices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}


Since there are multiple cameras, it is important to know which one is currently in use.

/// Get the currently active camera
- (AVCaptureDevice *)activeCamera {
    return self.activeVideoInput.device;
}

/// Get the other, inactive camera
- (AVCaptureDevice *)inactiveCamera {
    
    AVCaptureDevice *device = nil;
    if (self.cameraCount > 1) {
        
        if ([self activeCamera].position == AVCaptureDevicePositionBack) {
            // The back camera is active, so return the front one
            device = [self cameraWithPosition:AVCaptureDevicePositionFront];
        } else if ([self activeCamera].position == AVCaptureDevicePositionFront) {
            // The front camera is active, so return the back one
            device = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
    }
    return device;
}


Before switching, you must know whether the other camera is in a usable state.

/// Whether the camera can be switched
- (BOOL)canSwitchCameras {
    return self.cameraCount > 1;
}
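
The cameraCount property used above is not defined in the article's listings. A minimal sketch of one possible implementation, reusing the same discovery-session approach as cameraWithPosition: (a hypothetical helper, not part of the original code):

/// Number of available built-in cameras (hypothetical helper)
- (NSUInteger)cameraCount {
    AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                      mediaType:AVMediaTypeVideo
                                      position:AVCaptureDevicePositionUnspecified];
    return discoverySession.devices.count;
}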

The next step is to switch the camera

/// Switch the camera
- (BOOL)switchCameras {
    
    // Check whether the switch can be performed
    if (![self canSwitchCameras]) {
        return NO;
    }
    
    // Get the device opposite the current one (the inactive camera)
    AVCaptureDevice *device = [self inactiveCamera];
    
    // Add device to AVCaptureDeviceInput
    NSError *error;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    
    // Add to the session
    if (videoInput) {
        // Indicate that the original configuration is to be changed
        [self.captureSession beginConfiguration];
        
        // Remove the original input device
        [self.captureSession removeInput:self.activeVideoInput];
        
        // Check whether you can join
        if ([self.captureSession canAddInput:videoInput]) {
            [self.captureSession addInput:videoInput];
            // Active device updates
            self.activeVideoInput = videoInput;
        } else {
            // If the new device cannot be added, add the original video input device
            [self.captureSession addInput:self.activeVideoInput];
        }
        
        // Commit the configuration modification
        [self.captureSession commitConfiguration];
        
    } else {
        // Error: the device input could not be created
        return NO;
    }
    return YES;
}

3.3. Focus

  1. Get the currently active device
  2. Check whether the device supports focusing
  3. A device's configuration must not be changed by multiple objects at once, so lock the device
  4. Set the focus point, focus mode, and so on
  5. Unlock the device
/// Whether the currently active camera supports point-of-interest focusing
- (BOOL)cameraSupportsTapToFocus {
    return [[self activeCamera] isFocusPointOfInterestSupported];
}

// set the focus
- (void)focusAtPoint:(CGPoint)point {
    
    AVCaptureDevice *device = [self activeCamera];
    
    // Check whether the device supports point of interest focusing or whether it supports auto focusing
    if (device.isFocusPointOfInterestSupported && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        
        NSError *error;
        // Lock the device: its configuration must not be modified by multiple objects at once
        if ([device lockForConfiguration:&error]) {
            
            // Focus position
            device.focusPointOfInterest = point;
            
            // Auto focus mode
            device.focusMode = AVCaptureFocusModeAutoFocus;
            
            // Unlock the device after the modification is complete
            [device unlockForConfiguration];
        } else {
            // Device error
        }
    }
}
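
focusAtPoint: expects a point in the capture device's normalized coordinate space, from (0,0) to (1,1). A minimal sketch of feeding it from a tap gesture, assuming a previewLayer property holding the AVCaptureVideoPreviewLayer from section 2.6:

// Convert a tap on the preview layer into device coordinates, then focus there
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint layerPoint = [recognizer locationInView:recognizer.view];
    CGPoint devicePoint = [self.previewLayer captureDevicePointOfInterestForPoint:layerPoint];
    [self focusAtPoint:devicePoint];
}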

3.4. Exposure

  1. Get the currently active device
  2. Choose the exposure mode
  3. Check whether the device supports the chosen mode
  4. Lock the device
  5. Set the exposure point and exposure mode
  6. Check whether locked exposure is supported
    1. If so, observe the device's adjustingExposure property via KVO
    2. In the observer callback, get the device
    3. Check whether the device has finished adjusting exposure
    4. Remove the observer and set the device's exposure mode to locked
  7. Unlock the device
  8. Expose an interface for resetting focus and exposure
static const NSString *CameraAdjustingExposureContext;

// Whether the active camera supports point-of-interest exposure
- (BOOL)cameraSupportsTapToExpose {
    return [[self activeCamera] isExposurePointOfInterestSupported];
}

- (void)exposeAtPoint:(CGPoint)point {
    
    // Get active camera
    AVCaptureDevice *device = [self activeCamera];
    // Use continuous auto exposure
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    // Check that the active camera supports an exposure point of interest and continuous auto exposure
    if (device.isExposurePointOfInterestSupported && [device isExposureModeSupported:exposureMode]) {

        // Lock the device for configuration
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            // Device exposure point
            device.exposurePointOfInterest = point;
            // Set exposure mode
            device.exposureMode = exposureMode;
            
            // Whether lock exposure is supported
            if ([device isExposureModeSupported:AVCaptureExposureModeLocked]) {
                // Observe the device's adjustingExposure property via KVO
                [device addObserver:self forKeyPath:@"adjustingExposure" options:NSKeyValueObservingOptionNew context:&CameraAdjustingExposureContext];
            }
            
            // Unlock
            [device unlockForConfiguration];
        }
    }
}

/// Observer callback
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary<NSKeyValueChangeKey,id> *)change context:(void *)context {
    
    if (context == &CameraAdjustingExposureContext) {
        
        // Get the device
        AVCaptureDevice *device = (AVCaptureDevice *)object;
        // Check whether the device is no longer adjusting exposure, i.e. whether its exposureMode can now be set to AVCaptureExposureModeLocked
        if (!device.isAdjustingExposure && [device isExposureModeSupported:AVCaptureExposureModeLocked]) {
            
            // Exposure adjustment has finished, so remove the observer
            [object removeObserver:self forKeyPath:@"adjustingExposure" context:&CameraAdjustingExposureContext];

            // Set the locked exposure mode back on the main queue
            dispatch_async(dispatch_get_main_queue(), ^{
                if ([device lockForConfiguration:nil]) {
                    device.exposureMode = AVCaptureExposureModeLocked;
                    [device unlockForConfiguration];
                } else {
                    // Device error callback
                }
            });
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

// Reset focus & exposure
- (void)resetFocusAndExposureModes {

    AVCaptureDevice *device = [self activeCamera];

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    
    // Check that focus point of interest and continuous autofocus are supported
    BOOL canResetFocus = [device isFocusPointOfInterestSupported] && [device isFocusModeSupported:focusMode];
    
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    
    // Confirm that exposure can be reset
    BOOL canResetExposure = [device isExposurePointOfInterestSupported] && [device isExposureModeSupported:exposureMode];
    
    // To recap, the center point of the capture device space is (0.5, 0.5)
    CGPoint centPoint = CGPointMake(0.5f, 0.5f);
    
    NSError *error;
    
    // Lock the device and prepare for configuration
    if ([device lockForConfiguration:&error]) {
        
        // If the focus can be set, change it
        if (canResetFocus) {
            device.focusMode = focusMode;
            device.focusPointOfInterest = centPoint;
        }
        
        // If exposure is available, set it to the desired exposure mode
        if (canResetExposure) {
            device.exposureMode = exposureMode;
            device.exposurePointOfInterest = centPoint;
            
        }
        
        // Release the lock
        [device unlockForConfiguration];
        
    } else {
        // Device error callback
    }
}

3.5. Taking photos

  1. Expose a camera interface, captureStillImage
  2. Set the settings and delegate for the photo output
  3. Obtain the image data through the delegate callback
  4. Turn the image data into a UIImage
  5. Save the UIImage to the photo album via PHPhotoLibrary, and post a notification carrying the UIImage so the UI can show a thumbnail
#pragma mark - Take a photo
- (void)captureStillImage {

    // Capture the image in JPG format
    NSDictionary *setDic = @{AVVideoCodecKey:AVVideoCodecTypeJPEG};
    AVCapturePhotoSettings *outputSettings = [AVCapturePhotoSettings photoSettingsWithFormat:setDic];
    [self.imageOutput capturePhotoWithSettings:outputSettings delegate:self];

}

// Delegate method
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(nullable NSError *)error {

    // Image data
    NSData *imageData = photo.fileDataRepresentation;
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    
    // Write the image to the Library
    [self writeImageToAssetsLibrary:image];
}


/// Write the image to the photo album
/// @param image The image to save
- (void)writeImageToAssetsLibrary:(UIImage *)image {

    __block PHObjectPlaceholder *assetPlaceholder = nil;
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromImage:image];
        assetPlaceholder = changeRequest.placeholderForCreatedAsset;
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        NSLog(@"OK");
        
        dispatch_async(dispatch_get_main_queue(), ^{
            NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
            [nc postNotificationName:ThumbnailCreatedNotification object:image];
        });
    
    }];

}

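
Note that saving requires photo library authorization. A minimal sketch of requesting it with PHPhotoLibrary's standard API before the first save:

// Request photo library access before attempting to save
[PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
    if (status != PHAuthorizationStatusAuthorized) {
        NSLog(@"Photo library access was denied");
    }
}];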