A brief introduction to QR codes

A QR code (Quick Response Code) is a two-dimensional barcode that encodes data in patterns laid out along both the horizontal and vertical directions. This article focuses on reading QR codes; it does not cover generating them.

Reading QR codes

Introduction

Reading a QR code means obtaining the data it encodes by scanning the QR code image. Any barcode scanning is built on video capture, so the AVFoundation framework is needed.

Here is an overview of the AVFoundation library

The AVFoundation framework combines six major technology areas that together encompass a wide range of tasks for capturing, processing, synthesizing, controlling, importing and exporting audiovisual media on Apple platforms.


For reading QR codes, we mainly use the capture part of the framework, namely the AVCaptureSession class. Here is an overview.

You can see that this class inherits from NSObject; its main role is to manage capture activity and coordinate the flow of data from capture inputs to capture outputs.

Overview of the scanning process

Scanning a QR code is the process of capturing the QR code image from the camera (input) and parsing out its string content (output), which is mainly handled by an AVCaptureSession object.

The AVCaptureSession object coordinates the flow of data from inputs to outputs. You add the input and output to the session, then start or stop the flow by sending it startRunning and stopRunning messages. Finally, an AVCaptureVideoPreviewLayer object displays the captured video on the screen.

The input is usually an AVCaptureDeviceInput object, created from an instance of AVCaptureDevice. The output is usually an AVCaptureMetadataOutput object, which is the core of QR code reading: used together with the AVCaptureMetadataOutputObjectsDelegate protocol, it can capture any metadata found by the input device and convert it into a string.
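Before going through the steps one by one, here is a condensed sketch of the whole flow, assuming it runs inside a view controller that adopts AVCaptureMetadataOutputObjectsDelegate (names simplified, error handling omitted; each step is detailed in the sections below):

// Condensed sketch of the capture pipeline described above
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Input: the camera
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[session addInput:input];

// Output: metadata output that reports QR codes to its delegate
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[session addOutput:output];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];

// Preview layer that shows the camera feed on screen
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];

// Start the flow of data; send stopRunning when finished
[session startRunning];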

Let’s go through each step in detail along with the code.

Specific steps

1. Import AVFoundation framework

#import <AVFoundation/AVFoundation.h>

2. Check the camera permission

Since the camera is used while scanning a QR code, we need to declare the camera permission and check whether it has been granted.

Step 1: Set permissions

There are two ways to set permissions

  1. Add it directly to the info.plist file

If you then open the file as Source Code, you can see that two lines have been added automatically.

Therefore, we can also add the corresponding entry directly in the source code, which is the second method below.

  2. Add it to the Info.plist file as source code

<key>NSCameraUsageDescription</key>
<string>Obtain camera permission</string>

By analogy, other permissions that need to be requested, such as microphone and location access, can be declared in the same two ways.
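For example, assuming the app also needed microphone and location access, the corresponding usage-description entries in Info.plist would look roughly like this (the description strings are placeholders):

<key>NSCameraUsageDescription</key>
<string>Obtain camera permission</string>
<key>NSMicrophoneUsageDescription</key>
<string>Obtain microphone permission</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Obtain location permission</string>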

Step 2: Check the permission

The code is abbreviated here; the core part is as follows.

#pragma mark - Check permission
- (void)judgeAuthority {
    // Request camera permission
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (granted) {
                // Authorized: call the QR code scanning method
            } else {
                // Not authorized: show an alert
            }
        });
    }];
}

The core of this is the method + (void)requestAccessForMediaType:(AVMediaType)mediaType completionHandler:(void (^)(BOOL granted))handler; which requests access and takes two parameters:

  • The first parameter is the AVMediaType (the media type)

There are several media types available; for scanning we use AVMediaTypeVideo (others include, for example, AVMediaTypeAudio).

  • The second argument is a completion block, in which you write the code that handles the result

For the alert, we use UIAlertController.

  1. Create a UIAlertController object

NSString *title = @"Please allow the app to access your camera in Settings > Privacy > Camera on your iPhone";
UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Prompt"
                                                               message:title
                                                        preferredStyle:UIAlertControllerStyleAlert];
  2. Create the UIAlertAction objects, that is, the buttons

UIAlertAction *conform = [UIAlertAction actionWithTitle:@"OK"
                                                  style:UIAlertActionStyleDefault
                                                handler:^(UIAlertAction * _Nonnull action) {
    NSLog(@"Tapped OK");
}];
UIAlertAction *cancel = [UIAlertAction actionWithTitle:@"Cancel"
                                                 style:UIAlertActionStyleDefault
                                               handler:^(UIAlertAction * _Nonnull action) {
    NSLog(@"Tapped Cancel");
}];
  3. Add the buttons to the alert
[alert addAction:conform];
[alert addAction:cancel];
  4. Present the alert
[self presentViewController:alert animated:YES completion:nil];

The result looks like this: first, the system automatically shows its own prompt requesting camera permission.

If the user taps “Don’t Allow”, the app cannot use the camera, and our alert is shown instead.

3. Create an AVCaptureSession object

@property (nonatomic, strong) AVCaptureSession *captureSession;

_captureSession = [[AVCaptureSession alloc]init];

4. Add input and output to the AVCaptureSession object

//1. Initialize the capture device (the camera)
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//2. Create the input from the device instance
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
//3. Create the metadata output
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
//4. Add the input and output to the session
[_captureSession addInput:deviceInput];
[_captureSession addOutput:metadataOutput];

5. Configure the AVCaptureMetadataOutput object

The first step is to set the delegate, then the metadata type.

//1. Set the delegate and the dispatch queue on which its callbacks are delivered
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
//2. Set the metadata type. Since we are scanning QR codes, the type is AVMetadataObjectTypeQRCode. Note that an array must be passed in.
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];

6. Create and configure an AVCaptureVideoPreviewLayer object to display the captured video

@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer; // preview layer

//1. Instantiate the preview layer with the session
_videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
//2. Set the preview layer's fill mode
[_videoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//3. Set the layer's frame
[_videoPreviewLayer setFrame:_viewPreview.layer.bounds];
//4. Add the preview layer to the preview view's layer
[_viewPreview.layer addSublayer:_videoPreviewLayer];
//5. Set the scan area; the values here are relative (normalized) positions
metadataOutput.rectOfInterest = CGRectMake(0.2f, 0.2f, 0.8f, 0.8f);

This uses the rectOfInterest property, which restricts the area in which metadata objects are searched for; the rectangle's origin is at the upper-left corner.
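Note that rectOfInterest is expressed in normalized coordinates (values from 0 to 1), not in view points. If you want to derive it from an on-screen rect, AVCaptureVideoPreviewLayer offers a conversion method; a small sketch, where scanRectInView is an assumed rect for the scan box in the preview layer's coordinates:

// Convert a rect in the preview layer's coordinate space into the normalized
// rectOfInterest expected by the metadata output (scanRectInView is an assumed value).
// The conversion relies on the capture connection, so it is typically done after startRunning.
CGRect scanRectInView = CGRectMake(60, 150, 250, 250);
metadataOutput.rectOfInterest = [_videoPreviewLayer metadataOutputRectOfInterestForRect:scanRectInView];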

7. Implement the delegate method

#pragma mark -- AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // If we are not currently reading, ignore the callback
    if (!_isReading) {
        return;
    }
    // If any metadata objects were found, take the first one and read its string value
    if (metadataObjects.count > 0) {
        _isReading = NO;
        AVMetadataMachineReadableCodeObject *metadataObject = metadataObjects[0];
        NSString *result = metadataObject.stringValue;
        if (self.resultBlock) {
            self.resultBlock(result);
        }
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [self.navigationController popViewControllerAnimated:YES];
        });
    }
}
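One more note: the delegate method above is only called while the session is running. A minimal start helper might look like the sketch below (the method name startReading is only an example; the _isReading flag follows the naming used above):

#pragma mark -- Start scanning (sketch; startReading is an assumed helper name)
- (void)startReading {
    _isReading = YES;
    if (!self.captureSession.isRunning) {
        // Without startRunning, didOutputMetadataObjects will never be delivered
        [self.captureSession startRunning];
    }
}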

Problems and solutions encountered during development

Some of these, such as permissions, were already covered above; here are a few more points.

1. The app launches to a black screen

Add the UIWindow property to AppDelegate.h

@property (nonatomic, strong)UIWindow *window;
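Declaring the property alone is usually not enough; the window also has to be created and made visible in application:didFinishLaunchingWithOptions:. A minimal sketch, where FirstViewController stands in for whatever root controller the project uses:

// AppDelegate.m (sketch; FirstViewController is a placeholder for your root controller)
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    self.window = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
    self.window.rootViewController = [[FirstViewController alloc] init];
    [self.window makeKeyAndVisible];
    return YES;
}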

2. The title is not displayed in the navigation bar

The title property belongs to UIViewController; setting it on the UINavigationController itself does not change what the navigation bar shows.

Because UINavigationController is a container, it needs at least one root view controller.

So set the title in the root view controller's viewDidLoad, not in a subclass of UINavigationController.

// self.navigationController.title = @"Scan";   // wrong
self.title = @"Scan";                            // change it to self
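Concretely, the setup might look like the sketch below (FirstViewController is only a placeholder name): the navigation controller wraps a root view controller, and the title is set inside that controller.

// Give the UINavigationController a root view controller (sketch; names are placeholders)
FirstViewController *rootVC = [[FirstViewController alloc] init];
UINavigationController *nav = [[UINavigationController alloc] initWithRootViewController:rootVC];
self.window.rootViewController = nav;
// Then set self.title in FirstViewController's viewDidLoad, as shown above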

3. The button is not centered

// The button ends up offset to the right horizontally (and only roughly "centered" vertically),
// because the frame origin, not the center, is placed at the view's midpoint
btn.frame = CGRectMake(self.view.bounds.size.width / 2.0, self.view.bounds.size.height / 2.0, 80, 40);
// Partial fix: subtract half the button width (40) horizontally (the vertical offset is still not handled)
btn.frame = CGRectMake(self.view.bounds.size.width / 2.0 - 40, self.view.bounds.size.height / 2.0, 80, 40);

A better way, though, is to simply set center:

// First set the size
btn.frame = CGRectMake(0, 0, 80, 40);
// Then set the center, i.e. the coordinates
btn.center = self.view.center;

4. Passing a value back with a block property

Here is a brief walkthrough of the code for passing a value back through a block property.

[btn addTarget:self action:@selector(jumpToScanVC) forControlEvents:UIControlEventTouchUpInside];

- (void)jumpToScanVC {
    SecondViewController *secondVC = [[SecondViewController alloc] init];
    secondVC.secondBlock = ^(NSString * _Nonnull string) {
        self.label.text = string;
    };
    [self.navigationController pushViewController:secondVC animated:NO];
}
// SecondViewController.h: define the block property
@interface SecondViewController : UIViewController
@property (nonatomic, copy) void (^secondBlock)(NSString *string);
@end
// SecondViewController.m

-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    if (_secondBlock) {
        _secondBlock(@"hahaha");
    }
    [self.navigationController popViewControllerAnimated:NO];
}

You can see that the block is invoked in the second controller, passing the value back to the first controller.

5. Usage of UIAlertController

After scanning, the result is shown in an alert, but the controller fails to pop, and the following error is printed:

popViewControllerAnimated: called on <UINavigationController 0x101827e00> while an existing transition or presentation is occurring; the navigation stack will not be updated.

The problem is that the UIAlertController's presentation animation conflicts with the view controller's pop animation.

But in fact our navigation stack has already popped; it just cannot show the animation.

Therefore, we can delay the pop so that the alert's animation finishes first and the pop animation runs afterwards.

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    [self.navigationController popViewControllerAnimated:YES];
});

6. After tapping Scan, the page appears but the scan box does not, and the controller freezes

The reason was that the UI update was not performed on the main thread, which caused the freeze.
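In other words, any UI work triggered from a background queue (for example, from a capture or completion callback) should be dispatched back to the main queue. A minimal sketch, where setupScanBox is a hypothetical method that builds the scan box UI:

// Hop back to the main queue before touching the UI
// (setupScanBox is a hypothetical method name)
dispatch_async(dispatch_get_main_queue(), ^{
    [self setupScanBox];
});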

7. Preventing NSTimer from keeping the controller from being released

Because a scan line is added (purely decorative, to look more professional) and moved with a timer, note that the timer can keep memory from being released, so the NSTimer object must be invalidated and set to nil beforehand.
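For reference, a minimal sketch of how such a scan-line timer might be created (the _timer and _scanLine properties, the moveScanLine method, and the 200-point travel range are all assumptions):

// Move a decorative scan line with a repeating timer.
// Note: scheduledTimerWithTimeInterval:target:... retains its target,
// which is exactly why the timer must be invalidated later.
_timer = [NSTimer scheduledTimerWithTimeInterval:0.02
                                          target:self
                                        selector:@selector(moveScanLine)
                                        userInfo:nil
                                         repeats:YES];

- (void)moveScanLine {
    CGRect frame = _scanLine.frame;
    frame.origin.y += 2;          // move down a little on each tick
    if (frame.origin.y > 200) {   // bottom of the scan box (assumed value)
        frame.origin.y = 0;       // jump back to the top
    }
    _scanLine.frame = frame;
}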

#pragma mark -- Stop scanning
- (void)stopRunning {
    // If the timer is still valid, invalidate it and set it to nil
    if ([_timer isValid]) {
        [_timer invalidate];
        _timer = nil;
    }
    [self.captureSession stopRunning];
    ...
}

Demo of the running interface

QR code image


Reference blogs

iOS: reading QR codes with AVFoundation

iOS: implementing QR code scanning with AVCaptureSession