I had long been looking for an open-source scan-code project that could scan and parse QR codes quickly and with a high recognition rate, and modifying or extending ZXing myself was fairly difficult (my own lack of skill is to blame). Fortunately, after some pointers from more experienced developers, I came across Google's ML Kit project. Having actually tried it, it really is good, so I'm writing up this experience to share, and also so I can look back on it later. First, a screenshot of the effect after integration:

Compared with WeChat's scanner, I can't claim it's exactly the same; at best you could say the two are completely unrelated (a joke).

Even a mini QR code like this one can be parsed.

So, how do we do it?

First, let's take a look at the ML Kit project introduction: developers.google.cn/ml-kit

As you can see, besides code scanning, the project offers a variety of other good, free SDKs, such as text recognition and face recognition, which you can easily build on. But as a novice I have only tried the code scanning feature; I'll try the others one by one as my skills improve.

By the way, besides the Android side, the project also has an iOS side, so interested iOS developers can check it out too.

To use this project, the essential first step is to add the dependency:

dependencies {
    // ...
    // Use this dependency to bundle the model with your app
    implementation 'com.google.mlkit:barcode-scanning:16.1.1'
}

If Google Play services is available to you, you can instead reference the Play services version, which uses a dynamically downloaded model:

dependencies {
    // ...
    // Use this dependency to use the dynamically downloaded model in Google Play Services
    implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:16.1.4'
}
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="barcode" />
    <!-- To use multiple models: android:value="barcode,model2,model3" -->
</application>

CameraX support must be added as well

// Version number
def camerax_version = "1.0.0-rc03"
// Camera and Camera2 support
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"

The project uses a framework based on Jetpack Lifecycle, so lifecycle support is added

    implementation "androidx.camera:camera-lifecycle:${camerax_version}"

The preview uses the built-in PreviewView control from CameraX:

    implementation "androidx.camera:camera-view:1.0.0-alpha22"

Next comes the concrete implementation. First, since we use both the camera and the photo album, we naturally need to request the corresponding permissions:

ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA,Manifest.permission.WRITE_EXTERNAL_STORAGE),REQUEST_PERMISSION)
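The grant results come back in `onRequestPermissionsResult`, which the article doesn't show. Here is a minimal sketch of just the check logic as plain Kotlin (the helper name `allPermissionsGranted` is my own; `0` is the constant value of `PackageManager.PERMISSION_GRANTED`):

```kotlin
// Hypothetical helper for onRequestPermissionsResult: returns true only if
// every requested permission was granted. On Android,
// PackageManager.PERMISSION_GRANTED == 0 and PERMISSION_DENIED == -1.
fun allPermissionsGranted(grantResults: IntArray): Boolean =
    grantResults.isNotEmpty() && grantResults.all { it == 0 }
```

If this returns false, the activity should explain why the camera is needed, or simply finish.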

Let's take a look at the layout of the scan interface, which mainly includes a PreviewView for the preview and a ScanOverlay for drawing the scan line and scan results:

<androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        />
<com.hsmedia.mlkitdemo.ScanOverlay
        android:id="@+id/overlay"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintTop_toBottomOf="@id/iv_exit"
        app:layout_constraintBottom_toTopOf="@id/tv_tips"
        android:layout_marginBottom="20dp"
        android:layout_marginTop="20dp"
        />

Now, how do we start the CameraX preview?

CameraX comes with a listener that fires when the camera becomes available, so further setup can be done once it is ready:

cameraProviderFuture = ProcessCameraProvider.getInstance(this)
cameraProviderFuture.addListener(Runnable {
    val cameraProvider = cameraProviderFuture.get()
    bindScan(cameraProvider, overlay.width,overlay.height)
}, ContextCompat.getMainExecutor(this@BarcodeScanningActivity))

In addition, CameraX binds to a lifecycle owner and releases the camera according to that lifecycle, so you don't have to worry about forgetting to close the camera:

val preview : Preview = Preview.Builder().build()

// Bind preview
preview.setSurfaceProvider(previewView.surfaceProvider)

// Use the rear camera
val cameraSelector : CameraSelector = CameraSelector.Builder()
    .requireLensFacing(CameraSelector.LENS_FACING_BACK)
    .build()
// Bind the camera to the life cycle of the current control
camera = cameraProvider.bindToLifecycle(this as LifecycleOwner, cameraSelector, imageAnalysis, preview)

Now the preview is running. Notice that the binding above also passes an imageAnalysis use case, which is the component used for image analysis:

// Configure image scanning
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(width, height))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

This alone is not enough: we also need to configure an analyzer for the captured images, QRCodeAnalyser, which parses the barcode content. After all this talk, we finally get to use ML Kit's QR code parsing. Let's look at the concrete implementation of QRCodeAnalyser:

@SuppressLint("UnsafeExperimentalUsageError")
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image ?: kotlin.run {
        imageProxy.close()
        return
    }
    val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
    detector.process(image)
        .addOnSuccessListener { barCodes ->
            if (barCodes.size > 0) {
                listener.invoke(barCodes[0], imageProxy.width, imageProxy.height)
                // Once a result is received, stop parsing
                detector.close()
            }
        }
        .addOnFailureListener { Log.d(TAG, "Error: ${it.message}") }
        .addOnCompleteListener { imageProxy.close() }
}
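Stripped of the Android types, the pattern in `analyze` is a one-shot delivery: hand the first recognized result to the listener, then shut the detector down so later frames are ignored. A plain-Kotlin sketch of just that pattern (the `OneShot` class is my own illustration, not part of ML Kit):

```kotlin
// Illustration of the one-shot delivery pattern used in analyze():
// the first non-empty batch of results invokes the listener exactly once,
// and everything after that is dropped (mirroring detector.close()).
class OneShot<T>(private val listener: (T) -> Unit) {
    private var closed = false

    fun onResults(results: List<T>) {
        if (closed || results.isEmpty()) return
        listener(results.first())
        closed = true
    }
}
```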

This class implements ImageAnalysis.Analyzer and its analyze method. The imageProxy parameter carries the frame captured by CameraX; it is wrapped in an InputImage and handed to the detector for parsing. So where does the detector come from?

 // Set the current scan format
private val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(
        Barcode.FORMAT_QR_CODE,
        Barcode.FORMAT_AZTEC
    )
    .build()
// Get the parser
private val detector = BarcodeScanning.getClient(options)

The BarcodeScanning here is the barcode parsing component provided by ML Kit.

After parsing, perform subsequent operations on the code scanning result

// Bind the image analyzer
imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor(), QRCodeAnalyser { barcode, imageWidth, imageHeight ->
    // Unbind all current camera use cases
    cameraProvider.unbindAll()
    // Initialize the scaling factors
    initScale(imageWidth, imageHeight)
    barcode.boundingBox?.let {
        // The bounding rectangle of the scanned QR code
        overlay.addRect(translateRect(it))
        Log.i(TAG, "bindScan: left:${it.left} right:${it.right} top:${it.top} bottom:${it.bottom}")
    }
    Handler().postDelayed({
        // Return the result after a 1 s delay
        val intent = Intent()
        intent.putExtra(SCAN_RESULT, barcode.rawValue)
        setResult(Activity.RESULT_OK, intent)
    }, 1000)
})

Note that the area where the overlay is drawn on the screen is not the same as the area of the analyzed image, so the bounding rectangle returned for the QR code must be converted; otherwise the drawn result will end up in the wrong position.

First, the scaling factors are initialized from the size of the drawing area and the size of the analyzed image:

private fun initScale(imageWidth : Int, imageHeight : Int){
    if(isPortraitMode(this)){
        scaleY = overlay.height.toFloat() / imageWidth.toFloat()
        scaleX = overlay.width.toFloat() / imageHeight.toFloat()
    }else{
        scaleY = overlay.height.toFloat() / imageHeight.toFloat()
        scaleX = overlay.width.toFloat() / imageWidth.toFloat()
    }
}
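`isPortraitMode` is referenced above but not shown. To make the swap concrete, here is a self-contained plain-Kotlin sketch of the same scale computation (the names and the dimension-based orientation check are my simplifications; a real Android helper would typically read `Configuration.orientation`):

```kotlin
// Simplified stand-in for the scale setup: in portrait mode the analyzed
// image is rotated 90 degrees relative to the screen, so width and height
// swap roles in the two ratios.
data class Scale(val x: Float, val y: Float)

// Real Android code would check Configuration.orientation; comparing the
// overlay's own dimensions is a simplification for this sketch.
fun isPortraitMode(viewWidth: Int, viewHeight: Int): Boolean = viewHeight > viewWidth

fun computeScale(overlayW: Int, overlayH: Int, imageW: Int, imageH: Int): Scale =
    if (isPortraitMode(overlayW, overlayH))
        Scale(x = overlayW.toFloat() / imageH, y = overlayH.toFloat() / imageW)
    else
        Scale(x = overlayW.toFloat() / imageW, y = overlayH.toFloat() / imageH)
```

For example, a 1080×1920 overlay with a 640×480 analysis frame in portrait yields scaleX = 1080/480 = 2.25 and scaleY = 1920/640 = 3.0.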

It's important to note that in portrait mode the camera image is rotated 90 degrees relative to the screen, so the scale factors pair the height of the drawing area with the width of the image, and the width of the drawing area with the height of the image.

private fun translateX(x: Float): Float = x * scaleX
private fun translateY(y: Float): Float = y * scaleY

// Convert the scanned rectangle to the current screen size
private fun translateRect(rect: Rect) = RectF(
    translateX(rect.left.toFloat()),
    translateY(rect.top.toFloat()),
    translateX(rect.right.toFloat()),
    translateY(rect.bottom.toFloat())
)
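To see the conversion end to end with concrete (made-up) numbers, here is a self-contained sketch using a plain data class in place of `android.graphics.Rect`/`RectF`:

```kotlin
// Plain stand-in for RectF so the example runs outside Android.
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Map a bounding box from image coordinates to overlay coordinates,
// exactly as translateRect does with the scaleX/scaleY fields.
fun translateBox(b: Box, scaleX: Float, scaleY: Float) = Box(
    b.left * scaleX,
    b.top * scaleY,
    b.right * scaleX,
    b.bottom * scaleY
)
```

With scaleX = 2.25 and scaleY = 4.0, a 100×100 box at (50, 50) in image coordinates maps to (112.5, 200, 337.5, 600) on the overlay.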

The actual bounding rectangle of the QR code is then computed from these scale factors, and the final points are drawn.

As for the final scanning effect: in both scan speed and recognition rate, I think it beats my previous ZXing-based scanner. Interested readers can give it a try.

Demo link, extraction code: 8SN6

The demo has been uploaded to Baidu Cloud. I originally wanted to put it on GitHub, but my home network just wouldn't cooperate; I'll try again when I have time.