Humanity is entering the era of artificial intelligence. As ordinary programmers, we should go beyond everyday CRUD work and get our hands on new technologies. In this article, I will walk you through a small demo that uses SwiftUI and Core ML to build an intelligent object-recognition app.

Everyone can learn artificial intelligence

I originally wanted to write about machine learning itself, but the topic can be intimidating. Rest assured: this article contains no formulas and no esoteric theory. Instead, a real, small example will show you how to build an intelligent application.

My goal is to build the whole intelligent recognition app in under 300 lines of code.

A first look before we start

Let's take a look at the final result before we start building.

As you can see, we create a scroll view that lists the photos to be recognized. I picked some photos of cows, cats, dogs, and mountains to test how well the app recognizes them.

To forge good iron, you have to do the hammering yourself

Next, let’s implement the app step by step!

Step 1: Let’s start by creating a scroll view that lets the user select photos to recognize.

1. Define an array to store the names of the photos to be recognized.

// Define an array to store the names of the photos to be recognized
let images = ["niu", "cat1", "dog", "tree", "mountains"]

2. Create a horizontal scroll view that lists the photos; tapping a photo selects it and highlights it with an orange border.


// Horizontal scroll view: tap a photo to select it; the selected one gets an orange border
VStack {
    ScrollView([.horizontal]) {
        HStack {
            ForEach(self.images, id: \.self) { name in
                Image(name)
                    .resizable()
                    .frame(width: 300, height: 300)
                    .padding()
                    .onTapGesture {
                        self.selectedImage = name
                    }
                    .border(Color.orange, width: self.selectedImage == name ? 10 : 0)
            }
        }
    }
}
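Note that this snippet relies on a piece of selection state that belongs to the view; a minimal declaration would look like the line below (it appears again in the full page in Step 2).

// The currently selected photo name; tapping an image updates it,
// and the orange border is drawn when the names match
@State private var selectedImage = ""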

Step 2: Put together the complete ContentView page.

import SwiftUI

struct ContentView: View {
    
    let images = ["niu", "cat1", "dog", "tree", "mountains"]
    @State private var selectedImage = ""
    
    @ObservedObject private var imageDetectionVM: ImageDetectionViewModel
    private var imageDetectionManager: ImageDetectionManager
    
    init() {
        self.imageDetectionManager = ImageDetectionManager()
        self.imageDetectionVM = ImageDetectionViewModel(manager: self.imageDetectionManager)
    }
    
    var body: some View {
        NavigationView {
            VStack{
                HStack{
                    Text("Identification Result :")
                        .font(.system(size: 26))
                        .padding()
                    
                    Text(self.imageDetectionVM.predictionLabel)
                        .font(.system(size: 26))
                }
                
                VStack {
                    ScrollView([.horizontal]) {
                        HStack {
                            ForEach(self.images, id: \.self) { name in
                                Image(name)
                                    .resizable()
                                    .frame(width: 300, height: 300)
                                    .padding()
                                    .onTapGesture {
                                        self.selectedImage = name
                                    }
                                    .border(Color.orange, width: self.selectedImage == name ? 10 : 0)
                            }
                        }
                    }
                    
                    Button("Intelligent identification") { self.imageDetectionVM.detect(self.selectedImage) }.padding() .background(Color.orange) .foregroundColor(Color.white)  .cornerRadius(10) .padding() Text(self.imageDetectionVM.predictionLabel) .font(.system(size: 26)) } } .navigationBarTitle("Core ML")}}}Copy the code

Step 3: Following the MVVM pattern, build the view model for photo recognition.

import Foundation
import SwiftUI
import Combine

class ImageDetectionViewModel: ObservableObject {
    
    var name: String = ""
    var manager: ImageDetectionManager
    
    @Published var predictionLabel: String = ""
    
    init(manager: ImageDetectionManager) {
        self.manager = manager
    }
    
    func detect(_ name: String) {
        
        let sourceImage = UIImage(named: name)
        
        // Resnet50 expects a 224 x 224 input, so resize the image first
        guard let resizedImage = sourceImage?.resizeImage(targetSize: CGSize(width: 224, height: 224)) else {
            fatalError("Unable to resize the image!")
        }
        
        if let label = self.manager.detect(resizedImage) {
            self.predictionLabel = label
        }
      
    }
    
}
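The view model calls a resizeImage(targetSize:) helper on UIImage. This is not a UIKit API and its implementation is not shown in the listings above; the downloadable project may implement it differently. Here is a minimal sketch of how such an extension could look:

import UIKit

extension UIImage {
    // Scales the image to the given size; Core ML image classifiers such as
    // Resnet50 expect a fixed 224 x 224 input
    func resizeImage(targetSize: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}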

Step 4: Implement the recognition business logic.

import Foundation
import CoreML
import UIKit

class ImageDetectionManager {
    
    // Xcode generates the Resnet50 class when Resnet50.mlmodel is added to the project
    let model = Resnet50()
    
    func detect(_ img: UIImage) -> String? {
        
        // Convert the image to a CVPixelBuffer and run it through the model
        guard let pixelBuffer = img.toCVPixelBuffer(),
            let prediction = try? model.prediction(image: pixelBuffer) else {
                return nil
        }
        
        return prediction.classLabel
        
    }
    
}

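Two things are assumed here but not shown in the article. First, Resnet50() only compiles after you download the Resnet50 Core ML model from Apple's model gallery and add Resnet50.mlmodel to the Xcode project; Xcode then generates the Resnet50 class with its prediction(image:) method. Second, toCVPixelBuffer() is a custom UIImage extension, not a system API. A minimal sketch of one common way to write it:

import UIKit
import CoreVideo

extension UIImage {
    // Renders the image into a 32ARGB CVPixelBuffer, the input type the
    // generated Resnet50 class expects for its "image" feature
    func toCVPixelBuffer() -> CVPixelBuffer? {
        let width = Int(size.width)
        let height = Int(size.height)
        let attributes: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32ARGB,
                                         attributes as CFDictionary, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
        
        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
        
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        
        // Flip the coordinate system so UIKit drawing comes out right side up
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1.0, y: -1.0)
        
        UIGraphicsPushContext(context)
        draw(in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))
        UIGraphicsPopContext()
        
        return buffer
    }
}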

Complete code for the iOS artificial intelligence project

Download address: www.jianshu.com/p/f7cbf5e72…

For more SwiftUI tutorials and code, follow my column.

QQ: 3365059189, SwiftUI technical exchange QQ group: 518696470

  • Please follow my column icloudend for SwiftUI tutorials and source code: www.jianshu.com/c/7b3e3b671…