An overview of the ArcSoft face recognition C# Demo

Whether you have noticed it or not, face recognition has reached into every corner of life. Internet companies now support face-based clock-in, railway stations have added self-service channels for real-name identity verification, and that is before we even mention the "smart city" and city-brain construction projects under way everywhere. In industry, face recognition is usually delivered by a technology provider and then integrated by application developers, because building face recognition from scratch requires deep specialist knowledge and mathematical algorithms. For most enterprises, adopting the existing face recognition technology of an AI (artificial intelligence) engine company is the more practical solution. ArcSoft (Hongsoft) opened version 1.0 of its face recognition platform in 2017 and, after three years of iteration, has released version 2.2, which is primarily offline, free, and suitable for a wide range of scenarios. To make integration easier, ArcSoft officially provides Demo programs in several languages. Since ArcSoft does not ship a C# version of the SDK, the official C# Demo is an especially valuable reference.

The ArcSoft Demo can be downloaded from github.com/ArcsoftEscE… I suggest downloading it before you start.

What is face recognition

Face recognition is a biometric identification technology based on facial feature information. A camera or webcam collects images or video streams containing faces; faces are automatically detected and tracked in the image, and recognition is then performed on the detected faces. This family of related technologies is commonly called portrait recognition or face recognition. The process can be summarized as three steps: detect the face frame -> extract the face feature information -> match the features against a face library.
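Those three steps map directly onto the helper methods analyzed later in this article. The following is only a rough sketch using the Demo's method names, with engine handles and features assumed to be already prepared; it is not the exact call sequence:

```csharp
// Hedged sketch of the three-step pipeline, using names from the ArcSoft C# Demo.
// pImageEngine, image and storedFeature are assumed to be set up elsewhere.

// 1. Detect the face frame(s)
ASF_MultiFaceInfo faces = FaceUtil.DetectFace(pImageEngine, image);

// 2. Extract the face feature information
ASF_SingleFaceInfo single;
IntPtr feature = FaceUtil.ExtractFeature(pImageEngine, image, out single);

// 3. Match the feature against a stored feature from the face library
float similarity = 0f;
ASFFunctions.ASFFaceFeatureCompare(pImageEngine, feature, storedFeature, ref similarity);
```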

Application scenarios of face recognition

Face recognition is mainly used for identity recognition. With the rapid spread of video surveillance, many surveillance applications urgently need a fast identification technology that works at long range and without user cooperation, so that identities can be confirmed quickly from a distance and intelligent early warning becomes possible. Face recognition is undoubtedly the best fit. Using fast face detection, faces can be found in surveillance video in real time and compared against a face library on the fly, achieving rapid identity recognition. In daily life it is already widely used, from the most common face-based access control to real-name security checks, face check-in at scenic spots, companies, or schools, unmanned supermarkets, and so on.

What is liveness detection

Liveness detection, as the name suggests, identifies biological signals on a living subject in order to distinguish real biometrics from forgeries made of photographs, silicone, plastic, and other inanimate materials. In face recognition applications, liveness detection judges whether the face image collected by the system comes from a real face, preventing photos, videos, and other forged face images from entering the system and causing misjudgment. Liveness detection is of vital importance for face recognition in unattended scenarios.

ArcSoft face recognition SDK

There are many face recognition solutions today. By network dependence they can be divided into online and offline; by deployment they can be divided into local recognition and big-data server recognition. What ArcSoft offers is an offline recognition SDK built on a local algorithm: its core algorithm is written in C, with offline support provided across platforms.

ArcSoft visual open platform

The ArcSoft face recognition SDK is provided through the visual open platform and includes the functional components most commonly used in face recognition scenarios: face detection, face recognition, age and gender detection, and liveness detection. Face detection is algorithmically optimized separately for static and dynamic scenarios; the derived gender and age detection broadens the scenarios in which face recognition can be used; and the liveness detection component effectively secures face recognition applications. Visit ai.arcsoft.com.cn/third/mobil… and follow the instructions on the site to register an account and download the SDK package.

ArcSoft face recognition Demo introduction

Unlike the many services that expose RESTful interfaces, the ArcSoft SDK does not use the common HTTP-based approach, and it provides only a C-language SDK with no C# package, so accessing it from C# takes some work. When the SDK was first released, a number of community experts wrote their own access Demos; later ArcSoft published an official Demo, and from its first version in January 2018 to the current version 2.2 (tracking the SDK), its code structure and comments have become much clearer.

Demo Effect Display

The Demo is a standard C# WinForms project; once downloaded from GitHub it can be opened directly in Visual Studio. Inside you will find a readme.md file, which is very important, so be sure to read it carefully before you start. Here is a summary of the main points.

  1. Download ArcFace SDK of Win32/Win64. It is recommended to download version 2.2.
  2. Place the APPID and KEY generated during download in the corresponding position in the app.config file.
  3. Decompress the downloaded package and copy the DLLs into the directory for the corresponding platform.

If all the steps above are done, the program should run normally. If anything goes wrong along the way, refer to the readme for troubleshooting.
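As a hedged illustration of step 2, the app.config entries look roughly like the fragment below. The key names and values here are placeholders for illustration only; the authoritative names are whatever the Demo's own app.config defines:

```xml
<configuration>
  <appSettings>
    <!-- Placeholders: paste the APPID and SDK keys generated when you downloaded the SDK -->
    <add key="APPID" value="your-app-id" />
    <add key="SDKKEY32" value="your-32-bit-sdk-key" />
    <add key="SDKKEY64" value="your-64-bit-sdk-key" />
  </appSettings>
</configuration>
```

Separate 32-bit and 64-bit keys are needed because, as we will see later, activation passes a different key depending on the process bitness.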

Once everything is in order, running the program brings up the main window. Find a few celebrity photos online to register and compare. As shown below:

As you can see, the ArcSoft Demo correctly identifies the face information.

The Demo also provides a liveness detection function. If your machine has no built-in camera, you can plug in a USB camera, click "enable camera", and turn it on.

If we point it at our own face, it shows "RGB live"; if we try to pass off a photo or a video, it shows "RGB fake".

Face recognition Demo code analysis

Next, let's open the project view and analyze the code structure and main flow of the ArcSoft face recognition Demo from the code's point of view.

As you can see from the figure above, the code structure is very clear.

Directory descriptions:

  Entity — entity classes used by the Demo
  lib — third-party libraries, mainly used to grab video frame content
  SDKModels — SDK data model classes that interact with the SDK; ordinary use does not need to touch them
  SDKUtils — the C# wrappers for the SDK functions; the second-layer wrappers in Utils are recommended instead
  Utils — utility classes that simplify the complex SDK operations and can be used directly in projects

All the UI logic is in FaceForm.cs. Opening the code view, each region of the code is clearly structured.

Parameter definitions

The parameter definition region defines the parameters, each with a corresponding comment; the ones we need to pay attention to are the image size and the similarity threshold.

private long maxSize = 1024 * 1024 * 2;

This parameter defines the maximum image size that can be recognized and can be adjusted as needed.

private float threshold = 0.8f;

This parameter defines the similarity threshold, that is, how similar two faces must be before we consider them the same person.

Engine initialization

An important method in the initialization section, InitEngines(), is used to initialize the face recognition engine.

This part of the code first gets the configuration file information, then reads it, activates the engine, and pops up a message if an error occurs.

It is important to note that because a C# program can run as either a 32-bit or a 64-bit process, and the 32-bit and 64-bit versions of the ArcSoft SDK use different DLLs, we must determine at runtime which mode we are in.

var is64CPU = Environment.Is64BitProcess;

After determining the CPU, try to load the corresponding DLL and invoke the activation procedure.

int retCode = 0;
try
{
    retCode = ASFFunctions.ASFActivation(appId, is64CPU ? sdkKey64 : sdkKey32);
}
catch (Exception ex)
{
    // Disable the related function buttons
    ControlsEnable(false, chooseMultiImgBtn, matchBtn, btnClearFaceList, chooseImgBtn);
    if (ex.Message.Contains("Cannot load DLL"))
    {
        MessageBox.Show("Please put the SDK-related DLLs in the x86 or x64 folder under bin!");
    }
    else
    {
        MessageBox.Show("Engine activation failed!");
    }
    return;
}

The ArcSoft SDK must be activated before use. During activation, make sure the device can connect to the Internet; activation fails otherwise.

The rest of the code configures the engine's features; in most cases the default settings are fine. If you need to adjust them, focus on the following parameters.

// The scale of the smallest detectable face relative to the image
int detectFaceScaleVal = 16;
// The maximum number of faces to detect
int detectFaceMaxNum = 5;

detectFaceScaleVal describes the proportion of a face within the image; simply put, the larger the value, the smaller the face that can be detected. detectFaceMaxNum is the maximum number of faces to detect; the more faces detected, the more memory the program uses.

The next parameter, combinedMask, defines the engine's capabilities. It is recommended to leave everything on by default, or to enable only the necessary functions if performance matters.

// The detection capabilities combined at engine initialization
int combinedMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION | FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER | FaceEngineMask.ASF_FACE3DANGLE;
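For comparison, when performance matters a leaner mask keeps only the essentials. The line below is a sketch, but the combination itself is the same minimal one the Demo uses when initializing its video engine:

```csharp
// Minimal capability mask: face detection plus recognition only.
int minimalMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION;
```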

Calling ASFFunctions.ASFInitEngine initializes the engine:

      retCode = ASFFunctions.ASFInitEngine(detectMode, imageDetectFaceOrientPriority,
 detectFaceScaleVal, detectFaceMaxNum, combinedMask, ref pImageEngine);

If the return value of retCode is 0, the initialization is successful.

The other engines are initialized in the same way, including the video-mode face detection engine, the RGB-dedicated FR engine, and the IR-dedicated engine; they differ only in their parameters, which in actual use we can fine-tune as needed.

Other similar operations can be examined in the same way. Because the ArcSoft Demo encapsulates the operations thoroughly, the code shown in FaceForm.cs is mostly control-interaction code and not worth analyzing line by line; the real implementation of the core functions lives in the FaceUtil class.

Face detection

There are two ways to obtain face information: detection from a photo and detection from a video. Let's look at detection from a photo first.

public static ASF_MultiFaceInfo DetectFace(IntPtr pEngine, Image image)
{
    lock (locks)
    {
        ASF_MultiFaceInfo multiFaceInfo = new ASF_MultiFaceInfo();
        if (image != null)
        {
            // If the photo is too large, scale it down (and align the width)
            if (image.Width > 1536 || image.Height > 1536)
            {
                image = ImageUtil.ScaleImage(image, 1536, 1536);
            }
            else
            {
                // If the photo size is normal, just align it
                image = ImageUtil.ScaleImage(image, image.Width, image.Height);
            }
            if (image == null)
            {
                return multiFaceInfo;
            }
            ImageInfo imageInfo = ImageUtil.ReadBMP(image);
            if (imageInfo == null)
            {
                return multiFaceInfo;
            }
            // Detect the faces
            multiFaceInfo = DetectFace(pEngine, imageInfo);
            // Free the unmanaged image buffer
            MemoryUtil.Free(imageInfo.imgData);
            return multiFaceInfo;
        }
        else
        {
            return multiFaceInfo;
        }
    }
}

Note the two important methods in the code above, ScaleImage and ReadBMP. ScaleImage processes the image into the format recommended by the ArcSoft face engine, which requires the image width to be an integer multiple of 4.

public static ImageInfo ReadBMP(Image image)
{
    ImageInfo imageInfo = new ImageInfo();
    Image<Bgr, byte> my_Image = null;
    try
    {
        // Convert to an Emgu CV image to obtain the raw BGR24 bytes
        my_Image = new Image<Bgr, byte>(new Bitmap(image));
        imageInfo.format = ASF_ImagePixelFormat.ASVL_PAF_RGB24_B8G8R8;
        imageInfo.width = my_Image.Width;
        imageInfo.height = my_Image.Height;
        imageInfo.imgData = MemoryUtil.Malloc(my_Image.Bytes.Length);
        MemoryUtil.Copy(my_Image.Bytes, 0, imageInfo.imgData, my_Image.Bytes.Length);
        return imageInfo;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    finally
    {
        if (my_Image != null)
        {
            my_Image.Dispose();
        }
    }
    return null;
}

Note that this method calls MemoryUtil.Malloc to allocate unmanaged memory, which must be released afterwards with MemoryUtil.Free().

The detection result is returned as an ASF_MultiFaceInfo structure, in which faceRects is the set of face rectangles and faceNum is the number of faces. The position of a recognized face can be obtained with the following code:

MRECT rect = MemoryUtil.PtrToStructure<MRECT>(multiFaceInfo.faceRects);
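That line reads only the first rectangle. When faceNum is greater than 1, the remaining rectangles can be read by stepping through the unmanaged array. This is a hedged sketch, assuming faceRects points at faceNum consecutive MRECT structures, which is how the SDK's C structures suggest the result is laid out:

```csharp
// Hedged sketch: read out every detected face rectangle.
// Assumes multiFaceInfo.faceRects points at faceNum consecutive MRECT structs.
int rectSize = MemoryUtil.SizeOf<MRECT>();
var rects = new List<MRECT>();
for (int i = 0; i < multiFaceInfo.faceNum; i++)
{
    IntPtr p = multiFaceInfo.faceRects + i * rectSize;  // advance by one struct per face
    rects.Add(MemoryUtil.PtrToStructure<MRECT>(p));
}
```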

Gender and age detection

The FaceUtil class also provides methods for age and gender detection. AgeEstimation and GenderEstimation both follow the same pattern: allocate memory, invoke the corresponding native method, and release the memory.

public static ASF_AgeInfo AgeEstimation(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out int retCode)
{
    retCode = -1;
    IntPtr pMultiFaceInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_MultiFaceInfo>());
    MemoryUtil.StructureToPtr(multiFaceInfo, pMultiFaceInfo);
    if (multiFaceInfo.faceNum == 0)
    {
        return new ASF_AgeInfo();
    }
    retCode = ASFFunctions.ASFProcess(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, FaceEngineMask.ASF_AGE);
    if (retCode == 0)
    {
        IntPtr pAgeInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_AgeInfo>());
        retCode = ASFFunctions.ASFGetAge(pEngine, pAgeInfo);
        Console.WriteLine("Get Age Result:" + retCode);
        ASF_AgeInfo ageInfo = MemoryUtil.PtrToStructure<ASF_AgeInfo>(pAgeInfo);
        // Free the unmanaged memory
        MemoryUtil.Free(pMultiFaceInfo);
        MemoryUtil.Free(pAgeInfo);
        return ageInfo;
    }
    else
    {
        return new ASF_AgeInfo();
    }
}

It is important to note that to use gender and age detection, the corresponding functions must be enabled when the SDK is initialized; that is, the combinedMask value must contain FaceEngineMask.ASF_AGE | FaceEngineMask.ASF_GENDER.

Extracting feature information from photos

With the face frame obtained in the previous step, we can call the face recognition engine to extract the face feature information: pass the photo information to the engine, and it returns the face model information.

IntPtr pFaceModel = ExtractFeature(pEngine, imageInfo, multiFaceInfo, out singleFaceInfo);

Now let's look at the ExtractFeature method. The Demo writes this part in a fairly involved way, with several overloads sharing the same name, so let's analyze them in turn.

IntPtr ExtractFeature(IntPtr pEngine, Image image, out ASF_SingleFaceInfo singleFaceInfo)

Because the first step of face recognition is locating the face frame, this overload pre-processes the incoming picture and calls the face detection method to find the faces.

// Other code omitted: it mainly validates the incoming image and converts its size,
// returning an empty feature directly if the image is null or invalid.
ASF_MultiFaceInfo multiFaceInfo = DetectFace(pEngine, imageInfo);
singleFaceInfo = new ASF_SingleFaceInfo();
IntPtr pFaceModel = ExtractFeature(pEngine, imageInfo, multiFaceInfo, out singleFaceInfo);
return pFaceModel;

IntPtr ExtractFeature(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out ASF_SingleFaceInfo singleFaceInfo)

public static IntPtr ExtractFeature(IntPtr pEngine, ImageInfo imageInfo, ASF_MultiFaceInfo multiFaceInfo, out ASF_SingleFaceInfo singleFaceInfo)
{
    // Define the single-face structure to be returned via the out parameter
    singleFaceInfo = new ASF_SingleFaceInfo();
    // If no face was detected, return an empty feature
    if (multiFaceInfo.faceRects == null)
    {
        ASF_FaceFeature emptyFeature = new ASF_FaceFeature();
        IntPtr pEmptyFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
        MemoryUtil.StructureToPtr(emptyFeature, pEmptyFeature);
        return pEmptyFeature;
    }
    // Copy the face rectangle and angle from the detection result into the out object
    singleFaceInfo.faceRect = MemoryUtil.PtrToStructure<MRECT>(multiFaceInfo.faceRects);
    singleFaceInfo.faceOrient = MemoryUtil.PtrToStructure<int>(multiFaceInfo.faceOrients);
    // Convert the single-face object to an unmanaged structure
    IntPtr pSingleFaceInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_SingleFaceInfo>());
    MemoryUtil.StructureToPtr(singleFaceInfo, pSingleFaceInfo);
    IntPtr pFaceFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
    // Call the recognition interface to extract the face feature
    int retCode = ASFFunctions.ASFFaceFeatureExtract(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pSingleFaceInfo, pFaceFeature);
    Console.WriteLine("FR Extract Feature result:" + retCode);
    if (retCode != 0)
    {
        // Extraction failed: free the buffers and return an empty feature
        MemoryUtil.Free(pSingleFaceInfo);
        MemoryUtil.Free(pFaceFeature);
        ASF_FaceFeature emptyFeature = new ASF_FaceFeature();
        IntPtr pEmptyFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
        MemoryUtil.StructureToPtr(emptyFeature, pEmptyFeature);
        return pEmptyFeature;
    }
    // Process the return value: a pile of interop marshalling
    ASF_FaceFeature faceFeature = MemoryUtil.PtrToStructure<ASF_FaceFeature>(pFaceFeature);
    byte[] feature = new byte[faceFeature.featureSize];
    MemoryUtil.Copy(faceFeature.feature, feature, 0, faceFeature.featureSize);
    ASF_FaceFeature localFeature = new ASF_FaceFeature();
    localFeature.feature = MemoryUtil.Malloc(feature.Length);
    MemoryUtil.Copy(feature, 0, localFeature.feature, feature.Length);
    localFeature.featureSize = feature.Length;
    IntPtr pLocalFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
    MemoryUtil.StructureToPtr(localFeature, pLocalFeature);
    // Finally, don't forget to free the temporary buffers
    MemoryUtil.Free(pSingleFaceInfo);
    MemoryUtil.Free(pFaceFeature);
    // Return a pointer to the locally-owned copy of the feature
    return pLocalFeature;
}

Face retrieval

For face retrieval, a local face library must be established first. The face feature extracted in the previous step is a string of binary data; in actual use, we can store the features in a database or in local files.
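As a hedged sketch of the file-based option: the feature is treated as an opaque byte array, copied out of (and back into) unmanaged memory with the Demo's MemoryUtil helpers. The file name here is purely illustrative:

```csharp
// Hedged sketch: persist an extracted feature to disk, then load it back.
// Assumes pFeature points at an ASF_FaceFeature as returned by ExtractFeature.
ASF_FaceFeature f = MemoryUtil.PtrToStructure<ASF_FaceFeature>(pFeature);
byte[] bytes = new byte[f.featureSize];
MemoryUtil.Copy(f.feature, bytes, 0, f.featureSize);   // unmanaged -> managed
File.WriteAllBytes("face_0001.feature", bytes);        // illustrative file name

// Loading later: copy the bytes back into unmanaged memory before comparing.
byte[] stored = File.ReadAllBytes("face_0001.feature");
ASF_FaceFeature loaded = new ASF_FaceFeature();
loaded.featureSize = stored.Length;
loaded.feature = MemoryUtil.Malloc(stored.Length);
MemoryUtil.Copy(stored, 0, loaded.feature, stored.Length);  // managed -> unmanaged
IntPtr pLoaded = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
MemoryUtil.StructureToPtr(loaded, pLoaded);
```

Remember that both Malloc calls above follow the pairing rule discussed later: each must eventually be matched by a MemoryUtil.Free.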

At retrieval time, obtain the face feature to search for and call the ASFFunctions.ASFFaceFeatureCompare method to complete the comparison.

for (int i = 0; i < imagesFeatureList.Count; i++)
{
    IntPtr feature = imagesFeatureList[i];
    float similarity = 0f;
    int ret = ASFFunctions.ASFFaceFeatureCompare(pImageEngine, image1Feature, feature, ref similarity);
    // Guard against scientific-notation results such as 2.1E-5
    if (similarity.ToString().IndexOf("E") > -1)
    {
        similarity = 0f;
    }
    AppendText(string.Format("Similarity with {0}: {1}\r\n", i, similarity));
    imageList.Items[i].Text = string.Format("{0} ({1})", i, similarity);
    if (similarity > compareSimilarity)
    {
        compareSimilarity = similarity;
        compareNum = i;
    }
}

The ASFFunctions.ASFFaceFeatureCompare method simply calls the corresponding SDK method; the returned similarity value is the similarity score. The Demo compares the probe face against every face in the library to find the closest match. In practice, once we find a feature that meets our confidence requirement, we can simply exit the loop.
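A hedged sketch of that early exit, reusing the Demo's loop shape and its 0.8f similarity threshold:

```csharp
// Hedged sketch: stop scanning the library at the first good-enough match.
float threshold = 0.8f;   // the Demo's similarity threshold
int matchedIndex = -1;
for (int i = 0; i < imagesFeatureList.Count; i++)
{
    float similarity = 0f;
    int ret = ASFFunctions.ASFFaceFeatureCompare(pImageEngine, image1Feature, imagesFeatureList[i], ref similarity);
    if (ret == 0 && similarity >= threshold)
    {
        matchedIndex = i;   // good enough: skip the rest of the face library
        break;
    }
}
```

Note the trade-off: early exit returns the first acceptable match rather than the best one, which is usually fine when the threshold is chosen conservatively.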

Tip: in practical applications, if the face library is large, multiple FR instances can be opened to retrieve in parallel, and the gender and age attributes from face detection can also be used to narrow the query scope.

Detect faces from video

If detecting faces from photos is the foundation of face recognition, then detecting faces from video is its most practical application: real-world real-time face detection systems are video-based, and liveness detection is also performed on video. Simply put, detecting a face from video means capturing a frame that contains the face and running the analysis and recognition process on it. This process is laid out in the videoSource_Paint method of FaceForm.

  1. Grab a frame from the (RGB) camera.
  2. Call the video-mode face detection engine to detect the faces.
  3. Draw a face frame at the returned face position, using the previous frame's result to label the recognized face.
  4. Call the liveness detection function on the returned face information and confirm it is live.
  5. Extract the face features with the face recognition engine.
  6. Match the features against the face library and record the result.
  7. Wait for the next frame to be captured.

The first thing to note is pVideoEngine.

ASF_MultiFaceInfo multiFaceInfo = FaceUtil.DetectFace(pVideoEngine, bitmap);

pVideoEngine is the video-mode face detection engine, initialized in the InitEngines() method:

uint detectModeVideo = DetectionMode.ASF_DETECT_MODE_VIDEO;
int combinedMaskVideo = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION;
retCode = ASFFunctions.ASFInitEngine(detectModeVideo, videoDetectFaceOrientPriority, detectFaceScaleVal, detectFaceMaxNum, combinedMaskVideo, ref pVideoEngine);

pVideoEngine is used here, while the image-based pImageEngine is a separate engine instance; the difference is that pVideoEngine is initialized in video mode. For video detection, video mode is recommended; for single images, image mode. A camera delivers 25-30 frames of data per second, but because of how the algorithm is implemented, image mode can only perform about 20 detections per second, so it cannot keep up with video: the computing power simply falls short. Video mode, by contrast, can run about 100 detections per second, and in version 2.2 it also outputs a TrackID, making it easier to follow the same person across frames. For single-image detection, image mode is generally used: its detection is more thorough and it handles multiple faces and large pictures well. Since only a single image is being processed, image-mode performance is not a problem; 5-10 detections within a second generally meets product requirements.

One thing to note, as pointed out in the Demo, is to make sure that only one frame is processed at a time, never several frames simultaneously; otherwise the face frames drawn on screen will be displayed incorrectly. In addition, feature extraction and comparison are time-consuming, so they should run on a separate thread to avoid freezing the main UI thread.

Liveness detection

Liveness detection approaches

Currently, two kinds of liveness detection algorithms are in common use: interactive and non-interactive. When Alipay asks us to open our mouth or shake our head while logging in, that is interactive liveness detection; if no such action is required, it is non-interactive. ArcSoft provides two non-interactive algorithms, based on an RGB camera and an infrared (IR) camera respectively.

RGB liveness

Only a monocular RGB camera is required, so the hardware cost is low. Recognition is silent, needing no user actions; an ordinary camera is enough. It is user-friendly and applicable in a wide range of scenarios.

IR liveness

Based on infrared imaging principles (screens cannot be imaged in IR; different materials have different reflectance) combined with a deep learning algorithm, IR liveness delivers highly robust judgments. It is silent and can effectively defend against photo, video, screen, and mask attacks, meeting the liveness detection needs of binocular face recognition terminal products.

A simple rule of thumb: if your device has only a single color camera and no IR camera, use RGB liveness; if it has two cameras, one color and one infrared, you can use IR liveness. In terms of trustworthiness, IR liveness is the more credible of the two, but it requires special hardware.

The ArcSoft SDK provides only RGB liveness detection in version 2.1; for IR liveness detection you need the version 2.2 SDK.

RGB liveness interface analysis

ArcSoft's liveness detection is built into the FR (face recognition) engine. To use the liveness detection feature, you must first enable it. The InitEngines() method of the Demo shows how to initialize the FR engine.

Initialization

Depending on the camera, different liveness detection engines are enabled. The following code enables the RGB-mode FR engine:

detectFaceMaxNum = 1;
combinedMask = FaceEngineMask.ASF_FACE_DETECT | FaceEngineMask.ASF_FACERECOGNITION | FaceEngineMask.ASF_LIVENESS;
retCode = ASFFunctions.ASFInitEngine(detectMode, imageDetectFaceOrientPriority, detectFaceScaleVal, detectFaceMaxNum, combinedMask, ref pVideoRGBImageEngine);

FaceEngineMask.ASF_LIVENESS is the ordinary RGB liveness flag; in infrared binocular mode, the flag is FaceEngineMask.ASF_IR_LIVENESS.

Checking liveness

The best time for liveness detection is after the face frame has been captured and before the face features are analyzed; at that point, just call the FaceUtil.LivenessInfo_RGB method. The isLive field in the returned liveness info is the result of the check. Internally, LivenessInfo_RGB calls the SDK's ASFProcess method.

retCode = ASFFunctions.ASFProcess(pEngine, imageInfo.width, imageInfo.height, imageInfo.format, imageInfo.imgData, pMultiFaceInfo, FaceEngineMask.ASF_LIVENESS);
if (retCode == 0)
{
    IntPtr pLivenessInfo = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_LivenessInfo>());
    retCode = ASFFunctions.ASFGetLivenessScore(pEngine, pLivenessInfo);
    Console.WriteLine("Get Liveness Result:" + retCode);
    ASF_LivenessInfo livenessInfo = MemoryUtil.PtrToStructure<ASF_LivenessInfo>(pLivenessInfo);
    // Free the unmanaged memory
    MemoryUtil.Free(pMultiFaceInfo);
    MemoryUtil.Free(pLivenessInfo);
    return livenessInfo;
}

The isLive value in the returned livenessInfo encodes the result of the liveness check: 1 means a live face, -1 means a fake. When the program judges a face to be fake, it performs no further feature extraction or matching.
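As a hedged sketch of acting on that result (assuming, as the SDK's C structure layout suggests, that isLive is a pointer to one int per detected face; verify against the Demo's SDKModels before relying on this):

```csharp
// Hedged sketch: gate feature extraction on the liveness result.
// Assumes livenessInfo.isLive points at one int per face: 1 = live, -1 = fake.
int isLive = MemoryUtil.PtrToStructure<int>(livenessInfo.isLive);
if (isLive == 1)
{
    // Live face: continue with feature extraction and library matching
}
else
{
    // Fake: skip extraction and matching, as the Demo does
}
```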

IR liveness interface analysis

To be supplemented…

Common problems and solutions

Managing unmanaged memory and memory leaks

In C# programs we usually deal with managed memory and create objects with new. The SDK provided by ArcSoft, however, is native code written in C, which requires allocating and using unmanaged memory and passes parameters as C structures. For convenience, the Demo provides the MemoryUtil class, which wraps the corresponding methods of the Marshal class and offers a convenient way to call the C methods directly.

When writing your own programs based on the Demo code, note that some FaceUtil methods call Malloc to allocate memory but do not free it themselves; the memory is freed in some other method. The key principle of unmanaged memory management: every Marshal.AllocHGlobal must be paired with a Marshal.FreeHGlobal(ptr) to manually release the memory. Even calling GC.Collect() cannot free it; forgetting the Free results in a memory leak.
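A minimal sketch of that pairing rule using the Demo's MemoryUtil wrappers — the try/finally shape guarantees the Free runs even if the SDK call in between throws:

```csharp
// Every Malloc gets exactly one Free, even on the exception path.
IntPtr pFaceFeature = MemoryUtil.Malloc(MemoryUtil.SizeOf<ASF_FaceFeature>());
try
{
    // ... pass pFaceFeature to SDK calls such as ASFFaceFeatureExtract ...
}
finally
{
    // GC.Collect() can never release this buffer; only Free can.
    MemoryUtil.Free(pFaceFeature);
}
```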

The Marshal class is the most important class in .NET interop. It provides a collection of methods for allocating unmanaged memory, copying unmanaged memory blocks, and converting between managed and unmanaged types, as well as other miscellaneous methods for interacting with unmanaged code. For details, see the MSDN documentation: docs.microsoft.com/zh-cn/dotne…

Can’t find the DLL

Different versions of Visual Studio have certain requirements for DLL placement, and the bitness of the program also affects which DLLs are ultimately loaded. Generally speaking, for a 32-bit program put the DLLs in the x86 folder, and for a 64-bit program put them in the x64 folder.

Activation failed

The version 2.2 SDK activates automatically over the network on first use, so make sure you are connected to the Internet the first time you run it. If error 90118 (device mismatch) occurs at startup, the hardware information has changed; simply delete the arcface32.dat or arcface64.dat file in the SDK directory, and the SDK will re-activate automatically when it cannot find the file.

Can the code in the Demo be used in WPF or ASP.NET?

Certainly. We can adapt the official Demo to our own business logic in an ASP.NET or WPF application. Most of the functionality is already wrapped up in the Demo, and once you understand the business logic you can use the methods in FaceUtil directly. However, under WPF or ASP.NET you may run out of stack space, because the default thread stack size in those hosts is 256 KB or less while the SDK requires 512 KB or more. Just specify the stack size when creating a new thread, as shown below:

new Thread(new ThreadStart(delegate {
        ASF_MultiFaceInfo multiFaceInfo = FaceUtil.DetectFace(pEngine, imageInfo);
    }), 1024 * 512).Start();

More questions and support

The ArcSoft open platform forum provides an official channel for communication. Visit ai.arcsoft.com.cn/bbs/index.p… for more information; technical staff are on hand to solve your problems. If you have a good Demo to share with other developers, you can also upload your work to the forum.