Abstract: To make it easier for developers to build AI applications for video scenarios, the ModelArts inference platform has extracted the common steps of the video inference workflow and preset them in the base image. Users only need to write preprocessing and post-processing scripts to develop video AI services the same way they develop image AI services.

This article is shared from the Huawei Cloud Community article “Video Reasoning on ModelArts Platform”; original author: HW007.

Those familiar with ModelArts inference know that an AI service can be easily deployed on the ModelArts platform by customizing the model’s preprocessing, inference, and post-processing scripts, with images, text, audio, or video as input. For video inference, however, users previously had to download and decode the video files in their own scripts and then upload the processed files to OBS themselves. To make it easier to build AI applications for video scenarios, the ModelArts inference platform has now extracted the common steps of the video inference workflow and preset them in the base image, so users only need to write preprocessing and post-processing scripts to develop video AI services the same way they develop image AI services.

First, overall design description

The general inference workflow extracted for video scenarios is as follows:

As shown in the figure above, the video processing flow can be divided into six parts: “video source input”, “video decoding”, “preprocessing”, “model inference”, “post-processing”, and “inference result output”. ModelArts provides the three gray parts (“video source input”, “video decoding”, and “inference result output”) out of the box, while the other three (“preprocessing”, “model inference”, and “post-processing”) can be freely customized by users. The specific customization methods are as follows:

1) Customize the model: ModelArts already provides the logic to load the model; users only need to place their model, in “saved_model” format, into the specified model directory.

2) Customize preprocessing: users receive the decoded video frame data from ModelArts by overriding the static method “_preprocess” of the “VideoService” class in “customize_service.py”. The constraints on the input and output parameters of “_preprocess” are as follows (see also the sketch after this list):

3) Customize post-processing: users receive the model’s inference output together with the decoded video frame data by overriding the static method “_postprocess” of the “VideoService” class in “customize_service.py”. The input and output parameters of “_postprocess” are as follows (see also the sketch after this list):
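The parameter tables referenced in 2) and 3) above are images in the original post and are not reproduced here. Purely as an illustration of the overall shape, here is a minimal customize_service.py sketch; the parameter names (“frame”, “inference_output”), the tensor key “images”, and the classification-style result are illustrative assumptions, not the platform contract:

# customize_service.py -- minimal sketch of the user-supplied hooks.
# Parameter names and return shapes are illustrative assumptions.
import numpy as np

class VideoService:

    @staticmethod
    def _preprocess(frame):
        # Assumed input: one decoded video frame as an H x W x 3 uint8 array.
        # Crop and normalize it into the feed dict for the model's input tensors.
        patch = frame[:224, :224, :].astype(np.float32) / 255.0
        return {"images": np.expand_dims(patch, axis=0)}

    @staticmethod
    def _postprocess(inference_output, frame):
        # Assumed inputs: the model's output tensors plus the decoded frame.
        # Turn raw scores into a JSON-serializable result for the output path.
        scores = inference_output["scores"][0]
        return {"label": int(np.argmax(scores)),
                "confidence": float(np.max(scores))}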

Second, Demo experience

1) Download the attachment of this article, as shown in the figure below. The attachment provides a “model” folder containing a video inference model package that has already been debugged, as well as verification test cases written with the tox framework, so that users can debug their own model package offline.

2) Upload the “model” folder in the attachment package to Huawei Cloud OBS.

Put the “test/test_data/input” and “test/test_data/output” folders from the attachment package into the same directory in Huawei Cloud OBS as the “model” folder above.

3) Import the model: in the ModelArts model import interface, select import from OBS and choose the model directory just uploaded to OBS, as shown in the figure below:

Fill in each configuration item of the model as shown below, then click Create Model:

You can see that the model was created successfully:

4) Deploy the service: deploy the above model as an online service, and select a resource node with a GPU for the deployment (either the public pool or a dedicated pool is acceptable):

You can see that the service has been successfully deployed:

5) Create a job: select Create Job in the service interface.

For the input video, select the video file uploaded to the “input” folder in OBS in step 2), as follows:

For the output path, select the “output” folder uploaded to OBS in step 2), as follows:

6) Wait for the video processing to finish:

Looking at the “output” folder in OBS, you can see the inference results after the video has been split into images.

7) Users can replace the “saved_model”-format model file in the “model” folder according to their own needs, and modify the “_preprocess” and “_postprocess” functions in “customize_service.py” to implement their own business logic. After the modification, run “test/run_test.sh” first to verify offline that the modified model package can still perform inference normally; see also the smoke-test sketch below. Once offline debugging passes, the model package can be uploaded to OBS and deployed as a ModelArts service by following the steps above.
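Besides “test/run_test.sh”, a quick standalone smoke test can exercise the two hooks directly with a synthetic frame. This is a minimal sketch that bypasses the real SavedModel and follows the parameter-name assumptions from the earlier sketch:

# smoke_test.py -- rough offline check of the customized hooks.
# It feeds a fake inference output to _postprocess instead of running the model.
import numpy as np
from customize_service import VideoService

def main():
    # One synthetic 480x640 RGB frame standing in for a decoded video frame.
    frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

    model_inputs = VideoService._preprocess(frame)
    print("preprocess outputs:", {k: v.shape for k, v in model_inputs.items()})

    # Fake model output with the shape the _postprocess sketch expects.
    fake_output = {"scores": np.random.rand(1, 10).astype(np.float32)}
    print("postprocess result:", VideoService._postprocess(fake_output, frame))

if __name__ == "__main__":
    main()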

The requirements for a video inference model package are as follows:

Model package structure requirements:

model
├── config.json (required; ModelArts inference configuration file)
├── customize_service.py (required; inference script)
├── saved_model.pb (required; model file in SavedModel format)
└── variables (required; SavedModel format)
    ├── variables.data-00000-of-00001
    └── variables.index
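To produce the saved_model.pb file and the variables directory above, a model can be exported with TensorFlow’s SavedModel API. Below is a minimal TF 1.x sketch; the toy graph and the tensor names “images” and “scores” are placeholders, not requirements:

# export_model.py -- sketch of exporting a SavedModel for the model directory.
import tensorflow as tf

def export_saved_model(export_dir="./saved_model_out"):
    # Note: export_dir must not already exist; copy its contents
    # (saved_model.pb and variables/) into the model/ package afterwards.
    with tf.Graph().as_default():
        # Toy graph standing in for a real network.
        images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="images")
        logits = tf.layers.dense(tf.layers.flatten(images), 10)
        scores = tf.nn.softmax(logits, name="scores")
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # Writes saved_model.pb plus the variables/ directory.
            tf.saved_model.simple_save(
                sess, export_dir,
                inputs={"images": images},
                outputs={"scores": scores})

if __name__ == "__main__":
    export_saved_model()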

The config.json file format must follow the ModelArts specification: https://support.huaweicloud.c…

Currently, only TensorFlow’s “tf1.13-python3.7-GPU-async” runtime supports video inference. That is, the “model_type” field in the config.json file must be “TensorFlow”, and the “runtime” field must be “tf1.13-python3.7-GPU-async”.
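For reference, a minimal config.json fragment consistent with this constraint is sketched below; the remaining fields required by the ModelArts specification linked above are omitted here:

{
  "model_type": "TensorFlow",
  "runtime": "tf1.13-python3.7-GPU-async"
}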

The “customize_service.py” file must contain a “VideoService” class, and the “VideoService” class must provide the two static methods “_preprocess” and “_postprocess”. The corresponding function signature constraints are as follows:
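The signature tables are images in the original post and are not reproduced here. A bare skeleton of the required interface, with assumed parameter names matching the sketch in the design section above, would be:

class VideoService:

    @staticmethod
    def _preprocess(frame):
        """Receive decoded video frame data; return the model's input tensors."""

    @staticmethod
    def _postprocess(inference_output, frame):
        """Receive the model output and the decoded frame; return the result."""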



Click Follow to be among the first to learn about Huawei Cloud’s latest technologies~