As more powerful new devices gain GPU support, edge use cases continue to expand across industries, and edge devices themselves have become larger and more efficient. NVIDIA, with its industry-leading GPUs, and Arm, the leading processor IP provider, are both innovating and investing heavily in the edge ecosystem. One example is the NVIDIA Jetson Nano, a powerful yet inexpensive board that can run GPU-enabled workloads and handle AI/ML data processing tasks. In addition, cloud-native technologies such as Kubernetes let developers build lightweight, containerized applications for the edge. To deliver a seamless cloud-native software experience across a diverse ecosystem of edge computing platforms, Arm introduced Project Cassini (https://www.arm.com/solutions/infrastructure/edge-computing/project-cassini), an initiative based on open, standards-driven collaboration. It leverages the capabilities of these Arm-based heterogeneous platforms to create a secure foundation for edge applications.

K3s was originally launched by Rancher Labs in early 2019 and became a CNCF sandbox project in August 2020. With over 16,000 GitHub stars, K3s has become a key orchestration platform for small devices. As a Kubernetes distribution built specifically for the edge, it is lightweight enough not to strain a device's RAM and CPU. Through the Kubernetes device plugin framework, workloads running on these devices can access GPU capabilities efficiently.
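As an illustrative sketch of the device plugin approach (note that this is not the setup used later in this article, which instead makes nvidia the default Docker runtime), a workload would request a GPU through the plugin's extended resource name, typically nvidia.com/gpu. The image and resource name below are assumptions for illustration:

```shell
# Hypothetical example: assumes the NVIDIA device plugin DaemonSet is already
# installed on the cluster. The Pod asks the scheduler for one GPU via the
# nvidia.com/gpu extended resource and lists the GPU device nodes it was given.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check
spec:
  restartPolicy: Never
  containers:
  - name: gpu-check
    image: nvcr.io/nvidia/l4t-base:r32.4.3
    command: ["sh", "-c", "ls /dev/nvidia*"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```

This requires a live cluster with the device plugin deployed, so treat it as a sketch rather than a step in this tutorial.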

In typical edge scenarios, edge devices mainly collect data, which is then analyzed and processed in the cloud. But as edge devices become more powerful, we can now run AI/ML workloads directly on the edge.

In previous articles, we looked at how to deploy GPU-enabled Rancher Kubernetes clusters in the cloud.

In this article, we'll see how NVIDIA's Jetson Nano can be combined with K3s to enable GPU capabilities on the edge, giving us a compelling edge platform. The following diagram describes the overall architecture:

Figure 1: Edge object detection and video analysis

In the diagram above, a camera on the edge is connected to the Jetson Nano device. The Jetson Nano runs JetPack OS, NVIDIA's GPU-enabled software stack for these devices. In this setup, two video streams are passed as input to the NVIDIA DeepStream container:

  • A live feed from a camera pointed at a parking lot
  • A pre-recorded video containing different types of objects (cars, bikes, people, etc.)

We also deployed a K3s cluster on the Jetson Nano to host the NVIDIA DeepStream Pod. The device analyzes the video streams as they are passed to the DeepStream Pod, and the output is sent to a display attached to the Jetson Nano, where we can see the detected object categories, such as cars and people.

Configuration

Preparation: this tutorial requires the following components to be installed and configured:

  • A Jetson Nano board
  • Jetson OS (Linux for Tegra)
  • A display connected to the Jetson Nano via HDMI
  • A webcam connected to the Jetson Nano via USB

Change the Docker runtime to the NVIDIA runtime and install K3s

Jetson OS ships with Docker installed out of the box. Make sure you are using a recent version of Docker, which supports GPU workloads through the NVIDIA container runtime. Use the following command to check the default runtime:

sudo docker info | grep Runtime

You can also check the Docker Daemon to see the current runtime:

cat /etc/docker/daemon.json

Now change the contents of the Docker daemon configuration to the following:

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

After editing daemon.json, restart the Docker service. You should then see nvidia listed as the default runtime.

sudo systemctl restart docker
sudo docker info | grep Runtime

Before installing K3s, run the following command:

sudo apt update
sudo apt upgrade -y
sudo apt install curl

This will ensure that we are using the latest version.

To install K3s, use the following command:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--docker" sh -s -

Run the following command to check the installed version:

sudo kubectl version
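Optionally, you can also verify that the Jetson Nano has registered itself as a Ready node in the single-node cluster:

```shell
# The Jetson Nano should appear as a single node in the Ready state
sudo kubectl get nodes -o wide
```

This requires the K3s service to be running, so give it a few moments after installation.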

Now, let's create a Pod from the DeepStream SDK sample container and run the sample application.

Create a Pod manifest file using the text editor of your choice and add the following to it:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    name: demo-pod
spec:
  hostNetwork: true
  containers:
  - name: demo-stream
    image: nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples
    securityContext:
      privileged: true
      allowPrivilegeEscalation: true
    command:
    - sleep
    - "150000"
    workingDir: /opt/nvidia/deepstream/deepstream-5.0
    volumeMounts:
    - mountPath: /tmp/.X11-unix/
      name: x11
    - mountPath: /dev/video0
      name: cam
  volumes:
  - name: x11
    hostPath:
      path: /tmp/.X11-unix/
  - name: cam
    hostPath:
      path: /dev/video0

Create the Pod using the YAML manifest from the previous step.

sudo kubectl apply -f pod.yaml

The Pod uses the deepstream-l4t:5.0-20.07-samples container image, which needs to be pulled before the container can start.
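If you prefer, you can pull the image ahead of time so the Pod starts faster (the image is several gigabytes, so this can take a while):

```shell
# Pre-pull the DeepStream sample image with Docker
# (K3s was installed with --docker, so it shares Docker's image cache)
sudo docker pull nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples
```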

Use the sudo kubectl get pods command to check the Pod status, and wait until it reaches the Running state.
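If you would rather not poll manually, kubectl can block until the Pod is ready; for example:

```shell
# Block until demo-pod reports Ready, allowing up to 10 minutes for the image pull
sudo kubectl wait --for=condition=Ready pod/demo-pod --timeout=600s
```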

Once the Pod is deployed and running, open a shell inside it and unset the DISPLAY variable:

sudo kubectl exec -ti demo-pod -- /bin/bash
unset DISPLAY

Note that unset DISPLAY is run inside the Pod's shell, not on the host.

Enter the following command to run the sample application inside the Pod:

deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt

The video stream may take a few minutes to get up and running.

Your video analytics application now takes the webcam input and shows real-time results on the display attached to the Jetson Nano board. To exit the application, press q inside the Pod.

To exit the Pod's shell, use the exit command.

For complete hardware details on Jetson Nano, run another Pod using the following command:

kubectl run -i -t nvidia --image=jitteam/devicequery --restart=Never
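When you are finished experimenting, you can clean up both Pods created in this article:

```shell
# Delete the DeepStream demo Pod and the devicequery Pod
sudo kubectl delete pod demo-pod nvidia
```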

Conclusion

As we saw above, the ARM-based NVIDIA Jetson Nano and K3s allow AI and data analysis to run seamlessly on the edge. These low-cost and powerful devices can be deployed quickly and provide an efficient way to perform video analysis and edge AI.

If you have any questions as you follow along, you are welcome to scan the QR code at the end of the article to add our assistant as a friend and join the official K3s technical exchange group to discuss with other K3s users.

About the author

Pranay Bakre is a Principal Solutions Engineer at Arm. He is keen on combining Arm's Neoverse platform with cloud-native technologies such as Kubernetes and Docker, and enjoys working with partners to build solutions on Arm-based cloud and edge products.