Moment For Technology

Deploy ChirpStack on top of Kubernetes with a Helm Chart

Posted on Sept. 23, 2022, 9:44 a.m. by Amira Goda
Category: The back-end Tag: kubernetes

chirpstack-helm-chart

A Helm chart for ChirpStack, the open-source LoRaWAN Network Server project; see the project source.

How to contribute

If you want to contribute to the project, click the Fork button to fork the project, then open a PR.

Fork

Preparation for contribution: fork this project.

Contribution process

$ git remote add chirpstack-helm-chart git@github.com:liangyuanpeng/chirpstack-helm-chart.git
# sync with the remote master
$ git checkout master
$ git fetch chirpstack-helm-chart
$ git rebase chirpstack-helm-chart/master
$ git push origin master
# create a PR branch
$ git checkout -b your_branch   
# do something
$ git add [your change files]
$ git commit -sm "xxx"
$ git push origin your_branch  

Install the helm chart

$ git clone https://github.com/liangyuanpeng/chirpstack-helm-chart.git  
$ cd chirpstack-helm-chart/  
# install helm chart from this repo
$ helm install chirpstack .   

Note: the StorageClass named longhorn is used by default.
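If your cluster uses a different StorageClass, you can override it at install time. A minimal sketch, assuming the chart exposes a `storageClass` value; the key name here is an assumption, so check this chart's values.yaml for the actual key:

```shell
# override the default longhorn StorageClass at install time
# (the "storageClass" key is an assumption; verify it in values.yaml)
helm install chirpstack . --set storageClass=local-path
```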

$ kubectl get po 
After executing the command, you can see the following pods
NAME                              READY   STATUS    RESTARTS   AGE
chirpstack-as-84b68cb7fd-zgs5j    1/1     Running   0          45s
chirpstack-ns-7d9b9867f-zftn6     1/1     Running   0          45s
mosquitto-0                       1/1     Running   0          45s
pgsql-0                           1/1     Running   0          45s
redis-0                           1/1     Running   0          45s
redis-exporter-64f8bf4f46-2rcgl   1/1     Running   0          45s
$ kubectl get svc
After executing the command, you can see the following services
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
chirpstack-as    ClusterIP   10.98.227.61     <none>        8080/TCP,8001/TCP,8003/TCP   77s
chirpstack-ns    ClusterIP   10.108.182.238   <none>        8000/TCP                     77s
mosquitto        ClusterIP   10.104.149.103   <none>        1883/TCP                     77s
pgsql            ClusterIP   10.102.33.231    <none>        5432/TCP                     77s
redis            ClusterIP   10.109.138.95    <none>        6379/TCP                     77s
redis-exporter   ClusterIP   10.106.66.131    <none>        9121/TCP                     77s
$ kubectl get pvc
After executing the command, you can see the following PVCs
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pgsql-pvc-pgsql-0                  Bound    pvc-c1c6adf4-32ef-4431-bd6a-3825a6ef408c   96Mi       RWO            longhorn       3d
redis-pvc-redis-0                  Bound    pvc-e464d0e8-e04a-4958-858e-5efef1aeba9c   48Mi       RWO            longhorn       3d
$ helm list
After executing the command, you can see the following chart
NAME         NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                         APP VERSION
chirpstack   default     1          2021-01-29 16:11:48.984574857 +0800 CST   deployed   chirpstack-helm-chart-0.1.0   1.16.0

Expose the SVC of application-server and access application-server

$ kubectl port-forward svc/chirpstack-as 8080:8080 --address 0.0.0.0
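With the port-forward running, the application-server web UI answers on localhost:8080 (the ChirpStack application-server default login is admin / admin). A quick sanity check:

```shell
# should print an HTTP status code (e.g. 200) if the forward is working
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080
```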

Set network-server on application-server

You can set it to chirpstack-ns.{namespace}:8000 or chirpstack-ns.{namespace}.svc.cluster.local:8000

Here {namespace} is replaced with the actual namespace.
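For example, with the chart installed in the default namespace, the address would be chirpstack-ns.default:8000. In-cluster DNS resolution can be checked with a throwaway pod (the busybox image choice is incidental):

```shell
# verify that the network-server service name resolves inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup chirpstack-ns.default
```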

If you use the gateway-bridge component, you can expose its SVC with the following command

kubectl expose deploy gateway-bridge --port 1700 --target-port=1700 --protocol=UDP --name udpservice --type=NodePort  
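NodePort assigns a port from the node port range; to know which port your LoRa gateways should target, query the udpservice created above:

```shell
# print the NodePort assigned to udpservice;
# gateways then send UDP packets to <any-node-ip>:<this-port>
kubectl get svc udpservice -o jsonpath='{.spec.ports[0].nodePort}'
```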

Conclusion

So far, ChirpStack has been deployed on Kubernetes. By default the chart only creates ClusterIP services and does not expose them further; how to expose them is left to the user. Exposing the AS port gives access to the application-server. The same applies to the MQTT service and the gateway-bridge service: wherever data needs to be uploaded to the server, the corresponding service must be exposed.
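As a sketch of exposing the MQTT broker, one option is a NodePort Service. The app: mosquitto selector below is an assumption, since the chart's actual labels are not shown here; check the pod's labels first:

```shell
# expose mosquitto (MQTT, TCP 1883) outside the cluster via NodePort.
# NOTE: the selector is an assumption; verify the real labels with:
#   kubectl get pod mosquitto-0 --show-labels
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: mqtt-external
spec:
  type: NodePort
  selector:
    app: mosquitto
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
EOF
```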
