After migrating from Docker to Docker Swarm to Kubernetes, and dealing with all the API changes along the way, I have become quite comfortable finding and fixing deployment problems.

Today I’m going to share the five troubleshooting tips I find most useful, along with a few extras.

Kubectl: the "Swiss Army Knife"

Kubectl is our Swiss Army knife. We reach for it when something goes wrong, and how we use it at that moment matters.

The typical situation is: my YAML was accepted, but my service never started, or it started but isn't working properly.

1. kubectl get deployment/pods: this command is important because it shows useful information without flooding you with output. If your workload runs as a Deployment, you have a few options:

kubectl get deploy
kubectl get deploy -n namespace
kubectl get deploy --all-namespaces

Ideally, you want to see 1/1, or the equivalent such as 2/2, and so on. This indicates that your Deployment has been accepted and has attempted to start its Pods.

Next, you might want to run kubectl get pods to see whether the Pods backing the Deployment started correctly.
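For example, in a hypothetical namespace (the namespace and Pod names below are illustrative, not from a real cluster), healthy and unhealthy output might look like this:

```shell
# List the Pods backing the Deployments in a namespace
# (namespace and Pod names below are illustrative)
kubectl get pods -n openfaas

# NAME                       READY   STATUS             RESTARTS   AGE
# gateway-7d9c66f5c6-x2l4p   1/1     Running            0          5m
# figlet-6b8d7f4b9d-qwz8k    0/1     ImagePullBackOff   0          2m
```

A STATUS such as ImagePullBackOff or CrashLoopBackOff is your cue to dig deeper with the events and logs commands below.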

2. kubectl get events: I'm surprised how often I have to explain this trick to people who are having problems. This command prints the events in a given namespace, which is ideal for finding critical problems such as a crashing Pod or an inability to pull a container image.

Events in Kubernetes are unsorted by default, so you will want to add the following sort flag, which is taken from the OpenFaaS documentation:

kubectl get events --sort-by=.metadata.creationTimestamp

A close relative of kubectl get events is kubectl describe. Just like get deploy/pod, it works with the object's name:

kubectl describe deploy/figlet -n openfaas

You will get very detailed information here. You can describe most things, including nodes, which will show when Pods cannot be scheduled due to resource constraints or other problems.
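For instance, to check whether a node is under resource pressure, list the nodes and describe one (the node name is a placeholder you would replace), then read the Conditions, Allocated resources, and Events sections of the output:

```shell
# List nodes, then describe one to inspect its Conditions,
# Allocated resources, and recent Events
kubectl get nodes
kubectl describe node <node-name>   # replace with a real node name
```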

3. kubectl logs: this command is used a lot, but many people use it the wrong way.

If you deploy, say, cert-manager in the cert-manager namespace, many people think they first have to find the long (unique) name of the Pod and pass it as a parameter. That isn't necessary:

kubectl logs deploy/cert-manager -n cert-manager

To follow (tail) the logs, add -f:

kubectl logs deploy/cert-manager -n cert-manager -f

You can combine all of these flags.

If your Deployment or Pod has labels, you can match the logs of one or more Pods using -l app=name or any other label selector:

kubectl logs -l app=nginx

There are tools, such as stern and kail, that can help you match patterns and save some typing, but I find them distracting.
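As a sketch of combining the flags above (the label and flag values here are illustrative, and following logs by selector requires a reasonably recent kubectl), you can tail every matching Pod at once:

```shell
# Follow logs from all Pods labelled app=nginx,
# starting from the last 20 lines of each
kubectl logs -l app=nginx -f --tail=20
```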

4. kubectl get -o yaml: you will need this as soon as you start using YAML generated by another project or a tool such as Helm. It's also useful for checking the version of an image running in production, or the annotations you set somewhere.

kubectl run nginx-1 --image=nginx --port=80 --restart=Always

Then output its YAML:

kubectl get deploy/nginx-1 -o yaml

Furthermore, we can add --export and save the YAML locally to edit and apply again.
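On newer kubectl versions, a sketch of an alternative is to generate the same YAML without touching the cluster at all, using a client-side dry run (kubectl create deployment here stands in for the generator behaviour that kubectl run used to have):

```shell
# Generate Deployment YAML locally without creating anything
kubectl create deployment nginx-1 --image=nginx \
  --dry-run=client -o yaml > nginx-1.yaml
```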

Another option for editing YAML live is kubectl edit. If Vim confuses you and you don't know how to use it, switch to a simpler editor by prefixing the command with VISUAL=nano.

5. kubectl scale: have you tried turning it off and on again? kubectl scale can shrink a Deployment and its Pods down to zero replicas, effectively stopping all of them. When you scale it back up to 1/1, a new Pod is created and your application restarts. The syntax is simple enough that you can restart your code and test again in moments.

kubectl scale deploy/nginx-1 --replicas=0
kubectl scale deploy/nginx-1 --replicas=1

6. kubectl port-forward: we all need this trick. Port forwarding via kubectl lets us access a service running in a local or remote cluster from our own computer, on any configured port, without exposing it on the Internet.

Here is an example of accessing an Nginx deployment locally:

kubectl port-forward deploy/nginx-1 8080:80

Some people think this only works with a Deployment or Pod, but that's wrong. Services are fair game too, and are usually the better forwarding target because they mimic the configuration of a production cluster.
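For example, forwarding to a Service instead of the Deployment (assuming a Service named nginx-1 exists, e.g. one created with kubectl expose as shown below):

```shell
# Forward local port 8080 to port 80 of the Service
kubectl port-forward svc/nginx-1 8080:80
```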

If you do want to expose the service on the Internet, you would normally use a LoadBalancer Service, or run kubectl expose:

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
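Once exposed, you can watch for the external IP that your cloud's load balancer assigns (the -w flag watches the resource for changes):

```shell
# Wait for the LoadBalancer Service to receive an external IP
kubectl get svc nginx-1 -w
```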

That's everything. I hope you found these six commands and tips useful, and that you can now try them out on a real cluster.