I envy neither mandarin ducks nor immortals: one line of code can take half a day. Originally published by Taste of Little Sister (WeChat official account ID: XjjDog). You are welcome to share; please keep the attribution.

Before introducing the performance-optimization arsenal, a few digressions. I hope these digressions dispel your longing for performance excellence and let you settle for being a well-polished cog in the machine.

I don’t recommend that programmers optimize business code for performance, and I say this purely out of self-preservation. In most companies, performance optimization is a thankless task.

Writing minimalist, high-performance code is the goal of every ambitious programmer. But having worked in companies large and small, I have found many excellent programmers tormented by exactly this obsession.

Here are a few reasons. We all know the situation is wrong, but there is little we can do about it:

  1. The company evaluates programmers by the features they ship. Performance optimization happens only when there is spare time, so it is seen as a wasted cost.
  2. The team doesn’t care about the application’s efficiency. Slow is fine, as long as someone can put out the fire when things break.
  3. Performance tuning is risky, and usually means adjusting code structure or even changing logic. Leave the code alone and nothing happens; optimize it and break something, and it’s on you. Nobody wants to touch it.

In one sentence: the team is stuck deep in the mire and has no ambition to improve, so we just smooth things over and muddle along.


This is the reality in many companies, especially small and medium-sized ones. In such a company, unless the system is so slow it can barely do its job, or your boss explicitly asks you to, I don’t recommend doing performance optimization. If you do it anyway, don’t regret it when you get burned.

Of course, there are also plenty of teams with a healthy engineering culture, teams that will even raise good suggestions during code review. When you meet a team like this, cherish it. Our arsenal is, without doubt, for these teams.

Uncle Brendan D. Gregg literally wrote the book on this: “Systems Performance.” But the book goes fairly deep, and many of its tools analyze things at the resource level. One of his best-known diagrams is his map of performance tools. And, of course, there is the now-ubiquitous flame graph.

All these tools are low-level, and most of the time we don’t reach for them. So xjjdog has put together a slightly higher-level arsenal, closer to day-to-day usage. Most of these tools ship with the system; you don’t even need to install them with yum or apt.

Common problem discovery

This is the collection of tools xjjdog uses to troubleshoot common performance problems: CPU, memory, network, I/O, and so on.
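As a concrete starting point, here is a minimal first-pass checklist along those lines. These are standard Linux commands (mpstat, iostat, and sar come from the sysstat package on most distributions); the exact flags are a suggestion, not gospel:

```bash
# CPU: overall load and per-core utilization
top
mpstat -P ALL 1 5        # 1-second interval, 5 samples

# Memory: usage and swap activity
free -m
vmstat 1 5               # watch the si/so columns for swapping

# I/O: per-device utilization and latency
iostat -xz 1 5           # %util and await are the columns to watch

# Network: per-interface throughput and TCP retransmits
sar -n DEV 1 5
sar -n TCP,ETCP 1 5
```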

Most monitoring systems capture these same metrics. Performance data exists only at the moment it is generated, so with ad-hoc tools you can inspect it only while the problem is happening. I strongly recommend using a monitoring system to keep this data as history, so you can trace a problem back instead of waiting for it to recur.
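Even without a full monitoring stack, sar alone keeps a local history through the sysstat collector. A small sketch, assuming the daily archives land in /var/log/sa (Debian-family systems use /var/log/sysstat), where saNN is the day of the month:

```bash
# Replay CPU utilization recorded on the 15th of the month
sar -u -f /var/log/sa/sa15

# Memory and network history from the same archive
sar -r -f /var/log/sa/sa15
sar -n DEV -f /var/log/sa/sa15
```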

Special tools

Still, however you look at them, the tools above are scattered. The learning cost is high, and you need to memorize plenty of flags and options to stay responsive in a firefighting situation.

Fortunately, there are tools that give you a performance overview and help cut down on brain-cell loss.

Take top, for example: an experienced engineer can judge with this one command whether any system resource has hit a bottleneck. The same goes for tools like vmstat, sar, and nmon. Nmon, in particular, is a veteran performance-summary tool that automatically generates reports from the data collected during a stress test.
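For instance, this is how those overview tools are typically driven; the intervals and sample counts below are arbitrary choices:

```bash
# Rolling one-line summary: 1-second interval, 5 samples
vmstat 1 5

# Interactive overview (press 1 inside top to expand per-CPU rows)
top

# nmon in capture mode: -f writes a spreadsheet-style file,
# -s 30 samples every 30 seconds, -c 120 takes 120 samples (~1 hour)
nmon -f -s 30 -c 120
```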

A few of the deeper performance-analysis tools have a steep learning curve, but once you master them, you feel in control. Much of the time, though, they never get their turn, because reaching for them feels like firing a cannon at a mosquito.

Java-specific tools

But don’t forget: we are Java people, and most of the performance problems we hit live at the JVM level. Perf, for example, is a powerful tool that can track the count and duration of every C function call, yet on its own it is of little use for Java, and the flame graph it generates is just as blind.
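For the record, this is roughly how a perf-driven CPU flame graph is produced for native code, using Brendan Gregg’s FlameGraph scripts (https://github.com/brendangregg/FlameGraph); point it at a plain Java process and the stacks come back as unresolved addresses unless you bolt on extra machinery:

```bash
# Sample all CPUs at 99 Hz for 30 seconds, capturing call graphs
perf record -F 99 -a -g -- sleep 30

# Fold the stacks and render the flame graph as an SVG
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > cpu.svg
```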

So Java has a solution of its own.

More than one, in fact, and more powerful ones. In particular, I recommend the JFR functionality integrated with JMC; the level of detail it records is extraordinary.
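A minimal sketch of driving JFR from the command line with jcmd (assuming JDK 11 or later, where JFR is built in; <pid> is the target Java process):

```bash
# Start a 60-second recording using the built-in "profile" settings
jcmd <pid> JFR.start name=rec settings=profile duration=60s filename=/tmp/rec.jfr

# Check the recording's status, or dump it early
jcmd <pid> JFR.check
jcmd <pid> JFR.dump name=rec filename=/tmp/rec-now.jfr
```

Open the resulting .jfr file in JMC to browse the recorded events.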

Arthas is a great tool for debugging call timings on a single machine, and I have tuned performance more than once with its trace command. In a distributed environment, a distributed call-chain tool such as SkyWalking can also help.
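As an illustration, a typical Arthas session around trace looks like this; the class and method names are hypothetical placeholders:

```bash
# Download and attach Arthas to a running JVM
curl -O https://arthas.aliyun.com/arthas-boot.jar
java -jar arthas-boot.jar

# Inside the Arthas console: trace a method, reporting only
# invocations slower than 10 ms, for at most 5 matches
trace com.example.OrderService createOrder '#cost > 10' -n 5
```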

END

Some might think that the tools we have listed above are too many to learn, but this is not the case.

When you run into a performance problem and try to figure out the specific cause of it, you hate having too few tools and wish for something more powerful.

As the old saying goes: you only regret having read too few books when the time comes to use them, and you never know how hard a thing is until you have been through it.

Our job as engineers is to choose the right combination of these tools and land the blow precisely where it counts.

Of course, if you get it wrong, the consequences are not pretty. Recall the digression at the beginning of this article: never rush into such a thankless task. Performance optimization is a double-edged sword; it can lift you up or drag you down. It all depends on the team you are on.

Xjjdog is a public account that keeps programmers from taking detours. It focuses on infrastructure and Linux. Ten years of architecture, tens of billions of daily requests; it explores the world of high concurrency with you and offers you a different taste.