I remember when I first joined the company: an online user reported a problem and we needed to check the logs. The boss told us to use the "ES system" and sent over a link. I opened it, and there was a big Kibana logo staring at me. I wondered: why does the company call it ES instead of Kibana?

ES is Elasticsearch, a real-time distributed search and analytics engine. Checking logs is just a search, and Kibana is only responsible for the page display, so the name is function-oriented: it names the part that actually does the searching.

Kafka + ELK (Elasticsearch + Logstash + Kibana)

Process

An app's PV (page view) data request is first received and forwarded by Nginx -> buffered in a Kafka cluster message queue -> filtered by Logstash -> stored in ES, which provides search -> displayed in the Kibana web UI.

A brief overview of the frameworks involved:

Nginx

Nginx can act as either an HTTP server or a reverse proxy server. I first used it when I built a UI component showcase page with Flutter Web at my company and deployed it on Nginx as the server.

The phrase "load balancing" comes to mind. Say you have 3 servers providing the same resource; you want each to be hit at roughly the same rate, so that no server is overloaded while others sit idle. So you need someone in the middle to hand out the work: a request arrives at the Nginx server, and Nginx assigns it to one of the backends. It is much like a thread pool in Java: with three worker threads, something has to decide which thread executes each new task, so that every thread has something to do and the work is spread evenly.
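The scheduling idea above can be sketched as a minimal round-robin balancer, which also happens to be Nginx's default upstream strategy; the server names here are made up for illustration:

```kotlin
// A minimal sketch of round-robin load balancing. Each request is handed to
// the next server in turn, so load spreads evenly across the pool.
class RoundRobinBalancer(private val servers: List<String>) {
    private var next = 0

    fun pick(): String {
        val server = servers[next]
        next = (next + 1) % servers.size  // wrap around to the first server
        return server
    }
}

fun main() {
    val balancer = RoundRobinBalancer(listOf("app-1", "app-2", "app-3"))
    repeat(6) { println("request $it -> ${balancer.pick()}") }
    // requests 0..5 go to app-1, app-2, app-3, app-1, app-2, app-3
}
```

Real Nginx also supports weighted and least-connections strategies, but the round-robin default is exactly this rotation.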

Another term in this access path is reverse proxy. First, the forward proxy: using a VPN to get past the firewall is forward proxying; the proxy server accesses the target address on your behalf, so it is the client that is hidden. A reverse proxy hides the servers instead: the client only knows the address of the Nginx server, and has no idea which backend server actually returned the content.

Kafka

Kafka is a publish/subscribe message queue. Externally, Kafka exposes the concept of a topic: producers write messages to a topic, and consumer groups read messages from it.

  1. Producers PUSH messages to the Kafka cluster, and consumer groups PULL messages from the Kafka cluster over long-lived connections.
  2. The Kafka cluster persists messages.
  3. Consumers exist as consumer groups, and a given message on topicA is consumed at most once by the whole consumer group.
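The three points above can be modeled with a toy in-memory topic. This is a sketch of the semantics only, not a real Kafka client; `TinyTopic` and the group names are invented for illustration:

```kotlin
// A toy model of Kafka's topic / consumer-group semantics. The log persists
// every message, and each consumer group tracks its own read offset, so a
// message is delivered at most once per group while different groups each
// see the full stream.
class TinyTopic {
    private val log = mutableListOf<String>()          // the persisted message log
    private val offsets = mutableMapOf<String, Int>()  // groupId -> next offset to read

    fun push(message: String) { log.add(message) }     // producer PUSHes

    fun pull(groupId: String): String? {               // consumer group PULLs
        val offset = offsets.getOrDefault(groupId, 0)
        if (offset >= log.size) return null            // nothing new for this group
        offsets[groupId] = offset + 1
        return log[offset]
    }
}

fun main() {
    val topicA = TinyTopic()
    topicA.push("pv-event-1")
    topicA.push("pv-event-2")
    println(topicA.pull("logstash-group"))  // pv-event-1
    println(topicA.pull("logstash-group"))  // pv-event-2
    println(topicA.pull("logstash-group"))  // null: this group consumed everything
    println(topicA.pull("audit-group"))     // pv-event-1: a second group has its own offset
}
```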

After seeing this topic concept, I finally understood a communication framework our company built between the Native and Flutter sides. The component is called TopicCenter, which I once found hard to understand: topic center?? Having seen Kafka's design, its idea now makes sense: each data type is a topic (String is a topic, int is a topic), and the listener is a Stream.

Redis

Now contrast Kafka with Redis. Kafka stores messages on hard disk, whereas Redis queue data lives mainly in memory. Redis organizes data as key-value structures such as hashes. Publish/subscribe in Redis also has three parts: publishers (producers), channels (similar to topics), and subscribers (consumers).
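The channel model can be sketched as follows; `TinyChannelBus` is a made-up name, and this only imitates the fire-and-forget semantics of Redis PUBLISH/SUBSCRIBE, which is the key contrast with Kafka's persisted log:

```kotlin
// A toy sketch of Redis-style publish/subscribe: channels are fire-and-forget,
// so only subscribers present at publish time receive the message -- unlike
// Kafka, nothing is persisted for later consumers.
class TinyChannelBus {
    private val subscribers = mutableMapOf<String, MutableList<(String) -> Unit>>()

    fun subscribe(channel: String, handler: (String) -> Unit) {
        subscribers.getOrPut(channel) { mutableListOf() }.add(handler)
    }

    fun publish(channel: String, message: String): Int {
        val handlers = subscribers[channel].orEmpty()
        handlers.forEach { it(message) }
        return handlers.size  // real Redis PUBLISH also returns the receiver count
    }
}

fun main() {
    val bus = TinyChannelBus()
    bus.publish("news", "missed")                 // no subscribers yet: message is lost
    bus.subscribe("news") { println("got: $it") }
    bus.publish("news", "hello")                  // delivered to the one subscriber
}
```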



When I was learning the skip list in data structures, it was mentioned that Redis implements its sorted set on top of a skip list, which applies the idea of binary search.
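A minimal skip-list sketch shows that binary-search-like descent. This is simplified for illustration (integer keys only, no deletion), not the full structure under Redis's sorted set:

```kotlin
import kotlin.random.Random

// A minimal skip list. Each node may appear on several levels; a search walks
// right while the next key is smaller, then drops down a level whenever the
// next key overshoots -- giving binary-search-like O(log n) expected time.
class SkipList(private val maxLevel: Int = 8) {
    private class Node(val key: Int, level: Int) {
        val next = arrayOfNulls<Node>(level)
    }

    private val head = Node(Int.MIN_VALUE, maxLevel)

    // Coin flips decide how many levels a new node occupies.
    private fun randomLevel(): Int {
        var level = 1
        while (level < maxLevel && Random.nextBoolean()) level++
        return level
    }

    fun insert(key: Int) {
        val update = arrayOfNulls<Node>(maxLevel)  // last node visited per level
        var node = head
        for (i in maxLevel - 1 downTo 0) {
            while (node.next[i] != null && node.next[i]!!.key < key) node = node.next[i]!!
            update[i] = node
        }
        val newNode = Node(key, randomLevel())
        for (i in newNode.next.indices) {          // splice into each of its levels
            newNode.next[i] = update[i]!!.next[i]
            update[i]!!.next[i] = newNode
        }
    }

    fun contains(key: Int): Boolean {
        var node = head
        for (i in maxLevel - 1 downTo 0) {
            while (node.next[i] != null && node.next[i]!!.key < key) node = node.next[i]!!
        }
        return node.next[0]?.key == key            // bottom level holds every element
    }
}

fun main() {
    val list = SkipList()
    listOf(30, 10, 50, 20).forEach { list.insert(it) }
    println(list.contains(20))  // true
    println(list.contains(40))  // false
}
```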

C/S architecture

Server-client, that is, the client-server (C/S) structure. The C/S structure usually has two layers: the server is responsible for data management, and the client handles interaction with the user. Broadly speaking, any end that provides a service can be called a server; it does not have to be a host at some remote IP address. In Android, the Binder mechanism, a frequent interview topic, is a C/S architecture: AMS and WMS, which provide services, are the server side. Likewise, in Android componentization, the Service Provider Interface (SPI) mechanism is now the mainstream way for modules to call each other: ModuleA exposes ServiceA for ModuleB to call, which can also be understood as a C/S architecture.
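A hypothetical sketch of that SPI idea: the calling module depends only on the interface, and a registry plays the "server" that hands back whatever implementation was registered. `ServiceRegistry`, `ServiceA`, and `queryUser` are invented names for illustration; real projects typically use ServiceLoader- or router-style tooling for this wiring:

```kotlin
// The contract exposed by ModuleA; other modules see only this interface.
interface ServiceA {
    fun queryUser(id: Int): String
}

// The "server" side of the C/S pair: a lookup table from API to implementation.
object ServiceRegistry {
    private val services = mutableMapOf<Class<*>, Any>()

    fun <T : Any> register(api: Class<T>, impl: T) { services[api] = impl }

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> get(api: Class<T>): T = services[api] as T
}

// Inside ModuleA: the concrete implementation, hidden behind the interface.
class ServiceAImpl : ServiceA {
    override fun queryUser(id: Int) = "user-$id"
}

fun main() {
    ServiceRegistry.register(ServiceA::class.java, ServiceAImpl())  // ModuleA registers
    val service = ServiceRegistry.get(ServiceA::class.java)         // ModuleB looks up
    println(service.queryUser(7))  // user-7
}
```

The point of the indirection is that ModuleB never names `ServiceAImpl`, so the implementation module can be swapped or omitted without recompiling callers.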

Flow control/congestion control

Flow control prevents the sender from sending faster than the receiver can process, which would exhaust the receiver's resources.

Congestion control prevents the sender from injecting data faster than the network can carry it; congestion can degrade the performance of the whole network or even bring network communication to a halt.
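The idea TCP uses here, AIMD (additive increase, multiplicative decrease), can be sketched with toy numbers. This illustrates the principle only, not real TCP behavior (which adds slow start, fast recovery, and more):

```kotlin
// AIMD sketch: the congestion window grows by one segment per clean round
// trip, and is halved when a round trip observes packet loss.
fun simulateAimd(rounds: List<Boolean>): List<Int> {
    var cwnd = 1  // congestion window, in segments
    val history = mutableListOf<Int>()
    for (lost in rounds) {
        cwnd = if (lost) maxOf(1, cwnd / 2)  // loss signals congestion: back off hard
               else cwnd + 1                 // no loss: gently probe for more bandwidth
        history.add(cwnd)
    }
    return history
}

fun main() {
    // four clean round trips, one loss, then recovery
    println(simulateAimd(listOf(false, false, false, false, true, false)))
    // [2, 3, 4, 5, 2, 3]
}
```

The sawtooth shape this produces (ramp up, halve, ramp up again) is exactly the classic TCP throughput graph.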

These two concepts also come up frequently in TCP/IP interview questions. In Android development, the related ideas that come to mind are back pressure and buffering, starting with RxJava.

Back pressure: the observable emits messages faster than its operators or subscribers can process them, so the downstream gets blocked or messages pile up.

People say that with Kotlin coroutines, RxJava can be abandoned. Coroutine flows have a buffer operator, which corresponds to the BUFFER backpressure strategy in RxJava.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    var start = 0L
    val time = measureTimeMillis {
        (1..5).asFlow()
            .onStart { start = System.currentTimeMillis() }
            .onEach {
                delay(100)
                println("Emit $it (${System.currentTimeMillis() - start}ms)")
            }
            .buffer()               // emitter no longer waits for the slow collector
            .flowOn(Dispatchers.IO)
            .collect {
                println("Collect $it starts (${System.currentTimeMillis() - start}ms)")
                delay(500)          // a deliberately slow consumer
                println("Collect $it ends (${System.currentTimeMillis() - start}ms)")
            }
    }
    println("Cost $time ms")
}
```

TCP solves flow control mainly with the sliding-window mechanism.

LeetCode has a hard problem built on this sliding-window idea.
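One such hard problem is LeetCode 239, "Sliding Window Maximum". A sketch of the standard deque solution:

```kotlin
// LeetCode 239: for each window of size k, report the maximum. A deque keeps
// indices whose values decrease from front to back, so the current window's
// maximum is always at the front, and every index is pushed and popped at
// most once -- O(n) overall.
fun maxSlidingWindow(nums: IntArray, k: Int): IntArray {
    val deque = ArrayDeque<Int>()  // indices; nums values decrease front -> back
    val result = IntArray(nums.size - k + 1)
    for (i in nums.indices) {
        if (deque.isNotEmpty() && deque.first() <= i - k) deque.removeFirst()          // slid out of window
        while (deque.isNotEmpty() && nums[deque.last()] <= nums[i]) deque.removeLast() // dominated by nums[i]
        deque.addLast(i)
        if (i >= k - 1) result[i - k + 1] = nums[deque.first()]
    }
    return result
}

fun main() {
    println(maxSlidingWindow(intArrayOf(1, 3, -1, -3, 5, 3, 6, 7), 3).toList())
    // [3, 3, 5, 5, 6, 7]
}
```

The connection to TCP is loose but real: both maintain an invariant over a window that slides forward one element at a time instead of recomputing from scratch.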