1. Zero copy: mmap and sendfile. 2. Sequential reads and writes. 3. PageCache. 4. Batch operations. 5. Data compression.
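The first technique above, sendfile-style zero copy, is what the broker relies on when shipping log segment data to consumers: bytes move from the page cache to the destination without passing through user space. A minimal stdlib sketch, using `java.nio`'s `FileChannel.transferTo` (which maps to `sendfile(2)` on Linux); the file name and the in-memory sink are illustrative stand-ins for a log segment and a consumer socket.

```java
import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Copies a whole file to a destination channel. transferTo lets the
    // kernel move the bytes directly (sendfile on Linux), so the data is
    // never copied into a user-space buffer on the heap.
    static long transfer(Path src, WritableByteChannel dest) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ)) {
            long pos = 0, size = in.size();
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, dest);
            }
            return pos;
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("segment", ".log");
        Files.write(src, "hello kafka".getBytes());
        // In the broker the destination would be a socket channel;
        // an in-memory sink stands in for it here.
        java.io.ByteArrayOutputStream sink = new java.io.ByteArrayOutputStream();
        long n = transfer(src, Channels.newChannel(sink));
        System.out.println(n + " bytes transferred");
    }
}
```

When the destination really is a socket, the same call avoids the read-copy-write round trip entirely, which is a large part of why Kafka sustains high throughput on plain spinning disks.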
As many readers requested, we'll take a fun, relaxing interlude on Kafka before moving on to YARN. The message system is the intermediate repository we mentioned above: it buffers data in transit and decouples producers from consumers. To introduce a scenario: log processing for China Mobile, China Unicom, and China Telecom is outsourced for big data analysis. Suppose their logs are now handed over to...
Kafka is a message queue designed to handle large volumes of data, and it is most often used for log processing. Being a message queue, it has the typical characteristics of one; today Kafka serves mainly as a message store and a log store.
Kafka wraps each outgoing message in a ProducerRecord object. A ProducerRecord contains the target topic and the content to send, and may also specify a key and a partition. Before the ProducerRecord is sent, the producer serializes the key and value objects into byte arrays so they can be transmitted over the network. All of...
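The two steps just described can be sketched with stdlib Java: serialize the key and value to byte arrays, then derive a partition from the key. Note the hedge: Kafka's real default partitioner hashes the serialized key bytes with murmur2; this sketch uses `Arrays.hashCode` purely for illustration, and the record contents are made up.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RecordPrep {
    // Equivalent of Kafka's StringSerializer: UTF-8 bytes of the string.
    static byte[] serialize(String s) {
        return s == null ? null : s.getBytes(StandardCharsets.UTF_8);
    }

    // Pick a partition from the key bytes. The real default partitioner
    // uses murmur2; a plain array hash is used here only to show the idea.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        // mask off the sign bit so the index is always non-negative
        return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        byte[] key = serialize("user-42");            // hypothetical key
        byte[] value = serialize("clicked checkout"); // hypothetical value
        int partition = partitionFor(key, 6);
        System.out.println("key bytes: " + key.length
                + ", value bytes: " + value.length
                + ", partition: " + partition);
    }
}
```

Because the partition is derived from the key, all records with the same key land in the same partition, which is what gives Kafka per-key ordering.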
While learning Kafka, I found that many tutorials have fallen out of date as Kafka has evolved, so here is a survey of Kafka's version history. 0.8.x: 0.8.0 was the first release after Kafka became an Apache top-level project
Following a message's flow from the producer that sends it, to storage on the Kafka broker, and on to consumption by the consumer, this article analyzes Kafka's mechanisms for not losing messages across these three modules, along with some practical configuration parameters.
How does Kafka avoid losing messages? The producer, broker, and consumer all have work to do: the producer must ensure sent messages arrive, the broker must ensure stored messages are not lost, and the consumer
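The division of labor above maps onto a handful of well-known settings. A minimal sketch of a no-loss configuration; note these properties span three scopes (producer config, broker/topic config, consumer config), and the exact values are illustrative, not one-size-fits-all.

```properties
# Producer: wait for all in-sync replicas to acknowledge, and retry on failure.
acks=all
retries=2147483647
enable.idempotence=true

# Broker/topic: keep multiple replicas and require a quorum of them in sync.
# (replication.factor is set when the topic is created.)
replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false

# Consumer: commit offsets manually, and only after processing succeeds.
enable.auto.commit=false
```

With `acks=all` plus `min.insync.replicas=2`, a write is acknowledged only once at least two replicas have it, so the loss of any single broker cannot lose an acknowledged message.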
First, let's take a look at what Kafka is. In a big data architecture, data collection and transmission is a very important link
This parameter specifies the broker addresses. You do not need to list every broker, because Kafka will discover the rest of the cluster from the brokers given; still, fill in several addresses in case one broker is down. Interceptors process a message before it is sent, and this happens before the serializer and the partitioner run. Kafka allows you to configure interceptors...
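The two settings discussed above look like this in a producer configuration. A minimal sketch: the host names are placeholders, and `com.example.AuditInterceptor` is a hypothetical class name standing in for your own `ProducerInterceptor` implementation.

```properties
# A subset of the brokers is enough; the client discovers the rest of the
# cluster from them. Listing several guards against one being down at startup.
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092

# Interceptors run before the serializer and the partitioner.
interceptor.classes=com.example.AuditInterceptor
```

Multiple interceptors can be listed, comma-separated, and they run in the order given.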
You know from the previous article that when a producer client sends a message, the message passes through a series of modules, such as interceptors, serializers, and partitioners, before being written to the cache. So today we will look at the design of Kafka's producer client cache architecture.
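The cache in question is Kafka's RecordAccumulator: it keeps a deque of batches per partition, appends each record to the open batch at the tail, and lets the sender thread drain full batches from the head. A toy stdlib sketch of that shape; the real accumulator works on sized byte buffers, has time-based triggers (`linger.ms`), and is thread-safe, none of which is modeled here.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniAccumulator {
    private final int batchSize; // records per batch (real Kafka: bytes)
    private final Map<Integer, Deque<List<String>>> batches = new HashMap<>();

    MiniAccumulator(int batchSize) { this.batchSize = batchSize; }

    // Append to the open batch at the tail of the partition's deque,
    // starting a new batch when the current one is full.
    void append(int partition, String record) {
        Deque<List<String>> dq =
                batches.computeIfAbsent(partition, p -> new ArrayDeque<>());
        if (dq.isEmpty() || dq.peekLast().size() >= batchSize) {
            dq.addLast(new ArrayList<>());
        }
        dq.peekLast().add(record);
    }

    // Drain full batches from the head, as the sender thread would.
    List<List<String>> drainFull(int partition) {
        List<List<String>> ready = new ArrayList<>();
        Deque<List<String>> dq =
                batches.getOrDefault(partition, new ArrayDeque<>());
        while (!dq.isEmpty() && dq.peekFirst().size() >= batchSize) {
            ready.add(dq.pollFirst());
        }
        return ready;
    }

    public static void main(String[] args) {
        MiniAccumulator acc = new MiniAccumulator(2);
        acc.append(0, "m1");
        acc.append(0, "m2");
        acc.append(0, "m3");
        System.out.println(acc.drainFull(0)); // only the full batch drains
    }
}
```

Batching per partition is what lets the producer amortize network round trips and compress whole batches at once, tying this cache back to points 4 and 5 of the performance list.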