Moment For Technology

Why is Kafka high-performance?

1. Zero copy: mmap and sendfile. 2. Sequential reads and writes. 3. PageCache. 4. Batch operations. 5. Data compression.
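As a minimal illustration of the first and third points, the sketch below reads a "log segment" through `mmap`, which maps the OS page cache directly into the process's address space instead of copying bytes into a user-space buffer. This is only an analogy in Python; Kafka itself is implemented on the JVM and uses `mmap` for index files and `sendfile` for transferring log data to consumers.

```python
import mmap
import os
import tempfile

# Write a tiny "log segment" to disk (file name is illustrative only).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"offset-0:hello\noffset-1:world\n")
    path = f.name

# Map the file read-only: bytes are served from the page cache with no
# extra copy into a user-space read() buffer.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        first_line = mm.readline()  # reads up to and including the newline

os.unlink(path)
```

Sequential appends to such a segment are what let the OS keep hot data in the PageCache and write it back in large, ordered chunks.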

Interlude: Kafka in plain language

As requested by many readers, here is a fun and relaxing Kafka interlude before the Yarn series. A message system is what we earlier called the "repository": it acts as a cache in the intermediate stage and enables decoupling. To introduce a scenario: the log processing of China Mobile, China Unicom, and China Telecom is outsourced for big data analysis, and suppose their logs are now handed over to...

Kafka series (3) -- the Kafka producer in detail

Kafka wraps each outgoing message in a ProducerRecord object. The ProducerRecord contains the target topic and the content to be sent, and may also specify a key and a partition. Before sending the ProducerRecord, the producer serializes the key and value objects into byte arrays so that they can be transmitted over the network. All of...
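The record and serialization step described above can be sketched as follows. The class and function names mirror Kafka's Java API (`ProducerRecord`, a `StringSerializer`) but are illustrative Python stand-ins, not the real client.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ProducerRecord:
    # Mirrors the fields described above: topic and value are required;
    # key and partition are optional.
    topic: str
    value: str
    key: Optional[str] = None
    partition: Optional[int] = None

def serialize(record: ProducerRecord) -> Tuple[Optional[bytes], bytes]:
    """Turn key and value into byte arrays for network transmission,
    roughly what a StringSerializer does."""
    key_bytes = record.key.encode("utf-8") if record.key is not None else None
    value_bytes = record.value.encode("utf-8")
    return key_bytes, value_bytes

record = ProducerRecord(topic="orders", value="order-42 created", key="order-42")
k, v = serialize(record)
```

Only after this serialization step do the partitioner and the network sender see the record.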

Kafka version evolution

While learning Kafka, I found that the knowledge in many tutorials is out of date as Kafka versions have evolved, so here is an overview of Kafka's version history. 0.8.x: 0.8.0 was the first release after Kafka became an Apache top-level project...

How do you answer the Kafka message-loss question?

This article analyzes Kafka's mechanisms for not losing messages across the three stages of a message's journey: the producer sending it, the Kafka brokers storing it, and the consumer consuming it. It also covers some practical configuration parameters.
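The configuration side of that analysis can be summarized as below. The dictionary keys are real Kafka configuration names; the dictionaries themselves are only an illustration of the recommended values, not a working client.

```python
# Producer side: do not consider a send successful until all in-sync
# replicas have the message, and retry transient failures.
no_loss_producer_config = {
    "acks": "all",               # wait for all in-sync replicas to acknowledge
    "retries": 2147483647,       # retry transient send failures
    "enable.idempotence": True,  # avoid duplicates introduced by retries
}

# Broker side (counterparts to the above): keep enough replicas alive.
no_loss_broker_config = {
    "replication.factor": 3,
    "min.insync.replicas": 2,
    "unclean.leader.election.enable": False,  # never elect an out-of-sync leader
}

# Consumer side: commit offsets only after the message is fully processed.
no_loss_consumer_config = {
    "enable.auto.commit": False,
}
```

Note the interplay: `acks=all` only protects you if `min.insync.replicas` is greater than 1, otherwise the leader alone can acknowledge.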

How to choose between Kafka and Flume

First, let's take a look at what Kafka is. In the architecture of big data, data collection and transmission is a very important link...

Kafka series (3.1) -- basic use of the producer client

This parameter is the broker address list. You do not need to list every broker, since Kafka will discover the other brokers from the ones you supply; however, filling in multiple broker addresses guards against one broker dying. Interceptors process a message before it is sent, and this happens before the serializer and the partitioner run. Kafka allows you to configure interceptors...
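The pipeline order described above (interceptors first, then serializer, then partitioner) can be sketched like this. All names here are illustrative Python stand-ins, and `crc32` is used only as a stand-in for Kafka's actual hash partitioner (which uses murmur2).

```python
import zlib

# More than one address, in case a broker dies before discovery happens.
bootstrap_servers = "broker1:9092,broker2:9092"

def tagging_interceptor(value: str) -> str:
    # An interceptor sees and may modify the record before serialization.
    return "intercepted:" + value

def send(value: str, interceptors, num_partitions: int = 3):
    for interceptor in interceptors:                   # 1. interceptors run first
        value = interceptor(value)
    payload = value.encode("utf-8")                    # 2. then the serializer
    partition = zlib.crc32(payload) % num_partitions   # 3. then the partitioner
    return payload, partition

payload, partition = send("hello", [tagging_interceptor])
```

Because interceptors run before serialization, they operate on the original objects, not on byte arrays.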

Introduction to the caching mechanism of the Kafka producer

As you know from the previous article, when the producer client sends a message, it passes through a series of modules such as interceptors, serializers, and partitioners before the message is written to the cache. So today we will look at the design of the Kafka producer client's cache architecture.
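A toy model of that cache is sketched below: records accumulate in a per-partition batch and are only drained to the sender once a batch is full. This is illustrative only; Kafka's real accumulator (RecordAccumulator in the Java client) also tracks batch bytes, linger time, and a memory pool.

```python
from collections import defaultdict

class BatchAccumulator:
    """Toy model of the producer's message cache: records are appended to a
    per-partition batch and handed off only once the batch fills up."""

    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.batches = defaultdict(list)  # partition -> open batch
        self.sent = []  # stands in for the network sender thread's queue

    def append(self, partition: int, record: bytes) -> None:
        batch = self.batches[partition]
        batch.append(record)
        if len(batch) >= self.batch_size:  # batch full: drain it to the sender
            self.sent.append((partition, list(batch)))
            batch.clear()

acc = BatchAccumulator(batch_size=2)
acc.append(0, b"m1")
acc.append(0, b"m2")  # fills the batch for partition 0, so it is drained
acc.append(1, b"m3")  # partition 1's batch stays open, waiting for more records
```

Batching per partition is what lets the producer send fewer, larger network requests, which is also item 4 in the performance list at the top of this page.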

About: Moment For Technology is a global community where thousands of techies from across the globe hang out! Passionate technologists, be they gadget freaks, tech enthusiasts, coders, technopreneurs, or CIOs, you will find them all here.