Moment For Technology

Spark Streaming speculation mechanism: when an interviewer asks what problems you ran into, answer like this to show your level!

Background: Liu has been working through the big data development interview questions on Niuke.com in the evenings recently, and keeps seeing one frequent question: what problems have you encountered in the learning process? This is a difficult question to answer. If my answer is too simple, will the interviewer think my level is too low? So what should I say

Writing Book Notes on Digital Transformation - Day 13

Today I was attending the graduate school's opening ceremony, so I only had time to jot down a couple of sentences... When faced with this question, if I were Party B I would of course be delighted: this is a clear business opportunity, so hurry up and land the project, find out Party A's budget, and put forward a good proposal. But I know the question is genuinely hard to answer, because how it is asked reflects how well it is understood. By asking it this way, he shows that his understanding of digital transformation is still rudimentary, still...

Cloud native data warehouse solution -- 1

Source: Zhihu. Author: Mingqi. Link: https://www.zhihu.com/question/20623931/answer/750367153. Copyright belongs to the author. For commercial reprint, please contact the author for authorization; for non...

Big data containerization: have the leading players tasted the benefits?

Demand for big data has always ridden the wave of this era. However, because big data systems are so complex, the industry was once filled with voices declaring big data dead, and those voices were amplified when MapR was acquired by HPE and Cloudera's stock kept sinking. In fact, the need for big data has always been there; it is the traditional way of building big data systems that needs to be rethought. And containers depend on exactly this...

Spark - Broadcast variables & Accumulators

Typically, when a function is passed to a Spark operation (such as map or reduce), it is executed on a remote cluster node and works on separate copies of all the variables used in the function. These variables are copied to each machine, and updates to them on the remote machines are not propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. The benefit of broadcast variables is not...
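To make the contrast concrete, here is a minimal Scala sketch (assuming a local SparkSession; the country-code lookup and the names countryCodes and unknownCodes are illustrative, not from the article). It broadcasts a small read-only map to the executors once and uses a long accumulator that tasks only add to and the driver reads back:

import org.apache.spark.sql.SparkSession

object BroadcastAndAccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("BroadcastAndAccumulatorSketch")
      .master("local[*]") // assumption: local mode, just for illustration
      .getOrCreate()
    val sc = spark.sparkContext

    // Small lookup table shipped to each executor once via a broadcast variable,
    // instead of being serialized into every task closure.
    val countryCodes = Map("CN" -> "China", "US" -> "United States", "DE" -> "Germany")
    val bcCodes = sc.broadcast(countryCodes)

    // Accumulator: tasks only add to it; the driver reads the aggregated value.
    val unknown = sc.longAccumulator("unknownCodes")

    val result = sc.parallelize(Seq("CN", "US", "FR", "DE"))
      .map(code => bcCodes.value.getOrElse(code, { unknown.add(1); "Unknown" }))
      .collect()

    println(result.mkString(", "))              // China, United States, Unknown, Germany
    println(s"unknown codes: ${unknown.value}") // 1

    spark.stop()
  }
}

The point of the pairing: the broadcast value is read-only on the executors and copied once per node, while the accumulator is effectively write-only inside tasks and only its merged total is visible on the driver (and, as with any accumulator updated inside a transformation, retried tasks may add to it more than once).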

Hadoop

The Hadoop architecture consists of the distributed storage system HDFS, the distributed computing framework MapReduce, and the resource scheduling engine Yarn. The historical evolution of Hadoop is as follows: when the system failed, the original data would be lost. HDFS editlog file: the editlog on the NameNode records information about the HDF...
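As a small, hedged illustration (the NameNode address hdfs://localhost:9000 and the demo path are assumptions chosen here, not taken from the article), the sketch below uses the Hadoop FileSystem API to create a file on HDFS; every such namespace change is what the NameNode writes to its editlog so the namespace can be replayed after a failure:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsEditlogDemo {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Assumption: a NameNode is running and reachable at this address.
    conf.set("fs.defaultFS", "hdfs://localhost:9000")

    val fs = FileSystem.get(conf)

    // Creating (or renaming, deleting) a file is a namespace mutation; the
    // NameNode records it in its editlog before acknowledging the call.
    val out = fs.create(new Path("/tmp/editlog-demo.txt"))
    out.writeBytes("hello hdfs\n")
    out.close()

    fs.close()
  }
}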

Kafka interview questions from big tech companies

Kafka's default maximum message size is 1 MB. However, in our application scenario it is common for a message to be larger than 1 MB. If Kafka is not configured accordingly, producers cannot push such messages to Kafka, and consumers cannot consume them from Kafka.
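A hedged producer-side sketch in Scala (the broker address localhost:9092, the topic name large-messages, and the 5 MB limit are illustrative assumptions, not from the article): the producer's max.request.size has to be raised above the 1 MB default, and the broker's message.max.bytes (or topic-level max.message.bytes) plus the consumer's max.partition.fetch.bytes / fetch.max.bytes would need to be raised to match.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object LargeMessageProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumption: local broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    // Raise the producer-side cap above the 1 MB default (here ~5 MB, chosen for illustration).
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, (5 * 1024 * 1024).toString)

    val producer = new KafkaProducer[String, String](props)
    // A ~2 MB value that would be rejected with the default settings.
    producer.send(new ProducerRecord[String, String]("large-messages", "key", "x" * (2 * 1024 * 1024)))
    producer.close()
  }
}

Raising only the producer setting is not enough: if the broker-side limit stays at its default, the broker will still reject the oversized record.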
