Moment For Technology

Interview diary at first- and second-tier Internet companies (Dewu round)

Posted on Sept. 23, 2022, 9:50 a.m. by Lagan Datta
Category: The back-end Tag: The interview

To introduce myself

A: Introduce yourself, focusing on your technical strengths


A: Briefly describe your role and responsibilities


The cache

A: How do you design your cache?

B: Mainly an in-memory cache.

A: Tell me about your eviction policy

B: LRU; we use Guava Cache

A: If you were asked to design an LRU, how would you do it?

B: I would base it on LinkedHashMap, that is, a hash table combined with a linked list

A: Why do you use linked lists?

B: Because insertion happens at the head or tail. Adding and deleting nodes in a linked list is cheap, since only the pointers of the neighboring nodes need to change
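The LinkedHashMap-based design described above can be sketched as follows. This is a minimal illustration, not the interviewee's actual code: `accessOrder=true` makes the map move an entry to the tail of its internal linked list on every access, and overriding `removeEldestEntry` evicts the least-recently-used entry at the head once capacity is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap: accessOrder=true reorders entries on
// access, and removeEldestEntry evicts the head when over capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // third argument: accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}

public class LruDemo {
    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so it becomes most-recently used
        cache.put("c", 3); // evicts "b", the least-recently used entry
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.keySet());         // [a, c]
    }
}
```

A hand-rolled version would pair a `HashMap` with an explicit doubly linked list, which is exactly what LinkedHashMap maintains internally.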

A: How do you update cached data when the data source changes?

B: We write to the data source first and then invalidate the cache. Each server holds a local in-memory cache, and we drop the stale entries

A: What about a data source shared by multiple servers? How do you keep the in-memory cache on each machine in sync?

B: Let me see...

B: I came up with a stupid solution that might not be optimal.

A: Tell me about it

B: Use a message queue to notify every server when the data source is updated. Publish and subscribe.

A: Well, that's more or less our plan

B: If this is the case, I might consider redesigning the cache. It doesn't feel like the optimal solution.
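The publish/subscribe invalidation idea above can be sketched with an in-process stand-in for a message-queue topic (the class and method names here are illustrative, not any real MQ client API): each "server" subscribes its local cache, and an update to the shared data source publishes an invalidation message that makes every subscriber drop the affected entry.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy stand-in for an MQ topic: subscribers are local caches, and a publish
// fans the invalidation out to all of them.
class InvalidationTopic {
    private final List<Map<String, String>> localCaches = new CopyOnWriteArrayList<>();

    void subscribe(Map<String, String> localCache) {
        localCaches.add(localCache);
    }

    void publishInvalidate(String key) {
        for (Map<String, String> cache : localCaches) {
            cache.remove(key); // each server evicts; it reloads on next read
        }
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        InvalidationTopic topic = new InvalidationTopic();
        Map<String, String> serverA = new ConcurrentHashMap<>();
        Map<String, String> serverB = new ConcurrentHashMap<>();
        topic.subscribe(serverA);
        topic.subscribe(serverB);

        serverA.put("user:1", "Alice");
        serverB.put("user:1", "Alice");

        // Data source updated -> publish invalidation to all servers.
        topic.publishInvalidate("user:1");
        System.out.println(serverA.containsKey("user:1")); // false
        System.out.println(serverB.containsKey("user:1")); // false
    }
}
```

In production the topic would be a real broker (Kafka, RocketMQ, Redis pub/sub, etc.), and each server's consumer would evict from its own in-memory cache on receipt.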

A: What do you do about timeouts (expiration)?

B: You mean the Guava cache timeout? Our eviction is mainly LRU; the timeout is set very long, and I haven't studied its specific implementation.

A: What if you were to design it? What would you do?

B: I would follow Redis's timeout implementation, which is lazy: on each read, check whether the entry has expired and return null if it has, and also use a background thread to periodically clean up expired entries.

A: Well, we're lazy, too
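The lazy-expiration scheme B describes can be sketched like this (a minimal illustration, not Redis's actual implementation): each entry carries an expiry timestamp, reads check it and return null for expired entries, and a `sweep` method shows what a periodic background cleaner would do on each tick.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lazy expiration: entries store an expiry timestamp; reads evict expired
// entries on the spot, and sweep() is what a background thread would run.
class ExpiringCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();

    void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            map.remove(key); // lazily evict on read
            return null;
        }
        return e.value;
    }

    // One tick of the periodic background cleanup.
    void sweep() {
        long now = System.currentTimeMillis();
        map.entrySet().removeIf(en -> now >= en.getValue().expiresAtMillis);
    }
}

public class ExpiryDemo {
    public static void main(String[] args) throws InterruptedException {
        ExpiringCache<String, String> cache = new ExpiringCache<>();
        cache.put("k", "v", 50);
        System.out.println(cache.get("k")); // v
        Thread.sleep(80);
        System.out.println(cache.get("k")); // null (expired lazily)
    }
}
```

Redis combines both halves the same way: expired keys are deleted when touched, plus an active cycle that samples keys with TTLs in the background.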

Database and table sharding

A: Let's talk about sharding. Do you split databases or tables now?

B: No. Our current system has very strict latency requirements; requests from the C side never hit the database and are served mostly from memory. The databases we do have serve the B side

A: Ok, let me set up a scenario for you: we have a user table sharded by UID, but we want to do paged queries on a time field. Any solutions?

B: I ran into a similar scenario working on a game. We had per-server user tables, but players on different servers needed to be able to add each other as friends. Our approach was to extract a global summary table with fewer fields and query that table when adding friends. I think that could work here. If it doesn't, you can just scan every shard and merge the results in memory.

A: Well, that's pretty much the mainstream approach now
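B's fallback, scanning the shards and merging in memory, is the scatter-gather pattern. A minimal sketch under illustrative data (the record fields, shard layout, and page size are all assumptions): ask every shard for its own first `pageSize` rows ordered by time, merge the partial results, then take the global page.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Scatter-gather paging over UID-sharded user tables: each shard contributes
// its local top rows by time, and the results are merged in memory.
public class ShardPagingDemo {
    record User(long uid, long createdAt) {}

    static List<User> pageByTime(List<List<User>> shards, int pageSize) {
        List<User> merged = new ArrayList<>();
        for (List<User> shard : shards) {
            // Each shard returns its own first pageSize rows sorted by time.
            shard.stream()
                 .sorted(Comparator.comparingLong(User::createdAt))
                 .limit(pageSize)
                 .forEach(merged::add);
        }
        merged.sort(Comparator.comparingLong(User::createdAt));
        return merged.subList(0, Math.min(pageSize, merged.size()));
    }

    public static void main(String[] args) {
        List<List<User>> shards = List.of(
            List.of(new User(1, 300), new User(3, 100)),
            List.of(new User(2, 200), new User(4, 50)));
        // Global first page of size 2, ordered by createdAt: uid 4, then uid 3.
        for (User u : pageByTime(shards, 2)) {
            System.out.println(u.uid());
        }
    }
}
```

Note this simple form is only exact for the first page; deep pages require each shard to return `offset + pageSize` rows (or cursor-based paging), which is why a global summary table is often the cleaner option.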

A: The IDs in each of our sub-tables are basically guaranteed to be increasing, but not once they are merged across tables... (I didn't quite catch the question)

B: Skip it. I haven't thought of that yet

A: Ok, that's ok



(I can't quite remember the rest)

A: Ok. Do you have any questions for me?

B: Can you tell me a bit about your current projects and team?


B: What are the difficulties in your current work? What would my main responsibilities be if I joined?

A: The main pressure now still comes from user numbers hitting record highs, and the demands of upgrading and iterating on the system

B: Ok, that's about it

A: When would it be convenient for you to come in for the next round?

B: ...
