Last time, we talked about arrays and linked lists, both of which are linear data structures. The essential difference between them is whether their memory is contiguous: accessing an element in a linked list takes O(n) time, but a linked list does not require one contiguous block of memory.

Today we’ll cover two other linear data structures: stacks and queues.

Stack

A stack is also a linear list, but the operations allowed on its elements are more limited than those of arrays and linked lists. A stack only permits elements to be inserted and removed at one end: putting an element on the stack is called pushing, and taking an element off the stack is called popping. You can think of a stack as a badminton tube with one open end; operating on the data is like putting shuttlecocks into the tube and taking them back out. Of course, a stack is an abstract data structure that can be implemented with either an array or a linked list.
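To make this concrete, here is a minimal array-based stack sketch in Java (the class and method names, such as ArrayStack, are ours and purely illustrative):

```java
// A minimal array-based stack sketch: push and pop both happen at the same end.
public class ArrayStack {
    private final int[] data;
    private int count; // number of elements currently on the stack

    public ArrayStack(int capacity) {
        data = new int[capacity];
        count = 0;
    }

    // Push: insert at the "open end" of the tube; O(1)
    public boolean push(int value) {
        if (count == data.length) return false; // stack is full
        data[count++] = value;
        return true;
    }

    // Pop: remove from the same end; O(1)
    public int pop() {
        if (count == 0) throw new RuntimeException("stack is empty");
        return data[--count];
    }
}
```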

Let’s look at some scenarios where stacks are used.

Browser page forward and backward

We often use a browser to browse the web, clicking hyperlinks to jump to new pages. Sometimes we want to go back to the previous page, perhaps back again to the page before that, and then forward again to the page we just left.

Using two stacks allows you to browse the web forward and backward at will.

We push the first page we open onto stack A. When we click a link on that page to open the next one, we push the new page onto stack A as well. When we go back, we pop the current page off stack A and push it onto stack B; the page now on top of stack A is the one we see. If we then click the forward button, we pop a page off stack B and push it back onto stack A. In other words, stack A maintains the pages we have opened in order, and stack B maintains the pages we have backed out of. Click the back button: if stack A still has an earlier page, pop the current one and push it onto stack B; otherwise we cannot go back. Click the forward button: if stack B has a page, pop it and push it onto stack A; otherwise we cannot go forward.
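A small sketch of this two-stack navigation in Java (BrowserHistory and its methods are illustrative names of our own, not a real browser API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Back/forward navigation with two stacks; the top of stackA is the page currently shown.
public class BrowserHistory {
    private final Deque<String> stackA = new ArrayDeque<>(); // pages we have opened
    private final Deque<String> stackB = new ArrayDeque<>(); // pages we have backed out of

    public void open(String url) {
        stackA.push(url);
        stackB.clear(); // opening a new page discards the forward history
    }

    // Back: pop the current page off A and park it on B.
    public void back() {
        if (stackA.size() > 1) {
            stackB.push(stackA.pop());
        }
    }

    // Forward: pop a page off B and put it back on A, making it current again.
    public void forward() {
        if (!stackB.isEmpty()) {
            stackA.push(stackB.pop());
        }
    }

    public String currentPage() {
        return stackA.peek();
    }
}
```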

Function call stack

In almost every programming language, the concept of a function exists. One function can call another, so how are calls between functions implemented? The answer, again, is a stack.

Take Java as an example: after we write Java code, it is compiled into class bytecode and handed to the Java Virtual Machine to execute. Inside the JVM, function calls are managed with a stack, and the space each function occupies on that stack is called a stack frame. The Java entry function is main, which is pushed onto the stack first when execution begins; its stack frame holds the function's local variables, operand stack, dynamic linking, method return address, and so on. When main calls a subfunction add, the virtual machine pushes a frame for add onto the top of the stack. When add finishes executing, the virtual machine pops add's frame off the top of the stack and execution returns to main.
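A trivial example of our own makes the frame pushes and pops easy to picture:

```java
// A trivial call chain to make the stack-frame description concrete.
public class CallStackDemo {
    public static void main(String[] args) {
        // The JVM pushes a frame for main (locals, operand stack, return address, ...).
        int result = add(1, 2);     // a frame for add is pushed on top of main's frame
        System.out.println(result); // add's frame has been popped; we are back in main's frame
    }

    private static int add(int a, int b) {
        int sum = a + b; // a, b and sum live in add's stack frame
        return sum;      // returning pops add's frame off the call stack
    }
}
```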

From this example we can see that understanding basic data structures helps us understand the implementation details of commonly used languages and frameworks, which is very helpful for studying computer science in depth. Beyond the two examples above, stacks are also used for expression evaluation, parenthesis matching, and so on; they are worth exploring if you are interested.
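As a taste of the parenthesis-matching use case, here is a short sketch of our own (the class BracketMatcher and its method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

// Parenthesis matching with a stack: push openers, pop and compare on each closer.
public class BracketMatcher {
    private static final Map<Character, Character> PAIRS = Map.of(')', '(', ']', '[', '}', '{');

    public static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                    // opening bracket: wait for its partner
            } else if (PAIRS.containsKey(c)) {
                char expectedOpener = PAIRS.get(c);
                if (stack.isEmpty() || stack.pop() != expectedOpener) {
                    return false;                 // closer with no matching opener
                }
            }
        }
        return stack.isEmpty();                   // leftover openers also mean a mismatch
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{[()]}")); // true
        System.out.println(isBalanced("([)]"));   // false
    }
}
```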

Queue

A queue, like a stack, is a linear data structure with restricted operations. A stack only allows insertion and deletion at one end, while a queue deletes elements at one end, called dequeuing, and inserts them at the other end, called enqueuing.

Like stacks, queues can be implemented with arrays or linked lists. With an array implementation, we maintain two variables, head and tail, to mark the front and back of the queue. When an element is enqueued, it is placed after the existing elements in the array and tail moves back one position. When an element is dequeued, the first element is removed and head moves back one position. As a result, head and tail both drift toward the end of the array as we enqueue and dequeue. When tail reaches the end of the array we can no longer enqueue, even though there may still be plenty of free space before head, so space is wasted. We could fix this by shifting all the data forward one slot on every dequeue, or by moving all the data to the front of the array when tail hits the end, but frequent moves waste performance.
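A minimal sketch of this naive array queue (names like SimpleArrayQueue are ours, for illustration) makes the drift visible:

```java
// A naive array queue: head and tail only ever move backwards,
// so space at the front of the array is wasted until we shift or wrap.
public class SimpleArrayQueue {
    private final int[] data;
    private int head = 0; // index of the first element
    private int tail = 0; // index one past the last element

    public SimpleArrayQueue(int capacity) {
        data = new int[capacity];
    }

    public boolean enqueue(int value) {
        if (tail == data.length) return false; // "full" even if slots before head are free
        data[tail++] = value;
        return true;
    }

    public Integer dequeue() {
        if (head == tail) return null; // empty
        return data[head++];
    }
}
```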

This leads to the circular queue. The key idea is to treat the array as a ring: when head or the tail pointer reaches the end of the array, it simply wraps back to the beginning. From here on we'll call the tail pointer rear, which fits the circular picture better. Enqueuing and dequeuing is just a matter of moving the head and rear positions around this ring, so we never hit the "can't enqueue while space remains" situation, and we avoid the performance cost of frequently moving data.

But the underlying implementation is still an array, which raises two key questions:

  1. How do we handle the rear position when it reaches the end of the array?
  2. How do we tell whether the queue is empty or full?

The first problem is solved with the remainder (modulo) operation. In a normal queue we simply do rear++; in a circular queue we do rear = (rear + 1) % array.length instead, so the pointer wraps back to 0 when it passes the end of the array. The same applies to head, of course.

For the second problem: in a plain array queue, head == 0 && tail == array.length means the queue is full, and head == tail means it is empty. In a circular queue, head == rear still means empty, but the queue is considered full when (rear + 1) % array.length == head. In other words, we deliberately sacrifice one array slot; otherwise a full queue and an empty queue would both satisfy head == rear and be indistinguishable.
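Putting both answers together, here is a sketch of a circular queue in Java (CircularQueue and its field names are our own, chosen for illustration):

```java
// A circular queue: one slot is deliberately left unused so "full" and "empty" can be told apart.
public class CircularQueue {
    private final int[] data;
    private int head = 0; // index of the first element
    private int rear = 0; // index where the next element will be written

    public CircularQueue(int capacity) {
        data = new int[capacity + 1]; // +1 for the sacrificed slot
    }

    public boolean enqueue(int value) {
        if ((rear + 1) % data.length == head) return false; // full
        data[rear] = value;
        rear = (rear + 1) % data.length; // wrap around instead of falling off the end
        return true;
    }

    public Integer dequeue() {
        if (head == rear) return null; // empty
        int value = data[head];
        head = (head + 1) % data.length;
        return value;
    }
}
```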

Another type of queue is the blocking queue. A blocking queue adds blocking behavior to a queue: when the queue is empty, taking data from it blocks; when the queue is full, inserting data into it blocks. As you can see, a blocking queue follows the logic of the producer-consumer model.
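As a small producer-consumer illustration, the JDK already ships a blocking queue, java.util.concurrent.ArrayBlockingQueue; put() blocks when the queue is full and take() blocks when it is empty:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A tiny producer-consumer demo on top of the JDK's ArrayBlockingQueue.
public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks once two elements are waiting
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks when empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```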

Another common queue is the priority queue, in which the element with the highest priority is dequeued first. Its implementation involves complete binary trees and max-heaps/min-heaps, which we will cover later.
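Even before getting into the heap details, you can already use one: java.util.PriorityQueue in the JDK is a min-heap by default, so poll() always returns the smallest element first.

```java
import java.util.PriorityQueue;

// The JDK's PriorityQueue: elements come out in priority (here, ascending) order.
public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.offer(5);
        pq.offer(1);
        pq.offer(3);
        while (!pq.isEmpty()) {
            System.out.print(pq.poll() + " "); // prints: 1 3 5
        }
    }
}
```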

Conclusion

We can see that stacks and queues are abstract data structures: each defines a rule for how elements may be operated on, and that rule is exactly what matches them to their application scenarios. When learning algorithms and data structures, it is very important to understand their advantages and disadvantages, and then understand the scenarios they suit. There is no best data structure or algorithm, only the data structure or algorithm that fits a particular scenario better.
