The definition of the stack

A stack is a linear table that restricts insert and delete operations to the tail end of the table.

The end of the stack that allows insertion and deletion is called the top, the other end is called the bottom, and a stack that contains no data elements is called an empty stack. A stack is also known as a Last In First Out (LIFO) linear table.

First of all, a stack is a linear table, meaning its elements have a linear predecessor/successor relationship; it is just a special kind of linear table. Its insert and delete operations are performed at the tail end of the linear table, which here means the top of the stack, not the bottom.

Its special feature is that it restricts where the linear table may be inserted into and deleted from: always at the top of the stack. As a result, the bottom of the stack is fixed, and the element pushed first can only sit at the bottom.

The insert operation of a stack is called push (pushing onto the stack). It is similar to loading a bullet into a magazine, as shown in Figure 4-2-2.

The delete operation of a stack is called pop (popping off the stack). It is like a bullet being ejected from the magazine, as shown in Figure 4-2-3.

Variations in push and pop order

Now let me ask you: must the element pushed first be the last one popped? The answer is: not necessarily, it depends. The stack only restricts where the linear table can be inserted and deleted; it does not restrict when elements may go in and out. That is, elements pushed earlier can be popped before all elements have been pushed, as long as whatever is popped is the current top element.

For example, if we push three integer elements 1, 2, and 3 in that order, what pop orders are possible?

Is the pop order 3, 1, 2 possible? Definitely not. For 3 to be popped first, 3 must have been pushed, which means 1 and 2 were already pushed before it; at that point 2 must sit above 1, closer to the top, so the remaining pops can only be 2 and then 1, giving 3, 2, 1. Anything else would contradict pushing in the order 1, 2, 3, so the order 3, 1, 2 cannot happen.

As this simple example shows, even with just three elements there are five possible pop orders, and with more elements the possibilities multiply further. This point must be understood.

The abstract data type of the stack

In theory a stack supports the operations of a linear table, but because of its special nature some of them change. In particular, the insert and delete operations are renamed push and pop, which are easier to understand: just think of bullets going into and out of a magazine.

Since the stack itself is a linear table, the sequential and chained storage structures of linear tables discussed in the previous chapter also apply to stacks.

Sequential storage structure of the stack and its implementation

The sequential storage structure of stacks

The sequential storage of a stack is really a simplification of the sequential storage of a linear table; we call it a sequential stack for short. The linear table is implemented with an array, and the end with subscript 0 is used as the bottom of the stack, because the first element pushed is stored at the bottom and that position changes the least.

We define a top variable to indicate the position of the top element within the array. The number of elements stored in the stack can be at most StackSize. When the stack holds a single element, top equals 0, so the usual convention for an empty stack is top = -1.

If you now have a stack and StackSize is 5, the normal, empty, and full stacks are shown in Figure 4-4-2.
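Putting the above into code, here is a minimal sketch of a sequential stack. The element type, the StackSize value of 5 (matching the example), and the field names are illustrative assumptions, not taken from the text.

```c
#define STACKSIZE 5        /* matches the example where StackSize is 5 */

typedef int ElemType;      /* assume the stack stores int elements */

typedef struct
{
    ElemType data[STACKSIZE];
    int top;               /* index of the top element; -1 means the stack is empty */
} SqStack;
```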

The sequential storage structure of a stack: the push operation
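A hedged sketch of the push operation, reusing the SqStack type sketched above; the function name and the 0/1 return convention are assumptions.

```c
/* Push: uses the SqStack type from the sketch above. Returns 0 on overflow. */
int Push(SqStack *S, ElemType e)
{
    if (S->top == STACKSIZE - 1)   /* stack is already full */
        return 0;
    S->top++;                      /* move the top index up one slot */
    S->data[S->top] = e;           /* store the new element at the top */
    return 1;
}
```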

The sequential storage structure of a stack: the pop operation
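And a matching sketch of the pop operation, under the same assumptions.

```c
/* Pop: removes the top element into *e; returns 0 if the stack is empty. */
int Pop(SqStack *S, ElemType *e)
{
    if (S->top == -1)              /* empty stack */
        return 0;
    *e = S->data[S->top];          /* hand back the top element */
    S->top--;                      /* move the top index down */
    return 1;
}
```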

Neither involves any loop statements, so the time complexity is O(1).

Two stacks sharing space

Sequential storage of a stack is convenient: since elements only go in and out at the top, none of the element-moving problems of inserting into or deleting from a linear table arise. However, it has a major drawback: the size of the array must be decided in advance, and if it turns out to be insufficient, the array has to be enlarged in code, which is troublesome.

For a single stack, all we can do is be thoughtful and choose an array of a suitable size. But for two stacks of the same type, we can make much better use of the pre-allocated space. If we create a separate array for each of the two stacks, it is quite possible that the first stack overflows while the other still has plenty of free space. Why not store the two stacks in one array, with a little trickery?

As shown in Figure 4-5-1, an array has two endpoints, and two stacks have two stack bottoms. Let the bottom of one stack be the beginning of the array (subscript 0) and the bottom of the other be the end of the array (subscript n-1). As elements are pushed onto each stack, the two stack tops extend from the endpoints toward the middle.

The key idea is that the two stacks sit at the two ends of the array and grow toward the middle. top1 and top2 are the pointers to the tops of stack 1 and stack 2; as long as they never meet, both stacks can keep being used. From this we can also deduce that stack 1 is empty when top1 = -1, and stack 2 is empty when top2 = n. When is the shared space full? Consider the extreme cases: if stack 2 is empty, stack 1 is full when top1 equals n-1; conversely, if stack 1 is empty, stack 2 is full when top2 equals 0. More commonly, the space is full when the two tops meet, that is, when the two pointers are one apart: top1 + 1 == top2.

For the push operation of two stacks sharing space, besides the element value to insert, we also need a stackNumber parameter to indicate whether the element goes onto stack 1 or stack 2.
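A minimal sketch of this shared-space push, reusing ElemType and STACKSIZE from the earlier sketch; the struct layout and the names SqDoubleStack and Push2 are assumptions for illustration.

```c
/* Two stacks sharing one array: stack 1 grows from the left, stack 2 from the right. */
typedef struct
{
    ElemType data[STACKSIZE];
    int top1;                      /* top of stack 1, -1 when stack 1 is empty       */
    int top2;                      /* top of stack 2, STACKSIZE when stack 2 is empty */
} SqDoubleStack;

/* Push onto stack 1 or stack 2, chosen by stackNumber (1 or 2). */
int Push2(SqDoubleStack *S, ElemType e, int stackNumber)
{
    if (S->top1 + 1 == S->top2)    /* the two tops meet: the shared array is full */
        return 0;
    if (stackNumber == 1)
        S->data[++S->top1] = e;    /* stack 1 grows from the left end toward the middle  */
    else if (stackNumber == 2)
        S->data[--S->top2] = e;    /* stack 2 grows from the right end toward the middle */
    else
        return 0;                  /* invalid stack number */
    return 1;
}
```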

Such a data structure is typically used when the space requirements of the two stacks are complementary, i.e. one stack grows while the other shrinks. Only then does the shared-storage method make sense; if both stacks keep growing, the shared space will soon overflow.

Of course, this is only a design trick for two stacks of the same data type. If the two stacks hold different data types, this approach does not solve the problem better; it only makes it more complex. Be aware of this premise.

Chained storage structure of the stack and its implementation

The chained storage structure of stacks

The chained storage structure of a stack is called a linked stack for short.

Think about it: a stack only inserts and deletes at the top. Should the top of the stack be at the head or the tail of the linked list? Since a singly linked list has a head pointer and a stack needs a top-of-stack pointer, why not combine the two? It is best to put the top of the stack at the head of the singly linked list (see Figure 4-6-1). In addition, since the top of the stack is already at the head, the head node commonly used in singly linked lists serves no purpose here, so a linked stack usually does not need one.

For a linked stack there is basically no such thing as a full stack, unless memory is exhausted entirely, in which case the operating system is already on the verge of crashing, well before the stack overflows. As for the empty stack: an empty linked list is defined by its head pointer being NULL, so an empty linked stack is simply top = NULL.
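A minimal sketch of the node and stack types this describes, reusing the ElemType assumption from earlier; the names StackNode, LinkStackPtr, and LinkStack are illustrative.

```c
/* A linked stack: each node points to the node beneath it,
   and top is the head of the list (NULL when the stack is empty). */
typedef struct StackNode
{
    ElemType data;
    struct StackNode *next;        /* the node below this one */
} StackNode, *LinkStackPtr;

typedef struct
{
    LinkStackPtr top;              /* points to the top node, NULL if empty */
    int count;                     /* number of elements on the stack */
} LinkStack;
```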

The chained storage structure of a stack: the push operation

For the push operation of the linked stack, suppose the new node holding element value e is s, and top is the top-of-stack pointer, as shown in Figure 4-6-2.
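The original code is not reproduced here; the following is a minimal sketch under the assumption of the LinkStack types sketched above.

```c
#include <stdlib.h>

/* Push for the linked stack: the new node s becomes the new top,
   and its next field points at the old top. */
int PushLink(LinkStack *S, ElemType e)
{
    LinkStackPtr s = (LinkStackPtr)malloc(sizeof(StackNode));
    if (s == NULL)
        return 0;                  /* no memory available */
    s->data = e;
    s->next = S->top;              /* hang the old top below the new node */
    S->top = s;                    /* the new node is now the top */
    S->count++;
    return 1;
}
```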

The chained storage structure of a stack: the pop operation

Suppose a variable p is used to hold the top node that is about to be deleted; move the top pointer down one node and finally release p, as shown in Figure 4-6-3.
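A matching sketch of this pop, under the same assumptions as the push above.

```c
/* Pop for the linked stack: p holds the node being removed so it can be
   freed after the top pointer moves down to the next node. */
int PopLink(LinkStack *S, ElemType *e)
{
    LinkStackPtr p;
    if (S->top == NULL)            /* empty stack */
        return 0;
    p = S->top;                    /* remember the node to delete */
    *e = p->data;
    S->top = p->next;              /* the next node becomes the new top */
    free(p);                       /* release the removed node */
    S->count--;
    return 1;
}
```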

The push and pop operations of the linked stack are both very simple, with no loops, so the time complexity of each is O(1).

Comparing the sequential stack and the linked stack: their time complexity is the same, O(1). For space, the sequential stack needs a fixed length determined in advance, which may waste memory, but it has the advantage of convenient access by position; the linked stack requires a pointer field for every element, which adds some memory overhead, but the stack length is unlimited. So the trade-off is the same as the one discussed for linear tables: if the amount of data on the stack fluctuates unpredictably, sometimes very small and sometimes very large, a linked stack is preferable; if the fluctuation is within a manageable range, a sequential stack is better.

The role of the stack

Introducing the stack simplifies programming: it separates different levels of concern, narrows the scope of thinking, and lets us focus on the core of the problem we want to solve. Using something like an array directly, by contrast, forces us to pay attention to details such as incrementing and decrementing subscripts, which obscures the essence of the problem.

That is why many high-level languages, such as Java and C#, provide an encapsulated Stack structure: you can use its push and pop methods directly without worrying about the implementation details, which is very convenient.

Stack application: recursion

Definition of recursion

A recursive function is a function that calls itself directly or indirectly through a series of call statements.


Of course, the worst thing about writing a recursive program is falling into infinite recursion, so every recursive definition must have at least one condition under which the recursion stops, that is, a case that no longer calls itself but simply returns a value. For example, there will always be a call where i < 2, at which point the function can just execute a return i statement instead of recursing further.
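The stopping condition i < 2 with return i matches the classic Fibonacci example; a minimal sketch of that kind of recursion is shown below (an assumption for illustration, not code from the text).

```c
/* Fibonacci by recursion: stops recursing once i < 2 and simply returns i. */
int Fbi(int i)
{
    if (i < 2)
        return i;                  /* the stopping condition: no further recursion */
    return Fbi(i - 1) + Fbi(i - 2);
}
```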

The difference between iteration and recursion is that iteration uses a loop structure while recursion uses a selection structure. Recursion makes the structure of a program clearer, more concise, and easier to understand, reducing the time needed to read the code. But a large number of recursive calls creates copies of the function's state, which can consume a lot of time and memory; iteration needs neither the repeated function calls nor the extra memory. We should therefore choose between the two implementations according to the situation.

What’s the relationship between recursion and stack?

We have already seen that a recursive execution has a forward (descending) phase and a fallback (returning) phase, and that the order in which the recursion returns is the reverse of the order in which it descended. During the fallback, certain actions may need to be performed, including recovering data that was stored on the way down.

This pattern of storing some data and later restoring it in reverse order for use clearly matches a stack, so it is not surprising that compilers use stacks to implement recursion.

In simple terms, during the forward phase, the local variables, parameter values, and return address of the function are pushed onto the stack for each level of recursion; during the fallback phase, the local variables, parameter values, and return address at the top of the stack are popped to restore the state of the caller and resume execution of the rest of its code.

Stack application: evaluating arithmetic (four-operation) expressions

Definition of postfix (Reverse Polish) notation

Let's take an example. The postfix representation of “9 + (3 - 1) × 3 + 10 ÷ 2” is “9 3 1 - 3 * + 10 2 / +”. Such expressions are called postfix because every operator comes after the numbers it operates on, and obviously no parentheses are needed.

Result of postfix expression calculation

To show the benefit of postfix expressions, let's first see how a computer uses one to compute the final result, 20. The rule: scan the expression from left to right, number by number and symbol by symbol; when a number is encountered, push it onto the stack, and when an operator is encountered, pop the two numbers at the top of the stack and perform the operation. (A code sketch follows the step-by-step walkthrough below.)

  1. Initialize an empty stack, which will be used to push and pop the numbers being computed, as shown on the left of Figure 4-9-1.
  2. The first three digits in the postfix expression are all numbers, so 9, 3, and 1 are pushed, as shown on the right side of figure 4-9-1.

  1. The next item is “-”, so 1 is popped as the subtrahend and 3 is popped as the minuend; 3 - 1 = 2, and 2 is pushed onto the stack, as shown in the left picture of Figure 4-9-2.
  2. The number 3 is then pushed, as shown on the right side of Figure 4-9-2.
  3. Next comes “*”, so 3 and 2 are popped off the stack, 2 is multiplied by 3 to get 6, and 6 is pushed onto the stack, as shown on the left side of Figure 4-9-3.
  4. Then comes “+”, so 6 and 9 are popped, 9 plus 6 gives 15, and 15 is pushed onto the stack, as shown on the right side of Figure 4-9-3.

  1. Then the numbers 10 and 2 are pushed, as shown on the left of Figure 4-9-4.
  2. Next is the symbol “/”, so the 2 at the top of the stack is popped along with 10; 10 divided by 2 gives 5, and 5 is pushed onto the stack, as shown on the right in Figure 4-9-4.

  1. The last item is the symbol “+”, so 15 and 5 are popped and added to get 20, and 20 is pushed onto the stack, as shown on the left side of Figure 4-9-5.
  2. Finally the result 20 is popped and the stack becomes empty, as shown on the right side of Figure 4-9-5.
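To make the rule above concrete, here is a minimal sketch that evaluates a postfix expression with an integer stack. The space-separated token format, the function name EvalPostfix, and the fixed-size working stack are assumptions for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Scan the postfix expression token by token: push numbers,
   and on an operator pop two numbers, compute, and push the result. */
int EvalPostfix(const char *expr)
{
    int stack[100], top = -1;          /* a small working stack of operands */
    char buf[256];
    strncpy(buf, expr, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (char *tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
        if (strcmp(tok, "+") == 0 || strcmp(tok, "-") == 0 ||
            strcmp(tok, "*") == 0 || strcmp(tok, "/") == 0) {
            int b = stack[top--];      /* the right operand is popped first */
            int a = stack[top--];
            switch (tok[0]) {
                case '+': stack[++top] = a + b; break;
                case '-': stack[++top] = a - b; break;
                case '*': stack[++top] = a * b; break;
                case '/': stack[++top] = a / b; break;
            }
        } else {
            stack[++top] = atoi(tok);  /* a number: push it */
        }
    }
    return stack[top];                 /* the final result is the only element left */
}

int main(void)
{
    printf("%d\n", EvalPostfix("9 3 1 - 3 * + 10 2 / +"));  /* prints 20 */
    return 0;
}
```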

Converting an infix expression to a postfix expression (details can be found by searching Baidu)

Definition of queues

A queue is a linear list that allows only inserts at one end and deletes at the other. A queue is a First In First Out linear table, or FIFO for short. The end that allows insertion is called the tail, and the end that allows deletion is called the head.

The abstract data type of the queue

Similarly, the queue has operations like those of a linear table; the difference is that data may be inserted only at the tail of the queue and deleted only at the head.

Circular queue

A linear table can use sequential storage or chained storage, and a stack, being a linear table, has both. Similarly, the queue, as a special linear table, also has these two storage modes. Let's look first at the sequential storage structure of the queue.

The shortcomings of sequential storage for queues

Suppose a queue has n elements. A sequentially stored queue then needs an array larger than n, with all the elements of the queue stored in the first n cells of the array and the cell with subscript 0 as the head of the queue. The enqueue operation simply appends an element at the tail of the queue without moving anything, so its time complexity is O(1), as shown in Figure 4-12-1.

Unlike the stack, the queue removes elements at its head, that is, the position with subscript 0. That means all remaining elements must move forward so that the head of the queue, the position with subscript 0, is never empty, and the time complexity of dequeuing becomes O(n), as shown in Figure 4-12-2.

If we drop the requirement that the elements of the queue must occupy the first n cells of the array, the performance of dequeuing improves greatly: the head of the queue no longer has to sit at subscript 0, as shown in Figure 4-12-3.

To avoid the inconvenience of the head and tail coinciding when the queue has only one element, we introduce two pointers: front points to the head element, and rear points to the position after the tail element. Then, when front equals rear, the queue is empty, not a queue with one element.

Assume an array of length 5. In the initial state the queue is empty, as shown on the left of Figure 4-12-4, with front and rear both pointing to subscript 0. After enqueueing a1, a2, a3, and a4, front still points to subscript 0 while rear points to subscript 4, as shown on the right of Figure 4-12-4.

When a1 and a2 leave the queue, front points to subscript 2 while rear stays put, as shown in the left picture of Figure 4-12-5. When a5 enters the queue, front stays put, but rear would move past the end of the array. Huh? Past the end of the array, to where exactly? As shown on the right of Figure 4-12-5.

The problem does not stop there. Suppose the queue may hold at most 5 elements in total. If we enqueue again now, since the last cell of the array is already occupied, adding to the end would cause an out-of-bounds error. Yet the cells with subscripts 0 and 1 are actually still free. We call this phenomenon “false overflow”.

Circular queue definition

We call a sequential storage structure for the queue in which the array logically joins head to tail a circular queue.

Continuing the previous example, the rear in Figure 4-12-5 can be changed to point to subscript 0, so that the pointer no longer points to an undefined position, as shown in Figure 4-12-6.

Then a6 is enqueued at subscript 0, and rear moves to subscript 1, as shown on the left of Figure 4-12-7. If a7 is enqueued as well, rear coincides with front, both pointing at subscript 2, as shown in the right figure of Figure 4-12-7.

  • Now we have a problem again: we said the queue is empty when front equals rear, but now the queue is full and front also equals rear. How do we tell whether the queue is empty or full?
  • One way is to add a flag variable: if front == rear and flag == 0, the queue is empty; if front == rear and flag == 1, the queue is full.
  • The second way is to keep the empty condition front == rear and change the full condition so that one element’s worth of space is reserved. That is, the queue is considered full while one free cell still remains in the array. For example, in Figure 4-12-8 we already regard the queue as full; in other words, we do not allow the situation on the right of Figure 4-12-7 to occur.

We will focus on the second approach. Since rear may be either larger or smaller than front, the two may be only one position apart yet actually a full lap apart. So if the maximum size of the queue is QueueSize, the queue is full when (rear + 1) % QueueSize == front. In this example QueueSize = 5; with front = 0 and rear = 4, (4 + 1) % 5 = 0, so the queue is full. In the right picture of Figure 4-12-8, front = 2 and rear = 1, and (1 + 1) % 5 = 2, so that queue is also full. For Figure 4-12-6, front = 2 and rear = 0, and (0 + 1) % 5 = 1 ≠ 2, so that queue is not full.

In addition, when rear > front (right of Figure 4-12-4 and left of Figure 4-12-5), the length of the queue is rear - front. But when rear < front (left of Figure 4-12-6 and Figure 4-12-7), the length splits into two segments, one of length QueueSize - front and one of length rear - 0, which together give rear - front + QueueSize. Therefore the general formula for the queue length is (rear - front + QueueSize) % QueueSize.
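Putting the empty/full conditions and the length formula together, here is a minimal sketch of a circular queue, reusing the ElemType assumption; QUEUESIZE of 5, the struct layout, and the function names are illustrative.

```c
#define QUEUESIZE 5                   /* matches QueueSize = 5 in the examples */

typedef struct
{
    ElemType data[QUEUESIZE];
    int front;                        /* index of the head element            */
    int rear;                         /* index of the position after the tail */
} SqQueue;

/* Queue length: (rear - front + QueueSize) % QueueSize */
int QueueLength(SqQueue *Q)
{
    return (Q->rear - Q->front + QUEUESIZE) % QUEUESIZE;
}

/* Enqueue fails when only one free cell remains: (rear + 1) % QueueSize == front. */
int EnQueue(SqQueue *Q, ElemType e)
{
    if ((Q->rear + 1) % QUEUESIZE == Q->front)
        return 0;                     /* queue is considered full */
    Q->data[Q->rear] = e;
    Q->rear = (Q->rear + 1) % QUEUESIZE;   /* rear moves on, wrapping back to 0 if needed */
    return 1;
}
```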

Chained storage structure of the queue and its implementation

The chained storage structure of a queue is really just the singly linked list of a linear table, except that elements may only enter at the tail and leave at the head; we call it a linked queue for short. For convenience of operation, we let the front pointer of the queue point to the head node of the linked list and the rear pointer point to the terminal node, as shown in Figure 4-13-1.

When the queue is empty, front and rear point to the head node, as shown in Figure 4-13-2.
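A minimal sketch of the node and queue types this describes, again reusing ElemType; the names QNode, QueuePtr, and LinkQueue are assumptions.

```c
/* A linked queue: a singly linked list with a head node; front and rear
   both point to that head node when the queue is empty. */
typedef struct QNode
{
    ElemType data;
    struct QNode *next;
} QNode, *QueuePtr;

typedef struct
{
    QueuePtr front;                   /* points to the head node            */
    QueuePtr rear;                    /* points to the terminal (last) node */
} LinkQueue;
```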

The chained storage structure of a queue: the enqueue operation

The enqueue operation inserts a node at the tail of the linked list, as shown in Figure 4-13-3.
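A hedged sketch of this enqueue, using the LinkQueue types sketched above.

```c
#include <stdlib.h>

/* Enqueue: insert the new node after the current rear, then move rear to it. */
int EnQueueLink(LinkQueue *Q, ElemType e)
{
    QueuePtr s = (QueuePtr)malloc(sizeof(QNode));
    if (s == NULL)
        return 0;                     /* no memory available */
    s->data = e;
    s->next = NULL;
    Q->rear->next = s;                /* hook the new node onto the old tail */
    Q->rear = s;                      /* the new node becomes the tail       */
    return 1;
}
```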

The chained storage structure of a queue: the dequeue operation

The dequeue operation removes the successor of the head node and makes the head node's successor the node after it. If that successor is the only element in the list besides the head node, rear is pointed back at the head node, as shown in Figure 4-13-4.
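And a matching sketch of the dequeue, under the same assumptions.

```c
/* Dequeue: remove the successor of the head node; if it was the only element,
   point rear back at the head node. */
int DeQueueLink(LinkQueue *Q, ElemType *e)
{
    QueuePtr p;
    if (Q->front == Q->rear)          /* empty queue */
        return 0;
    p = Q->front->next;               /* the node leaving the queue  */
    *e = p->data;
    Q->front->next = p->next;         /* the head node now skips past p */
    if (Q->rear == p)                 /* p was the last element        */
        Q->rear = Q->front;
    free(p);
    return 1;
}
```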

Comparing the circular queue with the linked queue, we can consider two aspects. In time, both perform their basic operations in constant time, i.e. O(1); however, the circular queue requests its space once in advance and never releases it during use, while the linked queue spends some time allocating and freeing a node on each operation, so with frequent enqueues and dequeues there is still a slight difference. In space, the circular queue must have a fixed length, so there are the issues of tracking the number of elements and of wasted space; the linked queue does not have this problem, and although each element needs a pointer field, which adds some space overhead, that is acceptable, so the linked queue is more flexible in space. In general, when the maximum length of the queue can be determined, a circular queue is recommended; when the queue length cannot be predicted, a linked queue is recommended.

Conclusion

Stacks and queues are both special linear tables that restrict where insert and delete operations may occur.

A stack is a linear table that limits insert and delete operations to the tail end of the table.

A queue is a linear list that allows only inserts at one end and deletes at the other.

Both can be implemented with the sequential storage structure of a linear table, but both then suffer from some of the drawbacks of sequential storage.

So they each have their own tricks to solve this problem.

For stacks, if you have two stacks of the same data type, the two ends of the array can serve as the two stack bottoms so that the stacks share one array, which makes the best use of the array's space. For queues, to avoid moving data on insertion and deletion in the array, the circular queue is introduced so that the head and tail can wrap around the array; this removes the time cost of moving data and reduces the time complexity of insertion and deletion from O(n) to O(1). Both can also be implemented with a chained storage structure, which works basically the same way as for linear tables, as shown in Figure 4-14-1.

Reference: Big Talk Data Structure

Thank you for taking the time to read to the end! : D

A back-end developer, quietly moving bricks and writing code. If you found this helpful, you are welcome to follow my public account.