
  • 1.1 Origins of data structures
  • 1.2 Basic concepts and terminology
  • 1.3 Logical structure and physical structure
    • 1.3.1 Logical structure
    • 1.3.2 Physical structure
  • 2.1 What is an algorithm?
  • 2.2 Characteristics of the algorithm
  • 2.3 Requirements for algorithm design
  • 2.4 Algorithm analysis
    • 2.4.1 Time complexity
    • 2.4.2 Space complexity


In the blink of an eye, more than a year has passed since I graduated from university. The four foundational courses of a computer science degree, “Data Structures”, “Computer Networks”, “Computer Organization”, and “Operating Systems”, I studied carelessly at the time, and by now I have all but given the material back. As the saying goes, “when the foundation is not firm, the earth shakes”: the laziness of the past has to be repaid now.



1. Data structure

1.1. Origins of data structures

In 1968, Professor Donald E. Knuth of the United States published the first volume of The Art of Computer Programming, Fundamental Algorithms, which systematically described the logical structures and storage structures of data together with the operations on them, and established data structures as a course of study.

In the early 1970s, large-scale programs appeared and software began to become relatively independent. Structured programming became the core of programming methodology, and people paid more and more attention to data structures, coming to believe that the essence of programming is choosing a good structure for the problem at hand and designing a good algorithm.

Programming = data structure + algorithm


1.2 basic concepts and terminology

  • Data: the symbols that describe objective things; the objects that can be operated on by a computer; the set of symbols that a computer can recognize and accept as input.
  • Data element: a basic unit of data, usually treated as a whole in the computer; also called a record.
  • Data item: the smallest unit making up a data element; a data element can consist of several data items.
  • Data object: a collection of data elements of the same nature; a subset of data.
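To make these terms concrete, here is a minimal Java sketch (the Student class and its fields are illustrative assumptions, not from any textbook definition): a Student object is a data element (record), each of its fields is a data item, and a collection of students is a data object.

        // A data element (record): treated as a single unit in the computer
        class Student {
            String name;    // data item
            int id;         // data item
            int score;      // data item
        }

        // A data object: a collection of data elements of the same nature
        Student[] students = new Student[100];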

Let’s focus on what data structures are:

Structure, simply understood, means relationship: molecular structure, for example, is the arrangement of the atoms that make up a molecule. Strictly speaking, structure refers to the way components fit together and are arranged. In the real world, data elements are not independent of one another but have specific relationships, and these relationships are what we call structure.

So what is a data structure?

Data structure: A collection of data elements that have one or more specific relationships with each other.

In a computer, data elements are not isolated and disorderly; they form intrinsically related collections. A data structure is the way data is organized according to the one or more specific relationships among its elements.


1.3 logical structure and physical structure

We said there are one or more specific relationships between data elements; let's look at what those relationships refer to.

Depending on the point of view, data structures are divided into logical structures and physical structures.


1.3.1 Logical structure

The logical structure reflects the logical relationships between data elements. In other words, it is the mathematical model abstracted from the objects being operated on, which is why it is also called the abstract structure. When we speak of a “data structure” in everyday usage, we generally mean the logical structure.

The common logical structures are the following:

  • Set structure: the data elements belong to the same set but have no other relationship with one another.
  • Linear structure: the data elements have a one-to-one relationship.
  • Tree structure: the data elements have a one-to-many hierarchical relationship.
  • Graph structure: the data elements have a many-to-many relationship.
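As a rough Java sketch of how these relationships can be modeled (the type names are illustrative assumptions):

        // Set structure: elements belong to one collection but are otherwise unrelated
        java.util.Set<Integer> set;

        // Linear structure: one-to-one; each element has at most one
        // predecessor and one successor
        int[] linear = {1, 2, 3, 4};

        // Tree structure: one-to-many; each node has one parent and
        // any number of children
        class TreeNode {
            int value;
            java.util.List<TreeNode> children;
        }

        // Graph structure: many-to-many; adjacency lists record
        // arbitrary connections between vertices
        java.util.Map<Integer, java.util.List<Integer>> adjacency;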





1.3.2 Physical structure

The physical structure is the form in which data is stored in the computer, so it is also called the storage structure. It includes both the representation of the data elements and the representation of the relationships among them.

There are two storage structures for data elements: sequential storage and chained storage.

  • Sequential storage

A sequential storage structure stores data elements in storage units with contiguous addresses; the logical relationships and physical relationships among the data are consistent.
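For example, a Java array is a typical sequential storage structure (a minimal sketch): the elements occupy contiguous storage units, so the logical order of the elements matches their physical order.

        int[] a = new int[4];   // four storage units with contiguous addresses
        a[0] = 10;              // logically adjacent elements are also
        a[1] = 20;              // physically adjacent, so element i can be
        a[2] = 30;              // located directly from the base address
        a[3] = 40;              // plus an offset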



  • Chained storage

A chained storage structure stores data elements in an arbitrary set of storage units, which may or may not be contiguous. The physical locations of the data elements do not reflect their logical relationships.
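A minimal sketch of chained storage in Java (the Node class is an illustrative assumption): each node stores a data element together with a reference to its successor, so the logical order is recorded by the references rather than by where the nodes sit in memory.

        class Node {
            int data;    // representation of the data element
            Node next;   // representation of the relationship
        }

        // The logical order 10 -> 20 -> 30 is held entirely by the
        // next references; the nodes themselves may be scattered in memory.
        Node third = new Node();
        third.data = 30;
        Node second = new Node();
        second.data = 20;
        second.next = third;
        Node first = new Node();
        first.data = 10;
        first.next = second;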



2. Algorithm

2.1 What is an algorithm?

Let's look at the definition of an algorithm:

An algorithm is a description of the steps required to solve a particular problem, and is represented in a computer as a finite sequence of instructions, each of which represents one or more operations.

In layman’s terms, an algorithm is a method or process for solving a problem. A problem can be solved by multiple algorithms, and a given algorithm solves a particular problem.

So what do data structures have to do with algorithms? Algorithms and data structures are closely related.

A data structure cannot be chosen without understanding the algorithmic requirements imposed on the data; conversely, the design and selection of an algorithm rest on the data structure as their foundation.

That is, data structures provide tools for algorithms. Algorithms use these tools to implement optimal solutions to problems.

In computing, an algorithm is essentially a set of operations applied, according to the needs of the problem, on top of the logical structure and storage structure of the data.

Because the logical and storage structures of the data are not unique, the description of an algorithm may not be unique either. Even with the same logical and storage structures, the algorithm may differ. By learning data structures, a programmer can choose the better algorithm.


2.2 Characteristics of the algorithm

An algorithm has five basic characteristics: input, output, finiteness, determinism, and feasibility.

  • Input/output

Input and output are relatively easy to understand. An algorithm has zero or more inputs. While input parameters are necessary for the vast majority of algorithms, in rare cases, such as code that simply prints “Hello world!”, no input parameters are needed, so an algorithm may have zero inputs. An algorithm has at least one output: it must produce output, for an algorithm with no output would be meaningless.

  • Finiteness

An algorithm terminates automatically, without an infinite loop, after a finite number of steps, and each step completes in an acceptable amount of time. In practice, code sometimes falls into an infinite loop, which violates finiteness.

  • Determinism

Each step of the algorithm has a definite meaning, with no ambiguity. Under the same conditions there is only one execution path: the same input always produces the same output.

  • Feasibility

Each step of the algorithm must be feasible; that is, each step can be carried out through a finite number of basic operations. Feasibility means the algorithm can be implemented as a program that runs and produces correct results.

2.3 Requirements for algorithm design

A good algorithm should be designed with the following goals in mind: correctness, readability, robustness, efficiency, and low storage requirements.


2.4 Algorithm analysis

Algorithms are tools for solving computational problems, and the quality of an algorithm directly affects the efficiency of problem solving. To improve that efficiency, a solution to a specific problem needs not only a concrete description of the algorithm but also a way to measure the algorithm's quality.

Algorithm analysis mainly means judging the merits of an algorithm, which is generally done from two aspects: time and space. In practice, analysis tends to focus on time. Of course, judging an algorithm cannot be reduced to measuring time or space alone; both should be weighed according to the actual situation.

There are two main methods for algorithm measurement:

  • Post-hoc statistical method

The disadvantage of this method is that a program implementing the algorithm must first be written and run, and the measured time depends on the computer's software, hardware, and other environmental factors, which easily masks the merits of the algorithm itself. For this reason, people usually use the second method.
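As a quick sketch of what post-hoc measurement looks like (illustrative only), one simply times a run; the result varies with the machine and its load, which is exactly the weakness described above.

        long start = System.nanoTime();
        // ... run the algorithm being measured ...
        long elapsed = System.nanoTime() - start;
        System.out.println("elapsed nanoseconds: " + elapsed);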

  • Pre-analysis estimation method

The time it takes for a program written in a high-level language to run on a computer depends on the following factors:

  • the strategy chosen by the algorithm
  • the size of the problem
  • the language in which the program is written
  • the quality of the machine code produced by the compiler
  • the speed at which the machine executes instructions

Setting the hardware factors aside, an algorithm's efficiency depends on, and is a function of, the size of the problem.

In practice, the two methods can be combined. In general, we call the size of the input to the problem the problem size, represented by an integer n. For example, the size of a matrix multiplication problem is the order of the matrices, while the size of a graph problem is the number of vertices or edges in the graph.

Each step of an algorithm may contain several statements, and the frequency of a statement is the number of times it is executed. The time complexity of an algorithm refers to the algorithm's time consumption.

Put simply, taking the execution time of one basic statement as the unit, the total number of times the algorithm's basic statements execute is the algorithm's time consumption, which is a function of the problem size n. As n approaches infinity, we call the order of magnitude of this function the asymptotic time complexity of the algorithm.
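As an illustrative sketch of counting statement frequencies, consider the following fragment:

        int sum = 0;                     // frequency: 1
        for (int i = 0; i < n; i++) {    // i = 0 once; i < n tested n + 1 times; i++ n times
            sum = sum + i;               // frequency: n
        }

Adding up every statement execution gives T(n) = 1 + 1 + (n + 1) + n + n = 3n + 3, a function of the problem size n; its order of magnitude, O(n), is the asymptotic time complexity.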


2.4.1 Time complexity

Definition of algorithm time complexity:

In algorithm analysis, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) changes with n and determine its order of magnitude. The time complexity of the algorithm, that is, its time measure, is written T(n) = O(f(n)). It means that as the problem size n increases, the growth rate of the algorithm's execution time is the same as the growth rate of f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short, where f(n) is some function of the problem size n.
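For a concrete illustration: if an algorithm executes T(n) = 3n² + 2n + 1 statements, then as n grows the 3n² term dominates, so we drop the lower-order terms and the constant coefficient and write T(n) = O(n²).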

Time complexity is written with a capital O, as in O(f(n)); this is called big O notation.

Common orders of growth, from smallest to largest: O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2^n).





2.4.1.1 Constant order

Constant order is the time complexity of sequential (straight-line) structures.

For example, the following algorithm:

        int sum = 0, n = 100;        // executes once
        sum = sum + n / 2;           // executes once
        System.out.println(sum);     // executes once

The number of statement executions in this algorithm is f(n) = 3, and it does not grow as n changes, so the time complexity of this algorithm is O(1).

Besides sequential structures, branching structures also execute a constant number of statements whether the condition is true or false, so the time complexity of a pure branching structure (one not nested inside a loop) is also O(1).
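For example (an illustrative sketch), exactly one branch of the following runs, regardless of how large n is:

        if (n % 2 == 0) {
            System.out.println("even");
        } else {
            System.out.println("odd");
        }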


2.4.1.2 Linear order

Loop structures are considerably more involved. To determine the order of an algorithm, we often need to work out how many times a particular statement or group of statements runs, so the key to analyzing an algorithm's complexity is analyzing the behavior of its loops.

        for (int i = 0; i < n; i++) {
            // a time complexity O(1) operation
            System.out.println(n);
        }

The time complexity of this algorithm is O(n), because the loop body executes n times.


2.4.1.3 Logarithmic order

Look at the following algorithm; at first glance its time complexity appears to be O(n).

        int n = 10;
        int count = 1;
        while (count < n) {
            count = count * 2;
            // a time complexity O(1) operation
            System.out.println(count);
        }

Every time count is multiplied by 2, it gets one step closer to n. In other words, the loop exits once 2^x ≥ n, that is, after x = log₂n iterations, so the time complexity of this loop is O(log n).
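For instance, with n = 10 the loop body runs for count = 1, 2, 4, 8, four times in all, and exits once count reaches 16; four is exactly log₂10 rounded up to the next integer.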


2.4.1.4 Square order

Take a look at the following double loop: an O(m) loop nested inside an O(n) loop. The inner body executes m × n times, so the complexity is O(m × n); when m equals n, that is O(n²).

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                // a time complexity O(1) operation
                System.out.println(i * j);
            }
        }

Extending this is just as easy to understand: three nested loops give O(n³), and k nested loops give O(n^k), as the sketch below shows.
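A sketch of the cubic case (illustrative):

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    // a time complexity O(1) operation
                    System.out.println(i + j + k);
                }
            }
        }

The innermost body runs n × n × n times, hence O(n³).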


2.4.2 Space complexity

Space complexity measures the amount of storage space an algorithm temporarily occupies while it runs. Like time complexity, it describes a trend and is denoted S(n).

In general, when a program executes on a machine, it needs storage for its own instructions, constants, variables, and input data, as well as storage units for operating on the data. If the space occupied by the input data depends only on the problem itself and has nothing to do with the algorithm, then we only need to analyze the auxiliary units the algorithm requires. If the auxiliary space required by the algorithm is constant with respect to the input data, the algorithm is said to work in place, and its space complexity is O(1).
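A minimal Java sketch of the distinction (the method names are illustrative assumptions):

        // O(1) auxiliary space: a fixed number of extra variables,
        // no matter how large the input array is
        static int sum(int[] a) {
            int total = 0;
            for (int i = 0; i < a.length; i++) {
                total += a[i];
            }
            return total;
        }

        // O(n) auxiliary space: the reversed copy grows with the input size
        static int[] reversedCopy(int[] a) {
            int[] b = new int[a.length];
            for (int i = 0; i < a.length; i++) {
                b[i] = a[a.length - 1 - i];
            }
            return b;
        }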

In general, we use “time complexity” to refer to run-time requirements and “space complexity” to refer to space requirements. When “complexity” is used without a qualifier, it usually means time complexity.












