
Algorithm complexity

Complexity concept

Running a program consumes a certain amount of time resources and space (memory) resources. The quality of an algorithm is therefore generally measured along two dimensions, time and space, namely time complexity and space complexity.

Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures how much extra space an algorithm needs while running. In the early days of computing, storage capacity was very small, so space complexity mattered a great deal. Nowadays computers have so much storage that the space complexity of an algorithm usually no longer needs special attention.

Time complexity

Time complexity definition

The time complexity of an algorithm is a function that quantitatively describes its running time. In theory, the exact time an algorithm will take cannot be computed in advance, and even measuring the actual running time of a program cannot describe the quality of the algorithm, because the result depends on factors such as the performance of the machine.

Measuring on a real machine is also cumbersome, hence the analytical method of time complexity. Time complexity counts not wall-clock time but the number of times the basic operations in the algorithm are executed. Finding a mathematical expression relating a basic statement's execution count to the problem size $N$ is how the time complexity of an algorithm is calculated.

Count the number of times the ++count statement in the following code is executed.

void Func(int N) {
	int count = 0;
	for (int i = 0; i < N; ++i) {
		for (int j = 0; j < N; ++j) {
			++count;            // executes N * N times
		}
	}
	for (int k = 0; k < 2 * N; ++k) {
		++count;                // executes 2 * N times
	}
	int M = 10;
	while (M--) {
		++count;                // executes 10 times
		printf("hehe\n");
	}
}

In this case, $F(N) = N^2 + 2N + 10$.

Big O asymptotic notation

When $N=10$, $F(N)=130$; when $N=100$, $F(N)=10210$; when $N=1000$, $F(N)=1002010$.

As you can see, such a precise function is of little practical use; an approximate count suffices. As the input grows, the lower-order terms contribute less and less, and keeping only the largest term essentially determines the result: at $N=1000$, the $N^2$ term alone accounts for $1000000$ of the $1002010$ executions, about 99.8%. To calculate and describe the complexity of an algorithm more conveniently, big O asymptotic notation is used.

Rules for deriving the big O order

Big O notation: a mathematical notation used to describe the asymptotic behavior of a function.

  1. If the execution count is a constant independent of $N$, it is represented by the constant 1.
  2. Keep only the highest-order term of the count function, and omit its coefficient.
  3. If the algorithm has a best case and a worst case, focus on the worst case.

By these rules, the big O order of the time complexity of the algorithm above is $O(N^2)$.

Example 1
void Func1(int N, int M) {
	int count = 0;
	for (int k = 0; k < M; ++k) {
		++count;
	}
	for (int k = 0; k < N; ++k) {
		++count;
	}
	printf("%d\n", count);
}

The time complexity here is $O(N+M)$. If $N \gg M$, the complexity is $O(N)$; conversely, it is $O(M)$; if the two are of similar magnitude, it can be written as $O(N)$ or $O(M)$. If $M$ and $N$ are known constants, the complexity is $O(1)$.

$N$ is usually used for the unknown problem size, but $M$, $K$, and so on are also used.

Example 2
void Func2(int N) {
	int count = 0;
	for (int k = 0; k < 100; ++k) {
		++count;
	}
	printf("%d\n", count);
}

In this case, the number of runs is a constant, so no matter how large the constant is, the time complexity is $O(1)$.

Example 3
void BubbleSort(int* a, int n) {
	assert(a);
	for (size_t end = n; end > 0; --end) {
		int exchange = 0;
		for (size_t i = 1; i < end; ++i) {
			if (a[i - 1] > a[i]) {
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}

Some algorithms have a best case and a worst case. We usually take the worst case for complexity calculations; only rarely is the average case considered.

Bubble sort is one of them, and we analyze its worst case. Adjacent numbers are compared pairwise: the first pass makes at most $N-1$ swaps, the second pass $N-2$, ..., the $i$-th pass $N-i$. The exact count is therefore $F(N) = (N-1) + (N-2) + \dots + (N-i) + \dots + 1 + 0 = N(N-1)/2$, so the complexity is $O(N^2)$.

You can also count comparisons instead of swaps: each pass makes one comparison per adjacent pair whether or not it swaps, which again sums to $N(N-1)/2$; the small difference does not affect the complexity. Note also that thanks to the exchange flag, the best case (an already sorted array) finishes after a single pass, i.e. in $O(N)$.

Example 4
int BinarySearch(int* a, int n, int x) {
	assert(a);
	int begin = 0;
	int end = n - 1;                            // search the closed interval [begin, end]
	while (begin <= end) {
		int mid = begin + ((end - begin) >> 1); // avoids overflow of (begin + end) / 2
		if (a[mid] < x)
			begin = mid + 1;
		else if (a[mid] > x)
			end = mid - 1;
		else
			return mid;
	}
	return -1;
}

The complexity of an algorithm depends not only on the number of loops but also on the idea behind the algorithm. Binary search likewise has a best case and a worst case, and we again analyze the worst case (the target is not found).

Since half of the remaining interval is discarded each time, the process can be pictured with the "paper folding" analogy: fold a piece of paper in half and discard half, fold again and discard again, until after $x$ folds only the target piece is left. Then $2^x = N$, so $x = \log_2 N$.

This is logarithmic order, $O(\log_2 N)$, often written with the base omitted as $O(\log N)$. Binary search's logarithmic order makes it a very good algorithm: since $\log_2(1000000) \approx 20$, a million numbers require only about 20 probes.
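
As a quick sanity check of the folding argument, here is a small sketch (an illustrative helper of my own, not from the original article): it counts how many halvings it takes to exhaust a range of size $N$.

#include <stdio.h>

// Number of times n can be halved before reaching 0; this matches the
// maximum number of probes binary search makes on a range of size n.
int halvingSteps(int n) {
	int steps = 0;
	while (n > 0) {
		n /= 2;      // discard half of the remaining range
		++steps;
	}
	return steps;
}

int main(void) {
	printf("%d\n", halvingSteps(1000000)); // prints 20
	return 0;
}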

Example 5
long Factorial(size_t N)
{
	if (0 == N)
		return 1;
	return Factorial(N - 1) * N;   // exactly one recursive call per frame
}

The complexity of a recursive algorithm depends on two factors: the depth of the recursion and the number of recursive calls per recursion.

The recursion depth is the total number of recursion levels, i.e. the number of stack frames created. The number of recursive calls per recursion is the number of calls to itself within the recursive function.

The depth here is $O(N)$ and each call makes $1$ recursive call, so the complexity is $O(N)$.

Example 6
long Fibonacci(size_t N)
{
    if (N < 3)
        return 1;
    return Fibonacci(N - 1) + Fibonacci(N - 2);   // two recursive calls per frame
}

The call tree of the Fibonacci recursion is shaped like a full binary tree with a part missing at the bottom.

Let the missing part be $X$. The exact count is $F(N) = 2^0 + 2^1 + 2^2 + \dots + 2^{N-1} - X = 2^N - 1 - X$. Since $X$ is much smaller than $2^N - 1$, the complexity of the algorithm is $O(2^N)$.
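
To see the exponential blow-up concretely, here is a small instrumented sketch (my addition; the global counter is purely illustrative). The total number of calls works out to $2 \cdot Fib(N) - 1$, which grows roughly like $2^N$.

#include <stdio.h>

static long long calls = 0;   // counts every invocation

long long Fib(size_t N) {
    ++calls;
    if (N < 3)
        return 1;
    return Fib(N - 1) + Fib(N - 2);
}

int main(void) {
    printf("Fib(40) = %lld\n", Fib(40)); // 102334155
    printf("calls   = %lld\n", calls);   // 204668309
    return 0;
}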

 

Spatial complexity

Spatial complexity definition

Space complexity is likewise a mathematical expression, measuring the amount of extra space an algorithm temporarily occupies while running. As with time, space complexity does not count the exact number of bytes occupied but the number of variables the algorithm creates. The derivation rules are similar to those for time complexity, and big O asymptotic notation is used.

Example 1
void BubbleSort(int* a, int n) {
	assert(a);
	for (size_t end = n; end > 0; --end) {
		int exchange = 0;
		for (size_t i = 1; i < end; ++i) {
			if (a[i - 1] > a[i]) {
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}

The bubble sort algorithm creates only a constant number of variables, so the space complexity is $O(1)$.

Although the variables end and i are created anew on each iteration, from a memory perspective the space occupied does not grow: the same stack slot is typically reused, as the sketch below illustrates.
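
A minimal sketch (my addition; the exact behavior is compiler-dependent, so treat the output as illustrative only): printing the address of a loop-local variable usually shows the same address on every iteration.

#include <stdio.h>

int main(void) {
	for (int i = 0; i < 3; ++i) {
		int local = i;
		// Typically prints the same address three times: the slot is reused.
		printf("iteration %d: %p\n", i, (void*)&local);
	}
	return 0;
}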

Example 2
long long* Fibonacci(size_t n) {
    if (n == 0)
        return NULL;
    long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (size_t i = 2; i <= n; ++i) {
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    }
    return fibArray;
}

Counting the loop variable and the Fibonacci array, on the order of $N$ variables are opened up. Therefore, the space complexity is $O(N)$.

Example 3
long long Factorial(size_t N)
{
    if (N == 0)
        return 1;
    return Factorial(N - 1) * N;
}

Each recursive call creates a stack frame, and each frame contains a constant number of variables. Recursing to depth $N$ creates $N$ frames, so the space complexity of the recursion is $O(N)$.

The space complexity of a recursion is tied to its depth.
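
For contrast, a minimal iterative sketch (my addition, not from the original) computes the same factorial in a single frame, i.e. with $O(1)$ space:

long long FactorialIter(size_t N) {
    long long result = 1;
    // One stack frame and a constant number of variables: O(1) extra space.
    for (size_t i = 2; i <= N; ++i)
        result *= i;
    return result;
}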

Example 4
long Fibonacci(size_t N)
{
    if (N < 3)
        return 1;
    return Fibonacci(N - 1) + Fibonacci(N - 2);
}

Each Fibonacci call likewise creates only a constant number of variables. As the call tree shows, the recursion contains many repeated subproblems, and those repeated stack frames are created and destroyed over and over. But space, unlike time, is reusable: at any moment only one chain of frames, at most $Fib(N), Fib(N-1), \dots, Fib(1)$, is alive at once, and destroyed frames free their space for reuse. Therefore, the space complexity is $O(N)$.

 

Common complexity

The complexity of common algorithms is shown in the following table, increasing from top to bottom:

| Name | Big O notation | Example |
| --- | --- | --- |
| Constant order | $O(1)$ | $k$ |
| Logarithmic order | $O(\log n)$ | $k\log_2 n$ |
| Linear order | $O(n)$ | $kn$ |
| Linearithmic order | $O(n\log n)$ | $kn\log_2 n$ |
| Square order | $O(n^2)$ | $kn^2$ |
| Cubic order | $O(n^3)$ | $kn^3$ |
| Exponential order | $O(2^n)$ | $k \cdot 2^n$ |
| Factorial order | $O(n!)$ | $k \cdot n!$ |

The lowest is constant order $O(1)$, followed by logarithmic order $O(\log n)$, linear order $O(n)$, linearithmic order $O(n\log n)$, square order $O(n^2)$, and exponential order $O(2^n)$. For perspective, at $n = 20$: $\log_2 n \approx 4.3$, $n^2 = 400$, $2^n \approx 10^6$, and $n! \approx 2.4 \times 10^{18}$. The first few are good algorithms; square order is already expensive, and exponential or factorial orders are entirely impractical.

 

OJ problems

Vanishing numbers
Idea 1

Sort the array first, then check the differences between adjacent elements in the sorted result. If a difference is not 1, the value between the two is the missing number.

The time complexity is $O(n\log_2 n)$ (dominated by the sort), and the space complexity is $O(1)$.

int cmp_int(const void* e1, const void* e2) {
    return *(int*)e1 - *(int*)e2;
}

int missingNumber(int* nums, int numsSize) {
    // Sort ascending
    qsort(nums, numsSize, sizeof(nums[0]), cmp_int);
    // Single-element array: the answer is 0 or 1
    if (numsSize == 1) {
        return numsSize - nums[0];
    }
    // A gap between adjacent elements pinpoints the missing number
    for (int i = 0; i < numsSize - 1; i++) {
        if (nums[i + 1] - nums[i] != 1) {
            return nums[i] + 1;
        }
    }
    // No gap: the missing number is either the maximum (n) or 0
    if (nums[0] == 0) {
        return numsSize;
    }
    return 0;
}
Idea 2

Write each element of the array into the matching subscript of a second array; the subscript that receives no value is the vanishing number.

The time complexity is $O(n)$, and the space complexity is $O(n)$.

int missingNumber(int* nums, int numsSize) {
    int tmp[200000];               // large enough for the OJ's constraints
    memset(tmp, -1, sizeof(tmp));  // mark every slot as empty (-1)
    // Drop each element into its own subscript
    for (int i = 0; i < numsSize; i++) {
        tmp[nums[i]] = nums[i];
    }
    // The first empty subscript in [0, n] is the missing number
    for (int i = 0; i <= numsSize; i++) {
        if (tmp[i] == -1) {
            return i;
        }
    }
    return 0;
}
Idea 3

Subtract the sum of the array's elements from the sum of the integers 0 to n; the difference is the missing number.

The time complexity is $O(n)$, and the space complexity is $O(1)$.

int missingNumber(int* nums, int numsSize) {
    int sumOfNum = 0;    // sum of 0..n
    int sumOfNums = 0;   // sum of the array elements
    for (int i = 0; i <= numsSize; i++) {
        sumOfNum += i;
    }
    for (int i = 0; i < numsSize; i++) {
        sumOfNums += nums[i];
    }
    return sumOfNum - sumOfNums;
}
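
The first loop can also be replaced by the closed-form sum $0 + 1 + \dots + n = n(n+1)/2$. A variant sketch (my addition; the name missingNumber2 is hypothetical, and it assumes the sum fits in an int just as the loop version does):

int missingNumber2(int* nums, int numsSize) {
    int expected = numsSize * (numsSize + 1) / 2;  // Gauss formula for 0..n
    int actual = 0;
    for (int i = 0; i < numsSize; i++) {
        actual += nums[i];
    }
    return expected - actual;
}
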
Idea 4

Start from $x = 0$, XOR it with every integer in $[0,n]$, then XOR it with every array element. Since $a \oplus a = 0$, every value present in both sequences cancels out, and the final result is the missing number.

The time complexity is $O(n)$, and the space complexity is $O(1)$.

int missingNumber(int* nums, int numsSize) {
    int xorSum = 0;
    // XOR with every integer in [0, n]
    for (int i = 0; i <= numsSize; i++) {
        xorSum ^= i;
    }
    // XOR with every array element: pairs cancel, the missing value remains
    for (int i = 0; i < numsSize; i++) {
        xorSum ^= nums[i];
    }
    return xorSum;
}
Rotate the array
Idea 1

Insert the last element of the array into the header once, loop KKK times.

The time complexity is $O(k \times n)$, and the space complexity is $O(1)$.

void rotate(int* nums, int numsSize, int k) {
    while (k--) {
        // Save the last element, shift everything right by one, put it in front
        int tmp = nums[numsSize - 1];
        int end = numsSize - 1;
        while (end > 0) {
            nums[end] = nums[end - 1];
            end--;
        }
        nums[end] = tmp;
    }
}
Idea 2

Open another array of the same size, copy the last $k$ elements of the original to its front and the first $n-k$ elements after them, then copy everything back.

The time complexity is $O(n)$, and the space complexity is $O(n)$.

void rotate(int* nums, int numsSize, int k) {
    int tmp[200] = { 0 };  // assumes numsSize does not exceed 200
    k %= numsSize;         // guard against k larger than the array size
    // The last k elements go to the front
    for (int i = 0; i < k; i++) {
        tmp[i] = nums[numsSize - k + i];
    }
    // The first n - k elements follow them
    for (int i = 0; i < numsSize - k; i++) {
        tmp[i + k] = nums[i];
    }
    // Copy back
    for (int i = 0; i < numsSize; i++) {
        nums[i] = tmp[i];
    }
}
Idea 3

Reverse the first $n-k$ elements, reverse the last $k$ elements, then reverse the whole array. For example, with nums = [1,2,3,4,5,6,7] and k = 3: reversing the first four gives [4,3,2,1,5,6,7], reversing the last three gives [4,3,2,1,7,6,5], and reversing the whole array gives the rotated result [5,6,7,1,2,3,4].

The time complexity is $O(n)$, and the space complexity is $O(1)$.

void reverse(int* nums, int left, int right) {
    while (left < right) {
        int tmp = nums[left];
        nums[left] = nums[right];
        nums[right] = tmp;
        left++;
        right--;
    }
}

void rotate(int* nums, int numsSize, int k) {
    k %= numsSize;  // guard against k larger than the array size
    // 1. Reverse the first n - k elements
    reverse(nums, 0, numsSize - k - 1);
    // 2. Reverse the last k elements
    reverse(nums, numsSize - k, numsSize - 1);
    // 3. Reverse the whole array
    reverse(nums, 0, numsSize - 1);
}