The method of learning algorithms

  • Spend 30% of your time studying and 70% practicing
  • Work through difficult knowledge points repeatedly until they are mastered; go slowly, and never just passively read books or watch videos.
  • Turn habits into professional skill
  • Study top solutions on the international LeetCode site, and practice them repeatedly.

Leetcode steps for solving a single problem

  • Read the problem carefully and make sure you understand what it asks
  • Think of as many approaches as you can, and list the time and space complexity of each
  • Write the code
  • Test it

The five-pass method for LeetCode problems

  • Never grind on a problem endlessly: if you are stuck after 5 minutes, read the solution immediately.
  • First pass: read the solutions, compare the pros and cons of the different approaches, and memorize the best ones.
  • Second pass: write the solution yourself without looking; practice each problem; study top solutions.
  • Third pass: 24 hours later
  • Fourth pass: one week later
  • Fifth pass: before the interview

How to master a field

  • Chunk it up: break the field into knowledge points, quantify them, and map out the context
    • The data structure landscape
    • The algorithm landscape
  • Deliberate practice
    • One point, one problem at a time, practiced repeatedly; count your repetitions.
  • Get feedback
    • Active feedback: study top solutions
    • Passive feedback: guidance from experts

The data structure

  • One-dimensional:
    • Basic: array (string), linked list
    • Advanced: stack, queue, deque, set, hash map, etc.
  • Two-dimensional:
    • Basic: tree, graph
    • Advanced: binary search tree (red-black tree, AVL), heap, disjoint set, trie, etc.
  • Special:
    • Bitwise operations, Bloom filter
    • LRU cache

Algorithms

  • Almost every LeetCode problem ultimately comes down to finding its repeating unit and solving it with one of three constructs:
    • if-else, switch --> branching
    • for, while loop --> iteration
    • recursion (divide & conquer, backtracking)
  • More advanced algorithms generalize from these, for example:
    • Dynamic programming
    • Binary search
    • Greedy
    • Math, geometry

Top-down programming

  • clean code
  • Put the high-level trunk logic together at the top; the function details go below.

He that will do his work well must first sharpen his tools

  • Android Studio + VS Code
  • IDE usage tips and shortcuts

Week 1

Time complexity

  • How to read the time-complexity notation of an algorithm
  • The relationship between n and the number of times a statement executes
  • How to derive the time complexity of a recursive algorithm?
    • Draw the recursion tree

  • Master theorem
  • Binary search: O(log n), the range halves each step
  • Binary tree traversal: O(n), every node is visited
  • Search in a 2D sorted matrix: O(n)
  • Merge sort: O(n log n)

Space complexity

  • Length of the arrays used
  • Depth of recursion
  • If the recursion also uses an array, compare the recursion depth with the array length and take the larger.

Array

How it works: when an array is created, the memory manager allocates a contiguous block of addresses, each of which can be accessed directly.

So for arrays

  • Accessing a single element is O(1).
  • Deleting an element requires shifting every element after it forward: worst case O(n), best case O(1).
  • Inserting an element requires shifting every element after it backward: worst case O(n), best case O(1).
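The shifting behind these costs can be sketched with plain arrays (the `ArrayOps` class and method names are illustrative, not from the notes):

```java
import java.util.Arrays;

public class ArrayOps {
    // Insert value at index, shifting later elements right: O(n) worst case.
    public static int[] insertAt(int[] a, int index, int value) {
        int[] result = new int[a.length + 1];
        System.arraycopy(a, 0, result, 0, index);
        result[index] = value;
        System.arraycopy(a, index, result, index + 1, a.length - index);
        return result;
    }

    // Delete the element at index, shifting later elements left: O(n) worst case.
    public static int[] deleteAt(int[] a, int index) {
        int[] result = new int[a.length - 1];
        System.arraycopy(a, 0, result, 0, index);
        System.arraycopy(a, index + 1, result, index, a.length - index - 1);
        return result;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 4, 5};
        System.out.println(Arrays.toString(insertAt(a, 2, 3))); // [1, 2, 3, 4, 5]
        System.out.println(Arrays.toString(deleteAt(a, 0)));    // [2, 4, 5]
    }
}
```

The `System.arraycopy` of the trailing elements is the O(n) part: the closer to the front the change happens, the more elements move.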

Java implementation of ArrayList

Linked list

Singly linked list implementation

Doubly linked list

So for linked lists:

  • Inserting and deleting a node: O(1)
  • Querying a node: best O(1), worst O(n)
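Why insertion is O(1) once you already hold a reference to the predecessor node can be sketched with a minimal node class (illustrative, not the linked implementation):

```java
public class ListNode {
    int val;
    ListNode next;

    ListNode(int val) { this.val = val; }

    // Insert newNode right after node: two pointer writes, O(1), no shifting.
    static void insertAfter(ListNode node, ListNode newNode) {
        newNode.next = node.next;
        node.next = newNode;
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(1);
        head.next = new ListNode(3);
        insertAfter(head, new ListNode(2)); // list becomes 1 -> 2 -> 3
        for (ListNode p = head; p != null; p = p.next) {
            System.out.print(p.val + " ");
        }
    }
}
```

The O(n) worst case for queries comes from having to walk the `next` pointers from the head.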

Related Topics:

  • LeetCode 146. LRU Cache

Skip list

For a sorted linked list, exploit the "sorted" property to design a data structure that reduces query time: the skip list. Skip lists compete with balanced trees and binary search: a data structure whose insert, delete, and search are all O(log n).

  • The skip list is implemented by adding levels of index nodes (raising the dimension), each level skipping over more nodes: trading space for time.

  • The time complexity of insert, delete, and query is O(log n); the space complexity is O(n).
  • Real-world applications: Redis, LevelDB.
    • Why does Redis use a skip list instead of a red-black tree?
    • Skip list implementation:
  • The main thing is to understand the idea

Stack

Last in, first out (LIFO); push and pop are O(1). Stack source code in Java

Using a stack in Java:

Stack<Integer> stack = new Stack<>();
stack.push(1);
stack.push(2);
stack.push(3);
stack.push(4);
System.out.println(stack);
System.out.println(stack.search(4));
stack.pop();
stack.pop();
Integer topElement = stack.peek();
System.out.println(topElement);
System.out.println("3: " + stack.search(3));

What problems stacks solve:

  • Problems with "nearest relevance" (e.g., matching parentheses)
  • The largest rectangle in a histogram

Queue

First in, first out (FIFO); enqueue and dequeue are O(1). Queue source code in Java

Queue in Java uses:

Queue<String> queue = new LinkedList<String>();
queue.offer("one");
queue.offer("two");
queue.offer("three");
queue.offer("four");
System.out.println(queue);
String polledElement = queue.poll();
System.out.println(polledElement);
System.out.println(queue);
String peekedElement = queue.peek();
System.out.println(peekedElement);
System.out.println(queue);
while(queue.size() > 0) {
  System.out.println(queue.poll());
}

Double-ended queue (Deque)

A double-ended queue supports insertion and deletion at both ends; both are O(1) operations.

A deque in Java:

Deque<String> deque = new LinkedList<String>();
deque.push("a");
deque.push("b");
deque.push("c");
System.out.println(deque);
String str = deque.peek();
System.out.println(str);
System.out.println(deque);
while (deque.size() > 0) {
  System.out.println(deque.pop());
}
System.out.println(deque);

Topic:

  • Sliding window problem

Priority queue

Elements are kept in some order and removed according to priority.

  • Insert: O(1)
  • Remove: O(log N), taken out by element priority
  • The underlying data structures are diverse and relatively complex: heap, BST, treap
  • Java PriorityQueue
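A small usage sketch of Java's PriorityQueue, which is a min-heap by default (the `drain` helper is illustrative):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    // Drain a min-heap: poll() always removes the smallest element,
    // so the elements come out in ascending order.
    public static int[] drain(PriorityQueue<Integer> pq) {
        int[] out = new int[pq.size()];
        for (int i = 0; i < out.length; i++) out[i] = pq.poll();
        return out;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.offer(5);
        pq.offer(1);
        pq.offer(3);
        System.out.println(Arrays.toString(drain(pq))); // [1, 3, 5]
    }
}
```

A max-heap ordering can be obtained by constructing the queue with a reverse-order comparator.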

Week 2

Hash tables, maps, sets

A hash table is a data structure accessed directly by key. It maps each key to a position in a table to speed up lookups. The mapping is called the hash function, and the array holding the records is called the hash table.

Hash function: the function that computes the hash value; its outputs should be as scattered and as unique as possible.

Hash collisions: when the hash function produces the same value for different keys, add a dimension and resolve the collision with a linked list (chaining).
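A quick sketch with Java's built-in HashMap, which resolves collisions by exactly this kind of chaining (with long chains upgraded to trees since JDK 8); the `countWords` helper is illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class HashDemo {
    // Average O(1) put and get, thanks to the hash function
    // mapping each key directly to a bucket.
    public static Map<String, Integer> countWords(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : words) {
            counts.put(word, counts.getOrDefault(word, 0) + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = countWords(new String[]{"a", "b", "a"});
        System.out.println(counts.get("a")); // 2
        System.out.println(counts.get("b")); // 1
    }
}
```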

Complete structure diagram

Tree, binary tree, binary search tree

Relationships between linked lists, trees, and graphs

Singly linked lists

The tree

Binary tree

Graph

Features:

  • A linked list is a specialized tree: each node has only one child
  • A binary tree is a specialized tree: each node has at most two children
  • A tree is a specialized graph: a graph without cycles

Binary tree

Binary tree traversal:

  • Pre-order: root-left-right
  • In-order: left-root-right
  • Post-order: left-right-root

Java code, tree node definition:

public class TreeNode {
    public int val;
    public TreeNode left, right;

    public TreeNode(int val) {
        this.val = val;
        this.left = null;
        this.right = null;
    }
}
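Using a node definition like this, in-order traversal can be sketched recursively (the demo class and helper names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class InorderDemo {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // In-order: left -> root -> right. On a binary search tree this
    // yields the values in ascending order.
    static void inorder(TreeNode root, List<Integer> out) {
        if (root == null) return;
        inorder(root.left, out);
        out.add(root.val);
        inorder(root.right, out);
    }

    public static void main(String[] args) {
        //     2
        //    / \
        //   1   3
        TreeNode root = new TreeNode(2);
        root.left = new TreeNode(1);
        root.right = new TreeNode(3);
        List<Integer> out = new ArrayList<>();
        inorder(root, out);
        System.out.println(out); // [1, 2, 3]
    }
}
```

Moving the `out.add` before or after the two recursive calls gives pre-order and post-order respectively.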

Binary Search Tree

A binary search tree, also called an ordered binary tree or sorted binary tree, is either an empty tree or a binary tree with the following properties:

  1. The values of all nodes in the left subtree are less than the value of the root;
  2. The values of all nodes in the right subtree are greater than the value of the root;
  3. Recursively: the left and right subtrees are themselves binary search trees. (There is the repeating unit!)

Features:

  • In-order traversal yields ascending order, a one-dimensional sorted array.
  • Compare with binary search
  • Insert, delete, search: O(log n) on average

Animated demo of binary search tree insert, delete, and traversal

Heap

Heap: a data structure that can quickly find the maximum or minimum among a collection of numbers.

A heap whose root is the maximum is called a max-heap (big-top heap); a heap whose root is the minimum is called a min-heap (small-top heap).

Time complexity, taking the max-heap as an example:

  • Find the maximum: O(1)
  • Delete the maximum: O(log n)
  • Insert a value: O(log n) when it must sift up, O(1) when it stays a leaf

There are many ways to implement a heap; common implementations include the binary heap and the Fibonacci heap.

  • Heap implementation code
  • Different implementations of the heap

Binary heap

It is implemented with a complete binary tree.

HeapSort: self-study

Complete binary tree:

  • Every level except possibly the last is completely filled.
  • Nodes on the last level fill in from left to right, with no gaps.

Binary heap (max-heap):

  • It is a complete binary tree
  • The value of any node in the tree is always >= the values of its children

Binary heap implementation details:

  • Usually stored in an array; the heap pictured above becomes [110, 100, 90, 40, 80, 20, 60, 10, 30, 50, 70]
  • If the first node sits at index 0 of the array, the parent-child relationships are:
    • The left child of the node at index i is at index 2i + 1
    • The right child of the node at index i is at index 2i + 2
    • The parent of the node at index i is at index floor((i - 1) / 2)

Binary heap insert, O(log n):

  • The new element is appended at the tail of the heap: in the figure above, after 70.
  • Sift up, adjusting the structure level by level until the heap property holds again.

Delete the top of a binary heap, O(log n):

  • Remove the top element and move the heap's last node into its place.
  • Sift down from the top, adjusting until the heap property holds again.
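Both operations can be sketched in a minimal array-backed max-heap (a simplified, fixed-capacity illustration, not a production implementation):

```java
public class MaxHeap {
    private int[] a = new int[16]; // fixed capacity for illustration only
    private int size = 0;

    // Insert at the end, then sift up: O(log n).
    public void insert(int value) {
        a[size] = value;
        int i = size++;
        while (i > 0 && a[(i - 1) / 2] < a[i]) {
            int p = (i - 1) / 2; // parent index
            int t = a[i]; a[i] = a[p]; a[p] = t;
            i = p;
        }
    }

    // Remove the max: move the last element to the top, then sift down: O(log n).
    public int deleteMax() {
        int max = a[0];
        a[0] = a[--size];
        int i = 0;
        while (2 * i + 1 < size) {
            int child = 2 * i + 1;                                    // left child
            if (child + 1 < size && a[child + 1] > a[child]) child++; // larger child
            if (a[i] >= a[child]) break;
            int t = a[i]; a[i] = a[child]; a[child] = t;
            i = child;
        }
        return max;
    }

    public static void main(String[] args) {
        MaxHeap h = new MaxHeap();
        for (int v : new int[]{20, 110, 50, 100}) h.insert(v);
        System.out.println(h.deleteMax()); // 110
        System.out.println(h.deleteMax()); // 100
    }
}
```

The index arithmetic (2i + 1, 2i + 2, (i - 1) / 2) is exactly the array layout described above.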

Practical applications:

  • PriorityQueue is implemented with a binary heap.
  • "Top k" style problems: e.g., the k smallest numbers in a pile, the k most frequent elements.
  • Sliding window problems, etc.

Graph

What is a graph?

Representation of a graph:

  • Graph(V, E)
  • V for vertex: node
    • Degree: the number of edges incident to the vertex; in-degree: the number of edges pointing into the vertex; out-degree: the number of edges pointing out to other vertices
    • Vertex to vertex: connected or not
  • E for edge: edge
    • Directed vs. undirected
    • Weight (edge length)

Graph representations, implementation:

  • Adjacency matrix
  • Adjacency list
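An adjacency list can be sketched with nested lists; the vertex numbering and edges below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class GraphDemo {
    // Build an adjacency list for an undirected graph:
    // adj.get(v) holds the neighbors of vertex v.
    public static List<List<Integer>> buildUndirected(int n, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]); // undirected: add both directions
            adj.get(e[1]).add(e[0]);
        }
        return adj;
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = buildUndirected(4, new int[][]{{0, 1}, {0, 2}, {1, 3}});
        System.out.println(adj.get(0)); // neighbors of 0: [1, 2]
        System.out.println(adj.get(3)); // neighbors of 3: [1]
    }
}
```

An adjacency matrix would instead be an n x n boolean (or weight) array: O(n^2) space, but O(1) edge lookup.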

Classification of graphs:

Common algorithms for graphs

  • DFS
  • BFS

Reference study:

  • Number of connected graphs:
  • Topological Sorting:
  • Shortest Path: Dijkstra
  • Minimum Spanning Tree:

Week 3

Recursion

Recursion:

  • It is essentially a loop
  • Specifically, a loop implemented by the function calling itself
  • Go down level by level, then come back up level by level

The nature of recursion: traversing all possibilities without repeating them

Writing recursion steps:

  • Defining termination conditions
  • The business code for the current layer
  • Go to the next level
  • Perform current layer cleanup

Template:

public void recur(int level, int param) {
    // terminator
    if (level > MAX_VALUE) {
        return;
    }
    // process logic of the current level
    process(level, param);
    // drill down
    recur(level + 1, newParam);
    // clean up the current level
    cleanCur();
}

Schematic diagram:

Thinking points:

  • Do not do "human-flesh recursion" (do not trace the call stack by hand)
  • Find the nearest repeating subproblem
  • Apply mathematical induction

Divide and conquer, backtracking

Divide and conquer:

  • It is also recursion
  • The problem is broken down into subproblems

The difference between divide and conquer and plain recursion:

  • Plain recursion calls itself once, goes down level by level, and returns level by level.

Such as:

public void recur(int level, int param) {
    ...
    recur(level + 1, newParam); // a single recursive call
    ...
}
  • Divide and conquer calls itself multiple times within one recursion, each call solving one subproblem.

Such as:

public int recur(int param) {
    ...
    int result1 = recur(param1); // subproblem 1
    int result2 = recur(param2); // subproblem 2
    return result1 + result2;    // merge the sub-results
    ...
}

Divide-and-conquer code templates

Backtracking: the core idea is to try every option at each level.

Backtracking uses trial and error to solve a problem incrementally. Whenever the partial solution built so far cannot lead to a valid, correct answer, it cancels the last step (or even several previous steps) and tries other options in search of the answer. Backtracking is usually implemented with the simplest recursion, and after repeating the steps above one of two things happens:

  • A possible correct answer is found;
  • All possible options have been tried, and the problem is declared unsolvable. In the worst case, backtracking costs exponential time.

Week 4

Depth-first search and breadth-first search

Tree traversal:

Define the nodes of the tree:

public class TreeNode {
    public int val;
    public TreeNode left, right;

    public TreeNode(int val) {
        this.val = val;
        this.left = null;
        this.right = null;
    }
}

Tree traversal

  • Essentially, visit every node
  • Each node is visited exactly once
  • The visiting order can be:
    • Depth-first
    • Breadth-first

Depth-first search:

  • Starting from the root, go as deep as possible along each branch before backtracking, visiting children from left to right.
public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> allResults = new ArrayList<>();
    if (root == null) {
        return allResults;
    }
    travel(root, 0, allResults);
    return allResults;
}

private void travel(TreeNode root, int level, List<List<Integer>> results) {
    if (results.size() == level) {
        results.add(new ArrayList<>());
    }
    results.get(level).add(root.val);
    if (root.left != null) {
        travel(root.left, level + 1, results);
    }
    if (root.right != null) {
        travel(root.right, level + 1, results);
    }
}

Breadth-first search:

  • Start at root and work your way down one layer at a time, from left to right.
public class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode(int x) { val = x; }
}

public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> allResults = new ArrayList<>();
    if (root == null) {
        return allResults;
    }
    Queue<TreeNode> nodes = new LinkedList<>();
    nodes.add(root);
    while (!nodes.isEmpty()) {
        int size = nodes.size();
        List<Integer> results = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            TreeNode node = nodes.poll();
            results.add(node.val);
            if (node.left != null) {
                nodes.add(node.left);
            }
            if (node.right != null) {
                nodes.add(node.right);
            }
        }
        allResults.add(results);
    }
    return allResults;
}

Greedy algorithm

  • A greedy algorithm makes the locally best or optimal choice at each step, in the hope that the end result is globally best or optimal.

  • Greedy algorithms differ from dynamic programming in that a greedy algorithm commits to a choice for each subproblem and cannot roll back. Dynamic programming saves previous results and chooses the current step based on them, which provides the ability to roll back.

An example where greed works

Greedy algorithms work only under certain conditions, for example this change-making problem:

Denominations: 20, 10, 5, 1. Make 36 with the fewest coins.

An example where greed fails

Change again, with denominations 10, 9, 1: make 18 with the fewest coins. The optimal solution is clearly 9 + 9 (two coins), but the greedy algorithm picks 10 first and then eight 1s (nine coins), which is clearly not optimal.

When greed is suitable

  • Greedy methods can solve some optimization problems, such as finding the minimum spanning tree of a graph or building Huffman codes.

For most engineering and real-life problems, however, a greedy method generally cannot produce the answer we need.

  • Optimal substructure

Simply put, when a problem can be decomposed into subproblems, and the optimal solutions of the subproblems can be combined into the optimal solution of the whole problem, those optimal sub-solutions are called optimal substructure.

  • When a problem can be solved greedily, the greedy method is usually the best way to solve it.

Because greedy methods are efficient and their answers are close to optimal, they can also serve as auxiliary algorithms, or directly solve problems that do not demand fully precise results.

Example: distributing cookies

  • Sort the cookies and the children
  • Give each child the smallest cookie that just satisfies them (the optimal choice for the current step)
  • Keep distributing (optimizing each step contributes to the global optimum)
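These steps match LeetCode 455 (Assign Cookies); a minimal sketch of the greedy:

```java
import java.util.Arrays;

public class AssignCookies {
    // Greedy: sort both arrays, then give each child (in order of appetite)
    // the smallest cookie that satisfies them.
    public static int findContentChildren(int[] children, int[] cookies) {
        Arrays.sort(children);
        Arrays.sort(cookies);
        int child = 0;
        for (int cookie = 0; cookie < cookies.length && child < children.length; cookie++) {
            if (cookies[cookie] >= children[child]) {
                child++; // this child is just satisfied; move to the next
            }
        }
        return child; // number of satisfied children
    }

    public static void main(String[] args) {
        System.out.println(findContentChildren(new int[]{1, 2, 3}, new int[]{1, 1})); // 1
        System.out.println(findContentChildren(new int[]{1, 2}, new int[]{1, 2, 3})); // 2
    }
}
```

Each step is locally optimal (smallest workable cookie), and here that local choice also yields the global optimum.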

Binary search

Prerequisites:

  • Sorted: monotonically increasing or decreasing
  • Upper and lower bounds exist
  • Accessible by index

Code template:

public int binarySearch(int[] array, int target) {
    int left = 0, right = array.length - 1, mid;
    while (left <= right) {
        mid = (right - left) / 2 + left;

        if (array[mid] == target) {
            return mid;
        } else if (array[mid] > target) {
            right = mid - 1;
        } else {
            left = mid + 1;
        }
    }

    return -1;
}

Week 5

Week 6

Dynamic programming

The key point

  • There is no fundamental difference between dynamic programming and recursion or divide and conquer. (The key is whether there is an optimal substructure: without one, it is divide and conquer; with one, it is dynamic programming.)
  • In common: find the repeating subproblems
  • Difference: optimal substructure lets you discard suboptimal solutions along the way

Problems solved:

  • Break down a complex problem into simple sub-problems
  • Recursive divide-and-conquer + optimal substructure

Definition of dynamic programming

Key points of dynamic programming

  • Optimal substructure: opt[n] = best_of(opt[n-1], opt[n-2], ...)
  • Save intermediate state: opt[i]
  • Recurrence formula (grandly called the state-transition equation or DP equation)
    • Top down, Fib: opt[n] = opt[n-1] + opt[n-2]
    • Bottom up:

for (int i = 2; i < n; i++) {
    f[i] = f[i - 1] + f[i - 2];
}

    • 2D path counting: opt[i][j] = opt[i+1][j] + opt[i][j+1] (also checking whether cell a[i][j] is empty)

Steps of dynamic programming:

  • Analyze the problem, find the pattern, and decompose it into subproblems.
  • Express that pattern, either top down or bottom up. It is usually a matter of how f(n) relates to f(n-1) and f(n-2); writing that relationship down gives the DP equation, e.g. f(n) = f(n-1) + f(n-2).
  • Usually we build bottom up: in one dimension, start from f(0) and f(1); in two dimensions, start from all f(0, y) and f(x, 0); in three dimensions, from even more base values. In other words, first pin down the base values of the DP equation; how many are needed depends on how many states appear on the right side of the = sign. The base values are written down directly after eyeballing the pattern, and everything above them is derived recursively from them: find f(0) and f(1), compute f(2) from them, compute f(3) from f(1) and f(2), compute f(4) from f(2) and f(3), and so on.
  • The process is usually recorded in a DP table, a one- or two-dimensional array. One-dimensional: base values dp[0] = 1, dp[1] = 2, then dp[2] = dp[0] + dp[1], dp[3] = dp[1] + dp[2]. Two-dimensional: base values dp[0][0], dp[0][1], dp[1][0], then compute dp[1][1] = dp[0][1] + dp[1][0], and so on upward until done.
  • Solve.
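As a concrete one-dimensional case, the base values dp[0] = 1 and dp[1] = 2 above are exactly the climbing-stairs problem; a minimal bottom-up sketch:

```java
public class ClimbStairs {
    // Bottom-up DP: ways(n) = ways(n-1) + ways(n-2),
    // with base values dp[0] = 1 (one stair) and dp[1] = 2 (two stairs).
    public static int climb(int n) {
        if (n <= 2) return n;
        int[] dp = new int[n]; // the DP table records the process data
        dp[0] = 1;
        dp[1] = 2;
        for (int i = 2; i < n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n - 1];
    }

    public static void main(String[] args) {
        System.out.println(climb(3)); // 3
        System.out.println(climb(5)); // 8
    }
}
```

Since each state only depends on the previous two, the table can be reduced to two variables for O(1) space.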

MIT Dynamic Programming shortest Path Algorithms

Week 7

Trie and disjoint set

Trie (dictionary tree):

Basic properties:

  • A node does not store the whole word, only the character at its position;
  • Concatenating the characters along the path from the root to a node gives the string that node represents;
  • The paths to each node's children all represent different characters.

Trie template:

class Trie {
    private boolean isEnd;
    private Trie[] next;

    /** Initialize your data structure here. */
    public Trie() {
        isEnd = false;
        next = new Trie[26];
    }

    /** Inserts a word into the trie. */
    public void insert(String word) {
        if (word == null || word.length() == 0) return;
        Trie curr = this;
        char[] words = word.toCharArray();
        for (int i = 0; i < words.length; i++) {
            int n = words[i] - 'a';
            if (curr.next[n] == null) curr.next[n] = new Trie();
            curr = curr.next[n];
        }
        curr.isEnd = true;
    }

    /** Returns if the word is in the trie. */
    public boolean search(String word) {
        Trie node = searchPrefix(word);
        return node != null && node.isEnd;
    }

    /** Returns if there is any word in the trie that starts with the given prefix. */
    public boolean startsWith(String prefix) {
        Trie node = searchPrefix(prefix);
        return node != null;
    }

    private Trie searchPrefix(String word) {
        Trie node = this;
        char[] words = word.toCharArray();
        for (int i = 0; i < words.length; i++) {
            node = node.next[words[i] - 'a'];
            if (node == null) return null;
        }
        return node;
    }
}

Note: although the node circles in the figure below show te, ten, to, etc., this does not mean those nodes store the complete words; it is only a visual hint. The number under a leaf node carries extra information, e.g. 15 might be an access frequency.

The core ideas of the trie:

  • Trade space for time: it creates a large amount of extra space, but reduces insertion and query to O(k), where k is the length of the string, i.e. O(1) per character.
  • Common prefixes between strings are shared, reducing query cost and improving efficiency.

Disjoint set (union-find)

A metaphor for the core idea of union-find:

  • To decide whether two employees belong to the same company, just check whether they ultimately report to the same CEO.

  • Likewise, to decide whether two elements are in the same group, circle, etc., find the representative (parent) of each group and compare them.

  • If two groups, circles, or companies merge into one, the new one must have exactly one boss: either boss 1 or boss 2.

Applicable scenarios:

  • Grouping and pairing problems: whether a person is in a group or not
  • Group membership relationships

Basic operations:

  • makeSet(s): create a new union-find structure containing s single-element sets.
  • unionSet(x, y): merge the sets containing elements x and y, which are required to be disjoint; if they already intersect, no merge happens.
  • find(x): find the representative of the set containing element x. It can also decide whether two elements are in the same set, by comparing their representatives.

Java implementation of union-find:

class UnionFind {
    private int count = 0;
    private int[] parent;

    public UnionFind(int n) {
        count = n;
        parent = new int[n];
        for (int i = 0; i < n; i++) {
            parent[i] = i;
        }
    }

    public int find(int p) {
        while (p != parent[p]) {
            parent[p] = parent[parent[p]]; // path compression (halving)
            p = parent[p];
        }
        return p;
    }

    public void union(int p, int q) {
        int rootP = find(p);
        int rootQ = find(q);
        if (rootP == rootQ) return;
        parent[rootP] = rootQ;
        count--;
    }
}

Basic search

  • Plain brute-force search
  • Optimizations: avoid repetition (Fibonacci with memoization), pruning (the generate-parentheses problem)
  • Search direction:
    • DFS (depth-first) and BFS (breadth-first); the search can also run from both ends (bidirectional) or be guided by an estimate (heuristic)

Advanced Search – Pruning

During the search, check whether the current branch can still satisfy the conditions. If not, stop searching the subtree rooted at that node, which is equivalent to cutting that branch off, hence "pruning". Pruning can be understood as cutting losses in time.

Backtracking

  • Backtracking uses trial and error to solve a problem incrementally. Whenever the partial solution built so far cannot lead to a valid, correct answer, it cancels the last step (or even several previous steps) and tries other options in search of the answer.
  • Backtracking is usually implemented with the simplest recursion, and after repeating the steps above one of two things happens:
    • A possible correct answer is found
    • All possible options have been tried, and the problem is declared unsolvable
    • In the worst case, backtracking costs exponential time.

Advanced search: bidirectional BFS

BFS:

Bidirectional BFS, as the name suggests, runs BFS from both ends at the same time; the point where the two frontiers first meet gives the shortest path.

Advanced Search – Heuristic Search (A*)

Heuristic search (A*) is search with a sense of direction

  • Based on BFS
  • Replace the plain queue in BFS with a priority queue
  • With a priority queue, you must define what "priority" means; that is the job of the estimation function (also called the heuristic function)

Heuristic function h(n): it evaluates which nodes look most promising and most likely to be the node we are looking for. h(n) returns a non-negative real number, which can be read as the estimated cost of the path from node n to the target node.

A heuristic function is a way of informing the search about direction: it offers an educated guess about which neighbor will lead toward the target, i.e. which direction to advance first. Code template:

public class AStar {
    public final static int BAR = 1;            // obstacle
    public final static int PATH = 2;           // path
    public final static int DIRECT_VALUE = 10;  // cost of a horizontal or vertical move
    public final static int OBLIQUE_VALUE = 14; // cost of a diagonal move

    Queue<Node> openList = new PriorityQueue<Node>(); // priority queue (ascending)
    List<Node> closeList = new ArrayList<Node>();

    /** Start the algorithm. */
    public void start(MapInfo mapInfo) {
        if (mapInfo == null) return;
        // clean up
        openList.clear();
        closeList.clear();
        // begin the search
        openList.add(mapInfo.start);
        moveNodes(mapInfo);
    }

    /** Move the current node and handle its neighbors. */
    private void moveNodes(MapInfo mapInfo) {
        while (!openList.isEmpty()) {
            Node current = openList.poll();
            closeList.add(current);
            addNeighborNodeInOpen(mapInfo, current);
            if (isCoordInClose(mapInfo.end.coord)) {
                drawPath(mapInfo.maps, mapInfo.end);
                break;
            }
        }
    }

    /** Draw the path back onto the 2D map. */
    private void drawPath(int[][] maps, Node end) {
        if (end == null || maps == null) return;
        System.out.println("total cost: " + end.G);
        while (end != null) {
            Coord c = end.coord;
            maps[c.y][c.x] = PATH;
            end = end.parent;
        }
    }

    /** Add all neighbor nodes to the open list. */
    private void addNeighborNodeInOpen(MapInfo mapInfo, Node current) {
        int x = current.coord.x;
        int y = current.coord.y;
        addNeighborNodeInOpen(mapInfo, current, x - 1, y, DIRECT_VALUE);      // left
        addNeighborNodeInOpen(mapInfo, current, x, y - 1, DIRECT_VALUE);      // up
        addNeighborNodeInOpen(mapInfo, current, x + 1, y, DIRECT_VALUE);      // right
        addNeighborNodeInOpen(mapInfo, current, x, y + 1, DIRECT_VALUE);      // down
        addNeighborNodeInOpen(mapInfo, current, x - 1, y - 1, OBLIQUE_VALUE); // upper left
        addNeighborNodeInOpen(mapInfo, current, x + 1, y - 1, OBLIQUE_VALUE); // upper right
        addNeighborNodeInOpen(mapInfo, current, x + 1, y + 1, OBLIQUE_VALUE); // lower right
        addNeighborNodeInOpen(mapInfo, current, x - 1, y + 1, OBLIQUE_VALUE); // lower left
    }

    /** Add one neighbor node to the open list. */
    private void addNeighborNodeInOpen(MapInfo mapInfo, Node current, int x, int y, int value) {
        if (canAddNodeToOpen(mapInfo, x, y)) {
            Node end = mapInfo.end;
            Coord coord = new Coord(x, y);
            int G = current.G + value;
            Node child = findNodeInOpen(coord);
            if (child == null) {
                int H = calcH(end.coord, coord);
                if (isEndNode(end.coord, coord)) {
                    child = end;
                    child.parent = current;
                    child.G = G;
                    child.H = H;
                } else {
                    child = new Node(coord, current, G, H);
                }
                openList.add(child);
            } else if (child.G > G) {
                child.G = G;
                child.parent = current;
                openList.add(child);
            }
        }
    }

    /** Find a node in the open list. */
    private Node findNodeInOpen(Coord coord) {
        if (coord == null || openList.isEmpty()) return null;
        for (Node node : openList) {
            if (node.coord.equals(coord)) {
                return node;
            }
        }
        return null;
    }

    /** Estimate H with the Manhattan distance. */
    private int calcH(Coord end, Coord coord) {
        return Math.abs(end.x - coord.x) + Math.abs(end.y - coord.y);
    }

    /** Is this coordinate the end node? */
    private boolean isEndNode(Coord end, Coord coord) {
        return coord != null && end.equals(coord);
    }

    /** Can the node be added to the open list? */
    private boolean canAddNodeToOpen(MapInfo mapInfo, int x, int y) {
        // is it inside the map?
        if (x < 0 || x >= mapInfo.width || y < 0 || y >= mapInfo.hight) return false;
        // is it an obstacle?
        if (mapInfo.maps[y][x] == BAR) return false;
        // is it already in the close list?
        if (isCoordInClose(x, y)) return false;
        return true;
    }

    /** Is the coordinate already in the close list? */
    private boolean isCoordInClose(Coord coord) {
        return coord != null && isCoordInClose(coord.x, coord.y);
    }

    private boolean isCoordInClose(int x, int y) {
        if (closeList.isEmpty()) return false;
        for (Node node : closeList) {
            if (node.coord.x == x && node.coord.y == y) {
                return true;
            }
        }
        return false;
    }
}

Red-black tree and AVL tree

Binary tree traversal:

  • Pre-order: root-left-right
  • In-order: left-root-right
  • Post-order: left-right-root

Binary Search Tree

A binary search tree, also called an ordered binary tree or sorted binary tree, is either an empty tree or a binary tree with the following properties:

  • The values of all nodes in the left subtree are less than the value of the root;
  • The values of all nodes in the right subtree are greater than the value of the root;
  • Recursively: the left and right subtrees are themselves binary search trees. (There is the repeating unit!)

In-order traversal: ascending order

Binary search: O(logn)

Extreme case: O(n)

How to make lookups faster:

  • Keep the tree two-dimensional! --> balance the node counts of the left and right subtrees
  • Split as evenly as possible

How to maintain balance? The AVL tree

  • Inventors: G. M. Adelson-Velsky and Evgenii Landis
  • Balance factor: the height of a node's left subtree minus the height of its right subtree (sometimes the reverse). balance factor = {-1, 0, 1}
  • Balance is maintained through rotation operations (four kinds)

Balance factors before inserting 14:

Balance factors after inserting 14:

Balance factors before inserting 3:

Balance factors after inserting 3:

An unbalanced binary tree can be rebalanced through rotations:

  • Left rotation
  • Right rotation
  • Left-right rotation
  • Right-left rotation

Left left subtree – right rotation

Right-right subtree – Left-handed

Left-right subtree: left-right rotation. Find the node whose balance factor is -2, then rotate starting from its child node; if factors like -3 or -4 appear further up, still start the rotation from the -2 node.

Right-left subtree: right-left rotation. Find the node whose balance factor is -2, then rotate starting from its child node; if factors like -3 or -4 appear further up, still start the rotation from the -2 node.

AVL summary

  1. Balanced binary search tree
  2. Balance factor = {-1, 0, 1}
  3. Four rotation operations

Drawbacks: nodes must store extra information, and rebalancing happens frequently

Red-black Tree Red-black Tree

Red-black Tree is an approximately balanced Binary Search Tree, which can ensure that the height difference between the left and right subtrees of any node is less than twice. Specifically, a red-black tree is a binary search tree that meets the following conditions:

  • Each node is either red or black
  • The root node is black and each leaf node (NIL node, empty node) is black.
  • No two adjacent red nodes (a red node cannot have a red parent or a red child)
  • All paths from any node to each of its leaves contain the same number of black nodes.

Key feature: The longest possible path from root to leaf is no more than twice as long as the shortest possible path.
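In practice red-black trees are rarely hand-written: in Java, `java.util.TreeMap` (and `TreeSet`) is backed by a red-black tree, giving O(log n) put/get/remove with keys kept in sorted order. A small usage sketch (the class name `TreeMapDemo` and helper `sortedKeys` are illustrative):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    // Insert keys into a TreeMap (a red-black tree in the JDK)
    // and return its key set, which is always in sorted order
    static String sortedKeys(int[] keys) {
        TreeMap<Integer, Boolean> map = new TreeMap<>();
        for (int k : keys) map.put(k, true); // O(log n) per insert
        return map.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys(new int[]{5, 1, 3})); // [1, 3, 5]
    }
}
```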

Conclusion:

  • In the extreme case a binary search tree degenerates into a "stick" (effectively a linked list), so lookup becomes O(n).
  • For a binary search tree, query time complexity is proportional to the depth of the tree.
  • Therefore, keep the tree as evenly distributed as possible, to minimize depth and thus time complexity.
  • This motivates the AVL tree, which keeps the height difference between the left and right subtrees at most 1.
  • But because an AVL tree must restore strict balance on every insertion and deletion, those operations are relatively expensive. Strict balance makes AVL most efficient for read-heavy (query-heavy) workloads, but less efficient when inserts and deletes are frequent.
  • Hence the red-black tree: it relaxes AVL's strict balance factor to the weaker guarantee that one subtree's height is less than twice the other's, reducing the work needed to maintain balance. Query time remains O(log n), so for workloads with many inserts and deletes, a red-black tree is the better choice.

Week 8

Bit operations

  • Bitwise operators
  • Arithmetic shift vs. logical shift
  • Applications of bit operations

Why do we need bitwise operations

  • Numbers are represented and stored in machines in binary
  • Decimal <-> binary: how do we convert?

Decimal to binary method

Decimal and binary

  • 4(d): 0100
  • 8(d): 1000
  • 5(d): 0101
  • 6(d): 0110
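The conversions above can be checked directly with the JDK's built-in helpers (a quick sketch; the class name `BinaryDemo` is just for illustration):

```java
public class BinaryDemo {
    public static void main(String[] args) {
        // Decimal -> binary (no leading zeros)
        System.out.println(Integer.toBinaryString(4)); // 100
        System.out.println(Integer.toBinaryString(8)); // 1000
        System.out.println(Integer.toBinaryString(5)); // 101
        System.out.println(Integer.toBinaryString(6)); // 110
        // Binary -> decimal: parse with radix 2
        System.out.println(Integer.parseInt("0110", 2)); // 6
    }
}
```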

XOR (exclusive or)

Exclusive or: identical bits give 0, different bits give 1. It can also be understood as "addition without carry".

Some properties of the XOR operation:

  • x ^ 0 = x
  • x ^ 1s = ~x // note: 1s = ~0 (all ones)
  • x ^ (~x) = 1s
  • x ^ x = 0
  • c = a ^ b => a ^ c = b, b ^ c = a // swap two numbers
  • a ^ b ^ c = a ^ (b ^ c) = (a ^ b) ^ c // associative
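These identities are easy to verify in code; here is a small sketch (the class name `XorDemo` and the `swap` helper are illustrative) that also shows the classic XOR swap derived from the `c = a ^ b => a ^ c = b` property:

```java
public class XorDemo {
    // Swap two values without a temporary variable using XOR
    static int[] swap(int a, int b) {
        a = a ^ b;
        b = a ^ b; // b = (a ^ b) ^ b = a
        a = a ^ b; // a = (a ^ b) ^ a = b
        return new int[]{a, b};
    }

    public static void main(String[] args) {
        int x = 13;
        System.out.println(x ^ 0);          // x ^ 0 = x       -> 13
        System.out.println(x ^ x);          // x ^ x = 0       -> 0
        System.out.println((x ^ ~x) == ~0); // x ^ ~x = 1s     -> true
        int[] s = swap(3, 7);
        System.out.println(s[0] + " " + s[1]); // 7 3
    }
}
```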

Focus on the following:

Bloom Filter

Bloom Filter vs Hash Table

  • A Bloom filter consists of a long binary vector and a series of random mapping (hash) functions. It can be used to test whether an element is in a set.
  • Advantage: space efficiency and query time far better than general-purpose algorithms.
  • Disadvantage: a certain false-positive rate, and elements are difficult to delete.

Main use: quickly determine that an element (such as a String) is definitely not in a set. It only answers membership queries; no additional information is stored. In practice it serves as a fast pre-filter.

Example: in the figure below, a hash function maps each string to a value, which serves as the key of a hash table; the value is a linked list used to store colliding elements. If several strings share the same hash value, the linked list chains those strings in turn. Therefore, if a string's hash value does not appear among the hash table's keys, the string definitely does not exist in the table; if the hash value does appear, the string may exist in the table, because multiple strings can hash to the same value.

Principle and implementation of Bloom filter

Using a Bloom filter to solve cache penetration, spam identification, and deduplication

Bloom filter Java implementation example 1

Bloom filter Java implementation example 2
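As a rough illustration of the principle (a sketch, not production code), here is a minimal Bloom filter. The class name `SimpleBloomFilter`, the bit-vector size, the three hash seeds, and the polynomial hash are all assumptions for the example; real implementations choose the vector size and number of hash functions from the expected element count and target false-positive rate.

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int[] seeds = {7, 31, 127}; // seeds for k = 3 hash functions (illustrative choice)

    SimpleBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Simple polynomial hash over the characters, reduced into [0, size)
    private int hash(String s, int seed) {
        int h = 0;
        for (char c : s.toCharArray()) h = h * seed + c;
        return Math.abs(h % size);
    }

    // Set one bit per hash function
    void add(String s) {
        for (int seed : seeds) bits.set(hash(s, seed));
    }

    // false -> definitely not present; true -> possibly present (false positives allowed)
    boolean mightContain(String s) {
        for (int seed : seeds) {
            if (!bits.get(hash(s, seed))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        SimpleBloomFilter bf = new SimpleBloomFilter(1 << 16);
        bf.add("hello");
        System.out.println(bf.mightContain("hello")); // true
        System.out.println(bf.mightContain("world")); // false (no collision for this input)
    }
}
```

Note that `mightContain` can only be trusted when it answers false; a true answer must still be confirmed against the real data store, which is exactly the "quick pre-filter" role described above.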

Practical application:

LRUCache

LRU: Least Recently Used

An LRU cache keeps the most recently used data and evicts the least recently used entries first.

  • Two elements: cache size, and replacement (eviction) policy
  • Implementation: hash table + doubly linked list
  • O(1) query
  • O(1) modify and update

LRU Cache — Java

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache {
    private Map<Integer, Integer> map;

    public LRUCache(int capacity) {
        map = new LinkedCappedHashMap<>(capacity);
    }

    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        return map.get(key);
    }

    public void put(int key, int value) {
        map.put(key, value);
    }

    private static class LinkedCappedHashMap<K, V> extends LinkedHashMap<K, V> {
        int maximumCapacity;

        LinkedCappedHashMap(int maximumCapacity) {
            // accessOrder = true: iteration order is least-recently-accessed first
            super(16, 0.75f, true);
            this.maximumCapacity = maximumCapacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the eldest (least recently used) entry once over capacity
            return size() > maximumCapacity;
        }
    }
}
```

Sorting algorithm

Sorting algorithm

  • Comparison sort: determines the relative order of elements by comparisons. Its time complexity cannot beat O(n log n), so it is also called nonlinear-time comparison sort.
  • Non-comparison sort: does not determine relative order by comparing elements, so it can break through the comparison-sort lower bound and run in linear time; it is therefore also called linear-time non-comparison sort.

  • Ten classic sorting algorithms
  • Visual animation of 9 classical sorting algorithms
  • Watch 15 sorting algorithms in 6 minutes

Elementary sorts: O(n^2)

  • Selection Sort: repeatedly find the minimum value and place it at the start of the unsorted part of the array.
  • Insertion Sort: build the sorted sequence incrementally from front to back; for each unsorted element, scan backwards through the sorted part to find the right position and insert it.
  • Bubble Sort: nested loops; each pass swaps adjacent elements that are out of order.
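The insertion-sort description above can be sketched as follows (the class name `InsertionSortDemo` is illustrative):

```java
import java.util.Arrays;

public class InsertionSortDemo {
    // Insertion sort: grow a sorted prefix; for each new element,
    // scan backwards through the sorted part and insert it in place. O(n^2)
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int cur = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > cur) {
                a[j + 1] = a[j]; // shift larger elements one slot right
                j--;
            }
            a[j + 1] = cur; // drop the element into its position
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        insertionSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
    }
}
```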

Advanced sorts: O(n log n)

  • Quick Sort

Pick a pivot, place smaller elements to its left and larger elements to its right, then quick-sort the left and right subarrays in turn, so that the whole sequence becomes ordered.

  • Merge Sort — divide and conquer
    • Split the length-n input sequence into two length-n/2 subsequences;
    • Merge sort each of the two subsequences recursively;
    • Merge the two sorted subsequences into the final sorted sequence.
  • Heap Sort: insertion is O(log n), finding the max/min is O(1)
    • Insert the array elements one by one into a min-heap
    • Repeatedly take the top element of the heap and delete it

Merge sort is similar to quicksort, but the order of the steps is reversed:

  • Merge sort: first sort the left and right subarrays, then merge the two sorted subarrays
  • Quicksort: first partition the array into left and right subarrays, then sort each subarray
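The "partition first, then recurse" structure of quicksort can be sketched as follows (the class name `QuickSortDemo` and the last-element pivot are illustrative choices):

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Quicksort: partition around a pivot (smaller left, larger right),
    // then recursively sort the two sides. Average O(n log n)
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);
        quickSort(a, p + 1, hi);
    }

    // Lomuto partition with the last element as pivot;
    // returns the pivot's final index
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {3, 8, 2, 5, 1, 4};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 8]
    }
}
```

Merge sort would invert this shape: recurse on both halves first, then do the O(n) merge on the way back up.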

Special sorts: O(n)

  • Counting Sort: requires the input to be integers within a known range. Each input value is used as a key (index) into an extra count array; the sorted output is then produced by writing each value back in order, as many times as it was counted.
  • Bucket Sort

Bucket sort works by distributing the data into a limited number of buckets, assuming the input is roughly evenly distributed, and sorting each bucket separately (with another sorting algorithm, or by recursively applying bucket sort).

  • Radix Sort: sort by the lowest-order digit first and collect the results; then sort by the next digit and collect; continue up to the highest-order digit. When attributes have priorities, sort first by the low-priority attribute, then by the high-priority one.
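The counting-sort idea from the list above can be sketched as follows (the class name `CountingSortDemo` is illustrative; the sketch assumes non-negative integers up to a known `max`):

```java
import java.util.Arrays;

public class CountingSortDemo {
    // Counting sort: only for integers in a known range [0, max].
    // Count occurrences, then write each value back in order, count times. O(n + max)
    static void countingSort(int[] a, int max) {
        int[] count = new int[max + 1];
        for (int v : a) count[v]++;           // tally each value
        int idx = 0;
        for (int v = 0; v <= max; v++) {
            while (count[v]-- > 0) a[idx++] = v; // emit v as many times as counted
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 2, 2, 8, 3, 3, 1};
        countingSort(a, 8);
        System.out.println(Arrays.toString(a)); // [1, 2, 2, 3, 3, 4, 8]
    }
}
```

Radix sort applies this same stable counting pass once per digit, from lowest to highest.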

Week 9

Advanced dynamic programming

String algorithm

Boyer Moore algorithm

Sunday algorithm

String matching violence code example

Rabin-karp code example

KMP string matching algorithm video

String matching KMP algorithm

Comparison of various data structures and temporal and spatial complexity:

Commonly used problem-solving techniques

  • Brute-force loops
  • Two pointers: left/right pointers, fast/slow pointers
  • Look for patterns: find the nearest repeating subproblem, identify the repeated structure, generalize it, and express the recurrence in code
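The left/right-pointer technique can be sketched with the classic "two sum in a sorted array" problem (the class name `TwoPointerDemo` is illustrative):

```java
public class TwoPointerDemo {
    // Left/right pointers on a sorted array: find two numbers summing to target.
    // Move left up if the sum is too small, right down if too big. O(n)
    static int[] twoSumSorted(int[] a, int target) {
        int left = 0, right = a.length - 1;
        while (left < right) {
            int sum = a[left] + a[right];
            if (sum == target) return new int[]{left, right};
            if (sum < target) left++;   // need a larger sum
            else right--;               // need a smaller sum
        }
        return new int[]{-1, -1}; // no pair found
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 4, 6, 9};
        int[] idx = twoSumSorted(a, 10);
        System.out.println(idx[0] + " " + idx[1]); // 0 4 (a[0] + a[4] = 1 + 9 = 10)
    }
}
```

Because the array is sorted, each pointer moves in only one direction, so the scan finishes in a single linear pass instead of the O(n^2) brute-force double loop.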