This is the 24th day of my participation in the August Text Challenge. More challenges in August.

⚽ algorithm problem

Given an array of distinct positive integers candidates and a positive integer target, find all unique combinations of candidates whose chosen numbers sum to target.

The same number may be chosen from candidates an unlimited number of times. Two combinations are unique if the frequency of at least one of the chosen numbers is different.

For the given input, it is guaranteed that the number of unique combinations that sum up to target is fewer than 150.

Example 1:
Input: candidates = [2,3,6,7], target = 7
Output: [[7],[2,2,3]]

Example 2:
Input: candidates = [2,3,5], target = 8
Output: [[2,2,2,2],[2,3,3],[3,5]]

Example 3:
Input: candidates = [2], target = 1
Output: []

Example 4:
Input: candidates = [1], target = 1
Output: [[1]]

Example 5:
Input: candidates = [1], target = 2
Output: [[1,1]]

Tips:
1 <= candidates.length <= 30
1 <= candidates[i] <= 200
All elements of candidates are distinct.
1 <= target <= 500

⚽ A little thought

This problem gives a strong feeling of déjà vu, doesn't it? There is a simple idea: pick numbers and compare the running sum with target. If two numbers are not enough, keep picking another number to add; if the sum overshoots, back off and try something else. In plain English, it's recursion. All right, now that we have an idea, let's do it.

⚽ source code and solution

Now the implementations are starting to get a little harder. In terms of data structures, we have only covered linked lists and stacks so far, and today's problem involves depth-first traversal, which is the essence of search. If you don't understand it, leave a comment or send me a private message, and I will find resources to explain it for you.

import java.util.ArrayList;
import java.util.List;

class Solution {
    public List<List<Integer>> combinationSum(int[] candidates, int target) {
        List<List<Integer>> ans = new ArrayList<List<Integer>>();
        List<Integer> combine = new ArrayList<Integer>();
        dfs(candidates, target, ans, combine, 0);
        return ans;
    }

    // Depth-first search for combinations that sum to target.
    // target2 records how much is still left to reach target,
    // arr collects the results, combine is the combination being built,
    // and temp is the index of the current position in candidates.
    public void dfs(int[] candidates, int target2, List<List<Integer>> arr, List<Integer> combine, int temp) {
        // As emphasized many times: recursion must know its termination condition
        if (temp == candidates.length) {
            return;
        }
        if (target2 == 0) {
            arr.add(new ArrayList<Integer>(combine));
            return;
        }
        // Skip the current number, i.e. move temp on to search the next position
        dfs(candidates, target2, arr, combine, temp + 1);
        // Select the current number (it may be reused, so temp stays the same)
        if (target2 - candidates[temp] >= 0) {
            combine.add(candidates[temp]);
            dfs(candidates, target2 - candidates[temp], arr, combine, temp);
            combine.remove(combine.size() - 1);
        }
    }
}
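To sanity-check the skip-or-take DFS above, here is a self-contained sketch of the same logic with a small driver. The class and method layout here are mine for demonstration, not from any library:

```java
import java.util.ArrayList;
import java.util.List;

public class CombinationSumDemo {
    public static List<List<Integer>> combinationSum(int[] candidates, int target) {
        List<List<Integer>> ans = new ArrayList<>();
        dfs(candidates, target, ans, new ArrayList<>(), 0);
        return ans;
    }

    // remaining: how much is left to reach target; idx: current position in candidates
    private static void dfs(int[] candidates, int remaining, List<List<Integer>> ans,
                            List<Integer> combine, int idx) {
        if (idx == candidates.length) {
            return;                                  // ran out of candidates
        }
        if (remaining == 0) {
            ans.add(new ArrayList<>(combine));       // found a valid combination
            return;
        }
        dfs(candidates, remaining, ans, combine, idx + 1);   // skip candidates[idx]
        if (remaining - candidates[idx] >= 0) {              // take candidates[idx], reuse allowed
            combine.add(candidates[idx]);
            dfs(candidates, remaining - candidates[idx], ans, combine, idx);
            combine.remove(combine.size() - 1);              // backtrack
        }
    }

    public static void main(String[] args) {
        System.out.println(combinationSum(new int[]{2, 3, 6, 7}, 7)); // prints [[7], [2, 2, 3]]
    }
}
```

Running it on Example 1 reproduces the expected output; the backtracking step (removing the last element after the recursive call) is what lets one `combine` list be shared across all branches.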

⚽ interview questions

Why is Kafka so efficient?

(1) The first is that Kafka's I/O efficiency is high: messages are appended to partition log files sequentially, so disk access is mostly sequential rather than random.

(2) The second is that Kafka reads data based on SendFile to implement Zero Copy

The traditional data-read path is as follows:
1. The read function is called and the file data is copied from disk into the kernel buffer;
2. The read function returns and the data is copied from the kernel buffer into the user buffer;
3. The write function is called and the data is copied from the user buffer into the kernel's socket-related buffer;
4. The data is copied from the socket buffer to the relevant protocol engine.

But Kafka reads like this:
1. The sendfile system call is made and the file data is copied into the kernel buffer;
2. The data is copied from the kernel buffer into the kernel's socket buffer;
3. The socket buffer is copied to the protocol engine.
The data never enters user space, which is what makes it "zero copy".

(3) The third is data compression: Kafka compresses messages in batches, reducing both network and disk I/O.

(4) The fourth is that Kafka producers send messages in batches and use two threads to produce them: the main thread and the Sender thread.
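In the JVM, Kafka's sendfile-based reads go through NIO's `FileChannel.transferTo`, which on Linux can map to the `sendfile(2)` system call when the destination is a socket. The sketch below demonstrates the same API; a temp file stands in for the consumer socket here (file-to-file transfer still keeps the data out of user space on most platforms), and the class and method names are my own illustrative choices:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Transfer the whole of src into dst without copying through a user-space buffer.
    public static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
            // transferTo hands the copy to the kernel; with a socket target on
            // Linux this is implemented with sendfile(2)
            return in.transferTo(0, in.size(), out);
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical temp file standing in for a Kafka log segment
        Path segment = Files.createTempFile("segment", ".log");
        Path sink = Files.createTempFile("sink", ".out");
        Files.write(segment, "hello kafka".getBytes());
        System.out.println("transferred " + copy(segment, sink) + " bytes");
    }
}
```

Contrast this with a read-into-byte-array-then-write loop: there the data crosses the kernel/user boundary twice per chunk, which is exactly the overhead the sendfile path avoids.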