
There’s no way around slider verification. Let’s do it.

Where the difficulty lies

I've read a lot of blog posts about how to get past slider verification during login, and it really comes down to two things:

  1. Calculating the distance between the notch in the background image and the slider piece
  2. Simulating a human-like slide

Calculating the distance

When the full background image is available

Some pages expose the complete background image. In that case you can compare the complete image with the notched one pixel by pixel and find the distance from the left edge of the image to the left edge of the notch. For details, refer to the "night bucket small shrine" blog.
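For illustration, here is a minimal sketch of that pixel-comparison idea, assuming both images have the same dimensions; the function name, paths and thresholds are my own choices, not taken from the referenced post:

```python
import cv2
import numpy as np

def find_gap_x(full_path, notched_path, pixel_threshold=60, count_threshold=5):
    """Return the x coordinate where the notched background starts to
    differ noticeably from the complete background image."""
    full = cv2.imread(full_path, 0)        # complete background, grayscale
    notched = cv2.imread(notched_path, 0)  # background with the notch cut out
    diff = cv2.absdiff(full, notched)      # per-pixel absolute difference
    # a column belongs to the notch if enough of its pixels differ strongly
    changed_cols = np.where((diff > pixel_threshold).sum(axis=0) > count_threshold)[0]
    return int(changed_cols[0]) if changed_cols.size else -1
```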

When the full background image is not available

Some sites, however, never expose the full background image. In that case we can only match the slider piece against the notched background image to find the offset between them. The usual tool is cv2.matchTemplate, so let's look at how it works.

cv2.matchTemplate

cv2.matchTemplate slides the template image over the target image from left to right and top to bottom; at each position it takes the region of the target that overlaps the template (anchored at the top-left pixel, the same size as the template), compares the two pixel by pixel, and writes the similarity into a result matrix. cv2.minMaxLoc then returns the maximum and minimum values in that matrix together with their coordinates, and the x coordinate of the best match is the distance we are after.

The distance calculation, step by step

  1. Read both images in grayscale
  2. cv2.matchTemplate to get the result matrix
  3. cv2.minMaxLoc to get the coordinates of the best match (sketched below)
  4. Scale the result to the size of the image rendered in the page
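A minimal sketch of steps 1–3, with the file names and the TM_CCOEFF_NORMED flag chosen by me for illustration:

```python
import cv2

# 1. read the two captcha images in grayscale (the 0 flag)
background = cv2.imread("captcha_background.jpg", 0)  # notched background (assumed path)
slider = cv2.imread("captcha_slider.jpg", 0)          # slider piece (assumed path)

# 2. slide the smaller image over the larger one; each position gets a similarity score
res = cv2.matchTemplate(background, slider, cv2.TM_CCOEFF_NORMED)

# 3. the best-scoring position; its x value is the raw distance to the notch
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
distance = max_loc[0]
```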

The first three steps gave me 300, while my own manual measurement was about 295. Not a big difference, but out of curiosity I drew a rectangle (cv2.rectangle) at the matched position to check it, and the result...

The position was wrong, so the accuracy of the match needed to be improved.

Image noise elimination

cv2.GaussianBlur smooths the image by replacing each pixel with a weighted average of itself and its neighbours, which removes noise; cv2.Canny then keeps only the edges, so the template match is done on shapes rather than raw pixel values.

```python
import cv2

def _tran_canny(image):
    """Denoise with a Gaussian blur, then keep only the edges with Canny."""
    image = cv2.GaussianBlur(image, (3, 3), 0)
    return cv2.Canny(image, 50, 150)   # upper threshold of 150 assumed

# blockJpg / templateJpg hold the file paths and are replaced by the grayscale images
blockJpg = cv2.imread(blockJpg, 0)
templateJpg = cv2.imread(templateJpg, 0)

# match the edge maps rather than the raw pixels (TM_CCOEFF_NORMED assumed as the method)
res = cv2.matchTemplate(_tran_canny(blockJpg), _tran_canny(templateJpg), cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
```

With that change, the rectangle finally lands on the notch correctly.

Then comes the final step: scaling. The distance we calculated is measured on the original image, but the image rendered in the page has been scaled proportionally, so sliding by the raw distance would miss the notch. The offset has to be multiplied by the ratio of the rendered width to the original image width.
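A minimal sketch of that scaling, assuming you can read the rendered width of the captcha image from the page; the names and example values are illustrative:

```python
# x offset of the best match, measured on the original (downloaded) image
raw_distance = max_loc[0]

# width of the image as rendered in the page vs. width of the original file
rendered_width = 340      # e.g. taken from the <img> element's size (example value)
original_width = 552      # width of the downloaded background image (example value)

# scale the offset to on-page pixels before sliding
gap = raw_distance * rendered_width / original_width
```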

Simulating the slide

Simulating the slide really means imitating how a person moves: slow at first, faster in the middle, then slow again before stopping. So the whole action is split into five parts:

  1. Click and hold, then pause for one second
  2. Accelerate gently over the first 1/5 of the distance
  3. Slide quickly over the middle 3/5
  4. Decelerate over the last 1/5
  5. Pause about 0.7 s to settle before releasing

Path calculation

```python
def get_tracks(self, distance):
    """Split the total distance into human-like steps: slow - fast - slow."""
    tracks = []                 # list of per-step offsets
    current = 0                 # distance covered so far
    front = distance * 1 / 5    # end of the gentle-acceleration phase
    mid = distance * 4 / 5      # start of the deceleration phase
    m = 0.2                     # duration of each step (value assumed)
    v = 0                       # current speed
    while current <= distance:
        if current < front:
            a = 1               # accelerate gently over the first 1/5 (value assumed)
        elif current < mid:
            a = 2               # accelerate over the middle 3/5
        else:
            a = -1              # decelerate over the last 1/5
        v0 = v
        # s = v0*t + 1/2*a*t^2
        s = v0 * m + 0.5 * a * (m ** 2)
        current += s
        tracks.append(round(s))
        # v = v0 + a*t
        v = v0 + a * m
    return tracks
```

Sliding

  1. Click and hold without releasing
  2. Slide along the track
  3. Release
```python
# assumes the usual Selenium setup: browser, wait (WebDriverWait), ActionChains,
# By, EC (expected_conditions) and time are imported / created elsewhere
tracks = self.get_tracks(gap)
print(gap)
# locate the slider button
element = wait.until(EC.presence_of_element_located(
    (By.XPATH, '//div[@class="secsdk-captcha-drag-icon sc-kEYyzF fiQtnm"]')))
# click and hold without releasing
ActionChains(browser).click_and_hold(on_element=element).perform()
# pause for 1s
time.sleep(1)
# move along the pre-computed track, one small offset at a time
for track in tracks:
    ActionChains(browser).move_by_offset(xoffset=track, yoffset=0).perform()
# settle briefly before releasing
time.sleep(0.5)
ActionChains(browser).release(on_element=element).perform()
```

And yet it was detected