The Deriche, Lanser, Shen, and Canny filters can be used to extract sub-pixel edges.

Subpixel:

The smallest unit of an array camera's imaging plane is the pixel; a typical chip might have a pixel pitch of 5.2 microns. When the camera takes a picture, the continuous scene in the physical world is discretized, and each pixel on the image plane represents only the color near it. How near is “near”? That is hard to pin down. Between two adjacent pixels there is a gap of 5.2 microns, which can be regarded as connected at the macro scale, but at the micro scale there is infinitely finer detail between them. This finer detail is called the “sub-pixel”. Sub-pixels exist in principle, but no sensor element is fine enough to detect them directly, so software approximates them.

Sub-pixel accuracy:

Sub-pixel precision refers to subdividing the space between two adjacent pixels. Typical subdivisions are one half, one third, or one quarter of a pixel: each pixel is divided into smaller units, and interpolation algorithms are applied on these smaller units. For example, choosing 1/4 means that each pixel is treated as four pixels both horizontally and vertically.
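Sub-pixel subdivision relies on interpolation between the measured pixel values. A minimal pure-Python sketch, using bilinear interpolation (one common choice; the tiny two-pixel image is made up for illustration) to sample an image at quarter-pixel positions:

```python
def bilinear(img, y, x):
    """Sample a grayscale image at a fractional (sub-pixel) position
    by bilinearly interpolating the four surrounding pixels."""
    y0, x0 = int(y), int(x)
    dy, dx = y - y0, x - x0
    y1 = min(y0 + 1, len(img) - 1)       # clamp at the image border
    x1 = min(x0 + 1, len(img[0]) - 1)
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

img = [[0, 100],
       [0, 100]]
# Quarter-pixel steps between the two columns:
samples = [bilinear(img, 0.0, x / 4) for x in range(5)]
print(samples)  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

The interpolated values between the two physical pixels are exactly the "smaller units" that sub-pixel algorithms operate on.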

Subpixel applications:

In machine vision, the sub-pixel is a fairly common concept. Many functions let you choose whether to use sub-pixel computation, and measurement results such as positions, lines, and circles are returned with sub-pixel values. For example, the diameter of a circle might be measured as 100.12 pixels; the 0.12 after the decimal point is the sub-pixel part. The minimum physical unit of an industrial camera really is the pixel, but machine-vision measurement can still obtain the digits after the decimal point, because they are calculated by software. In practice this fractional value is not necessarily very accurate. It is also more meaningful in grayscale images than in binary images, since a binary image contains only the values 0 and 1; for this reason, many functions do not compute sub-pixels at all.
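Why grayscale images carry sub-pixel information while binary images do not can be illustrated with a 1-D threshold crossing (the profile values and thresholds below are made up for illustration):

```python
def subpixel_crossing(profile, threshold):
    """Locate where a 1-D profile first rises through a threshold,
    with sub-pixel precision via linear interpolation."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a < threshold <= b:
            return i + (threshold - a) / (b - a)
    return None

gray = [10, 12, 15, 80, 200, 205]
binary = [0, 0, 0, 0, 1, 1]      # the same profile after binarization

print(subpixel_crossing(gray, 100))   # ~3.167: the fraction is the sub-pixel part
print(subpixel_crossing(binary, 0.5)) # 3.5, regardless of where the edge truly lay
```

In the grayscale profile the fractional position depends on the actual gray values, while the binary profile always yields the same midpoint, so the sub-pixel information is lost.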

 

Edge detection definition:

Edge detection is a basic tool in image processing, computer vision, and machine vision. It is usually used for feature extraction and feature detection, aiming to detect edges, i.e., areas of a digital image where the values change sharply. In one-dimensional signals, a similar operation is called step detection. An edge is the boundary between different regions in an image, and an edge image is usually a binary image. The purpose of edge detection is to capture the areas of sharp brightness change that are usually the focus of our attention. Discontinuities in an image are usually caused by one of the following:

(1) a discontinuity in image depth;

(2) a discontinuity in surface (gradient) orientation;

(3) a discontinuity in image illumination (intensity);

(4) a change in texture.

Ideally, an edge detector applied to a given image yields a set of continuous curves that represent the boundaries of the objects. Applying an edge detection algorithm therefore greatly reduces the amount of image data, filtering out a lot of information we do not need while keeping the important structure of the image, so the remaining processing is greatly simplified. However, edges extracted from real images are often fragmented: the detected curves are usually not continuous, some edge segments are broken or lost, and some detected edges are not of interest to us. This places high demands on the accuracy of the edge detection algorithm.


Edge detection algorithms are based on differentiation. Generally the image is filtered first and then segmented with a threshold. Since a first-order derivative can be computed with a single filter, first-order differentiation is generally used to find the edges. Because there is often noise in the image, which degrades its quality, the image should be smoothed first. Two kinds of convolution are therefore involved: one filter convolution that smooths the image, and another filter convolution that differentiates it.

Because convolution and differentiation commute, these two steps can be merged: we differentiate the smoothing filter once and then convolve the image with the differentiated filter. This yields the derivative of the smoothed image in a single convolution, from which the edges are obtained.

 

There are three criteria for selecting edge filter:

First, the output signal-to-noise ratio of the edge filter should be maximized, so that the probability of falsely detecting or missing an edge point is reduced.

Second, the variance of the extracted edge position should be minimized, so that the extracted edge lies closer to the real edge.

Third, the distance between the extracted edge positions should be maximized, so that the edge detector returns only one edge for each real edge, avoiding multiple responses.

Parameters: input image, output edges, filter, filter parameter, lower hysteresis threshold, upper hysteresis threshold.

 

Hysteresis: put simply, delay or lag; the lagging of one phenomenon behind another closely related phenomenon, especially when a physical effect does not immediately follow its cause.

 

Description of hysteresis threshold:

After edge filtering, the edges obtained are contours more than one pixel wide, so the image should be skeletonized to obtain a relatively clear edge contour. Sometimes non-maximum suppression is also required.
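Non-maximum suppression can be sketched in 1-D (a minimal illustration with a made-up amplitude profile; a real 2-D edge detector compares each pixel with its two neighbours along the gradient direction):

```python
def nonmax_suppress(amplitude):
    """Keep only local maxima of the edge amplitude; this thins
    responses that are several pixels wide down to single-pixel ridges."""
    out = [0] * len(amplitude)
    for i in range(1, len(amplitude) - 1):
        if amplitude[i] >= amplitude[i - 1] and amplitude[i] > amplitude[i + 1]:
            out[i] = amplitude[i]
    return out

amp = [0, 2, 5, 9, 5, 2, 0, 1, 3, 1]   # two wide humps in the edge amplitude
print(nonmax_suppress(amp))             # [0, 0, 0, 9, 0, 0, 0, 0, 3, 0]
```

Each broad hump in the amplitude profile collapses to the single position of its peak, which is exactly the thinning effect that skeletonization or non-maximum suppression provides.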

In this way, clear edges can be obtained by threshold segmentation of the edge amplitude, skeletonization of the segmented region, and then non-maximum suppression. However, when we choose a high threshold to ensure that only relevant edges are selected, the edges are usually split into short segments. On the other hand, if a low threshold is chosen to ensure that the edges do not break into segments, many irrelevant edges will be included in the final segmentation result. For this situation, Canny proposed a special threshold segmentation algorithm: hysteresis threshold segmentation.

Hysteresis threshold segmentation uses two thresholds, a high threshold and a low threshold. Points whose edge amplitude is greater than the high threshold are immediately accepted as safe edge points. Points whose edge amplitude is less than the low threshold are immediately rejected. Points with an edge amplitude between the two thresholds are accepted as edge points only if they can be connected to a safe edge point by a path in which every point has an edge amplitude greater than the low threshold. The process can also be understood as: first accept all points whose edge amplitude exceeds the high threshold, then extend the edges from these points for as long as the edge amplitude stays above the low threshold.
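The hysteresis rule just described can be sketched in pure Python; the amplitude grid and threshold values below are made up for illustration:

```python
def hysteresis(amplitude, low, high):
    """Accept pixels with amplitude >= high as safe edge points, then
    grow the edges through 8-connected neighbours with amplitude >= low."""
    rows, cols = len(amplitude), len(amplitude[0])
    edge = [[False] * cols for _ in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if amplitude[r][c] >= high]
    for r, c in stack:
        edge[r][c] = True
    while stack:                         # flood fill from the safe points
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and not edge[nr][nc] and amplitude[nr][nc] >= low):
                    edge[nr][nc] = True
                    stack.append((nr, nc))
    return edge

amp = [[0, 0, 0, 0, 0],
       [0, 4, 9, 4, 0],    # 9 is a safe point; the 4s connect to it
       [0, 0, 0, 4, 0],    # this 4 also reaches the chain
       [5, 0, 0, 0, 3]]    # 5 is isolated, 3 is below low: both rejected
result = hysteresis(amp, low=4, high=8)
print(sorted((r, c) for r in range(4) for c in range(5) if result[r][c]))
# [(1, 1), (1, 2), (1, 3), (2, 3)]
```

The weak but connected points survive, while the isolated weak point is discarded, which is exactly the behaviour that prevents both broken edges and spurious ones.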

 

 

 

Description:

edges_sub_pix can use recursive filters such as Deriche, Lanser, and Shen, or traditional filters such as the derivative-of-Gaussian filter (implemented with filter masks) to detect edges; the Filter parameter of edges_sub_pix takes values corresponding to these filters.

get_contour_attrib_xld:

Parameters: input contour, name of the attribute to query, returned attribute values. The attribute can be one of the following:

It is important to note that not every filter provides all of these attributes:

Except for ‘sobel_fast’, all filters provide the following three attributes:

‘angle’ — edge angle

‘edge_direction’ — edge direction

‘response’ — edge amplitude