I. Introduction

The Hough transform is a feature extraction technique in image processing. It detects objects with specific shapes through a voting algorithm: the process computes the local maxima of the accumulated votes in a parameter space, and the parameter set corresponding to those maxima is the result of the Hough transform. The Hough transform was first proposed by Paul Hough in 1962 [53] and was later popularized by Richard Duda and Peter Hart in 1972 [54]. The classical Hough transform was used to detect straight lines in images; it was subsequently extended to the recognition of objects with arbitrary shapes, most commonly circles and ellipses.

The Hough transform uses a mapping between two coordinate spaces: all points that lie on the same line (or curve) in one space are mapped to curves in a parameter space that pass through a single point, which appears as a peak. Detecting a shape is thereby turned into the statistical problem of detecting peaks in the parameter space. The previous section introduced the linear features of lane lines; this section introduces the principle of using the Hough transform to detect straight lines, together with the test results.

As we know, a line can be represented as y = kx + b in the Cartesian coordinate system. The main idea of the Hough transform is to exchange the parameters and variables of this equation, treating x and y as known quantities and k and b as the variable coordinates, so the line y = kx + b in the Cartesian system is represented as the point (k, b) in parameter space. Conversely, a point (x1, y1) in the image corresponds to the line y1 = k·x1 + b in parameter space, where (k, b) is any point on that line. For convenience of computation, we express the parameter space in polar form as (ρ, θ), using the line equation ρ = x·cosθ + y·sinθ. Since all points on the same line correspond to the same (ρ, θ), edge detection can be carried out first, and then every non-zero edge pixel is transformed into a curve in parameter coordinates. Points belonging to the same line in Cartesian coordinates then produce multiple curves in parameter space that intersect at one point, and this principle can be used for line detection.
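To make the voting idea concrete, here is a minimal illustrative MATLAB sketch of the accumulator. It is not the toolbox implementation used in the source code below; the test image ('coins.png', a demo image shipped with MATLAB) and the bin widths are assumptions chosen for brevity.

% Minimal voting sketch (illustrative only): every edge pixel votes for all
% (theta, rho) pairs it could lie on; peaks of the accumulator are the lines.
BW = edge(imread('coins.png'), 'canny');     % any binary edge map (demo image assumed)
thetas = -90:1:89;                           % theta bins in degrees
rhoMax = ceil(norm(size(BW)));               % largest possible |rho| (image diagonal)
H = zeros(2*rhoMax+1, numel(thetas));        % accumulator, rows indexed by rho bin
[yIdx, xIdx] = find(BW);                     % coordinates of the edge pixels
for p = 1:numel(xIdx)
    for t = 1:numel(thetas)
        rho = xIdx(p)*cosd(thetas(t)) + yIdx(p)*sind(thetas(t));
        H(round(rho)+rhoMax+1, t) = H(round(rho)+rhoMax+1, t) + 1;   % cast one vote
    end
end
% The largest entries of H give the (rho, theta) parameters of the strongest lines.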



As shown in the figure, any point (x, y) in the original image produces a curve in the parameter space. Taking one straight line in the figure as an example, its parameters are (ρ, θ) = (69.641, 30°). All points belonging to the same straight line produce curves that intersect at a single point in the parameter space, and that point gives the parameters of the corresponding line. The (ρ, θ) values obtained from all the lines in the figure yield a series of corresponding curves in the parameter space, as shown in the Hough transform statistics result.
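As an illustration (not part of the original program), the short MATLAB sketch below samples a few points from a line with the parameters (ρ, θ) = (69.641, 30°) taken from the figure (the sampled x positions are arbitrary assumptions) and plots the sinusoid each point traces in parameter space; all the curves intersect at that single peak.

% Illustrative sketch: parameter-space curves of points taken from one line.
rho0 = 69.641; th0 = 30;                     % line parameters assumed from the figure
theta = -90:0.5:89.5;                        % theta axis in degrees
xs = 0:40:160;                               % sample a few x positions on the line
ys = (rho0 - xs*cosd(th0)) / sind(th0);      % matching y so that x*cosd(th0) + y*sind(th0) = rho0
figure; hold on;
for k = 1:numel(xs)
    plot(theta, xs(k)*cosd(theta) + ys(k)*sind(theta));   % sinusoid traced by one point
end
plot(th0, rho0, 'ko', 'MarkerFaceColor', 'k');            % the curves intersect at (30 deg, 69.641)
xlabel('\theta (degrees)'); ylabel('\rho'); title('Curves intersecting at the line parameters');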

II. Source code

clear all;
close all;
clc;

img = imread('3.jpg');
img = rgb2gray(img);                        % convert to grayscale
%% Normalization and display
figure(1); imshow(mat2gray(img)); hold on;
[M, N] = size(img);

%% Tilt correction and perspective transformation
dot = [120 40; 401 73; 69 309; 339 395];    % four feature points: top-left, top-right, bottom-left, bottom-right (here the four corners are taken)
plot(dot(:,1), dot(:,2), '*');
title('Grayscale original image and its tilt correction feature points');

w = round(sqrt((dot(1,1)-dot(2,1))^2 + (dot(1,2)-dot(2,2))^2));   % width of the new rectangle, from the original quadrilateral
h = round(sqrt((dot(1,1)-dot(3,1))^2 + (dot(1,2)-dot(3,2))^2));   % height of the new rectangle, from the original quadrilateral
y = [dot(1,1) dot(2,1) dot(3,1) dot(4,1)];                        % source vertices: the quadrilateral taken from dot
x = [dot(1,2) dot(2,2) dot(3,2) dot(4,2)];
Y = [dot(1,1) dot(1,1) dot(1,1)+h dot(1,1)+h];                    % target vertices: the rectangle the quadrilateral is mapped to (any other quadrilateral would also work)
X = [dot(1,2) dot(1,2)+w dot(1,2) dot(1,2)+w];

B = [X(1) Y(1) X(2) Y(2) X(3) Y(3) X(4) Y(4)]';                   % transformed vertices, the right-hand side of the simultaneous equations
A = [x(1) y(1) 1 0 0 0 -X(1)*x(1) -X(1)*y(1);                     % coefficient matrix of the simultaneous equations
     0 0 0 x(1) y(1) 1 -Y(1)*x(1) -Y(1)*y(1);
     x(2) y(2) 1 0 0 0 -X(2)*x(2) -X(2)*y(2);
     0 0 0 x(2) y(2) 1 -Y(2)*x(2) -Y(2)*y(2);
     x(3) y(3) 1 0 0 0 -X(3)*x(3) -X(3)*y(3);
     0 0 0 x(3) y(3) 1 -Y(3)*x(3) -Y(3)*y(3);
     x(4) y(4) 1 0 0 0 -X(4)*x(4) -X(4)*y(4);
     0 0 0 x(4) y(4) 1 -Y(4)*x(4) -Y(4)*y(4)];
fa = inv(A)*B;                              % the eight perspective-transform coefficients solved from the four point pairs
a = fa(1); b = fa(2); c = fa(3); d = fa(4); e = fa(5); f = fa(6); g = fa(7); h = fa(8);
rot = [d e f; a b c; g h 1];

pix1 = rot*[1 1 1]'/(g*1 + h*1 + 1);        % transformed positions of the four image corners
pix2 = rot*[1 N 1]'/(g*1 + h*N + 1);
pix3 = rot*[M 1 1]'/(g*M + h*1 + 1);
pix4 = rot*[M N 1]'/(g*M + h*N + 1);
height = round(max([pix1(1) pix2(1) pix3(1) pix4(1)]) - min([pix1(1) pix2(1) pix3(1) pix4(1)]));
width  = round(max([pix1(2) pix2(2) pix3(2) pix4(2)]) - min([pix1(2) pix2(2) pix3(2) pix4(2)]));
imgn = zeros(height, width);
img_mask = zeros(height, width);

if min([pix1(1) pix2(1) pix3(1) pix4(1)]) >= 0
    delta_y = -round(abs(min([pix1(1) pix2(1) pix3(1) pix4(1)])));
else
    delta_y = round(abs(min([pix1(1) pix2(1) pix3(1) pix4(1)])));  % offset along the negative y direction
end
if min([pix1(2) pix2(2) pix3(2) pix4(2)]) >= 0
    delta_x = -round(abs(min([pix1(2) pix2(2) pix3(2) pix4(2)])));
else
    delta_x = round(abs(min([pix1(2) pix2(2) pix3(2) pix4(2)])));  % offset along the negative x direction
end

inv_rot = inv(rot);
for i = 1-delta_y : height-delta_y
    for j = 1-delta_x : width-delta_x
        pix = inv_rot*[i j 1]';             % coordinates in the original image; since [yw xw w] = fa*[y x 1] with w = g*y + h*x + 1
        pix = inv([g*pix(1)-1 h*pix(1); g*pix(2) h*pix(2)-1]) * [-pix(1) -pix(2)]';   % solve [pix(1)*(g*y+h*x+1); pix(2)*(g*y+h*x+1)] = [y; x] for (y, x)
        if pix(1) >= 0.5 && pix(2) >= 0.5 && pix(1) <= M && pix(2) <= N
            imgn(i+delta_y, j+delta_x) = img(round(pix(1)), round(pix(2)));   % nearest-neighbor interpolation; bilinear or bicubic could also be used
            img_mask(i+delta_y, j+delta_x) = 1;
        end
    end
end

% Display the corrected image and save it to file
figure(2); imshow(uint8(imgn)); title('Image after tilt correction');
print(gcf, '-djpeg', 'abc.jpeg');
hold on;

clearvars -except imgn img_mask
I = uint8(imgn);
hgram = 225:-1:15;                          % desired histogram for histogram specification
I = histeq(I, hgram);
figure(3), imshow(I, []); title('Histogram specification');
hold on;

rotI = medfilt2(I, [3 3]);                  % median filtering with a 3x3 window
figure(4), imshow(rotI, []); title('Median filtering'); hold on;
BW0 = edge(rotI, 'canny', 0.8);             % Canny edge detection
figure(5), imshow(BW0); title('Canny operator edge detection');
hold on ;

se1 = strel('disk', 50);                    % disk-shaped structuring element of radius 50
img_mask = imerode(img_mask, se1);
BW = BW0 & img_mask;                        % mask out edge responses near the image boundary
%% Hough transform line detection
[H, T, R] = hough(BW);
figure(6), imshow(imadjust(mat2gray(H)), [], 'XData', T, 'YData', R, 'InitialMagnification', 'fit');
xlabel('\theta (degrees)'), ylabel('\rho');
axis on, axis normal, hold on;
colormap(hot)
P = houghpeaks(H, 5, 'threshold', ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
plot(x, y, 's', 'color', 'black');
lines = houghlines(BW, T, R, P, 'FillGap', 52, 'MinLength', 35);
figure(7), imshow(uint8(imgn)), title('Line extracted by Hough transform'); hold on   % extract the pointer lines
for k = 1:length(lines)
   xy = [lines(k).point1; lines(k).point2];
   plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
 
   % Plot beginnings and ends of lines
   plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', 'yellow');
   plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', 'red');
end
%% Readings

III. Operation results















IV. Note

Version: MATLAB R2014a