
Low-density parity-check (LDPC) codes are a class of forward error correction codes. LDPC codes were first proposed by Gallager in his doctoral thesis in the early 1960s, but given the technical conditions of the time there was no feasible decoding algorithm, and they were largely ignored for the following 35 years. In 1981, Tanner generalized LDPC codes and gave their graph representation, later called the Tanner graph. In 1993, Berrou et al. discovered Turbo codes; building on this, MacKay and Neal re-examined LDPC codes around 1995 and proposed a feasible decoding algorithm, revealing the excellent performance of LDPC codes and quickly attracting strong interest and attention. After more than a decade of research and development, researchers have made breakthroughs in all aspects: the related technology of LDPC codes is increasingly mature, has begun to see commercial application, and has entered standards for wireless communication and other related fields.

1.1 Features of LDPC codes

An LDPC code is a block code whose parity-check matrix contains only a small number of non-zero elements. It is this sparsity of the parity-check matrix that keeps the decoding complexity and the minimum distance growing only linearly with the code length. The code itself is no different from any other block code, except that its parity-check matrix is sparse. In fact, if an existing block code can be expressed by a sparse matrix, the iterative decoding algorithms used for LDPC codes can be transplanted to it as well; in general, however, finding such a sparse matrix for an existing block code is not practical. The difference is that LDPC code design starts with the construction of a parity-check matrix, and a generator matrix is then determined for subsequent encoding. The coding of LDPC codes is the main topic of this paper.

The decoding method is the biggest difference between LDPC codes and classical block codes. Classical block codes are usually decoded with maximum-likelihood (ML) type algorithms, so they are generally short in code length and algebraically designed to reduce decoding complexity. LDPC codes, by contrast, are long and are decoded iteratively through the graph representation of the parity-check matrix H, so their design takes the characteristics of the parity-check matrix as one of the core considerations.

1.2 LDPC code construction

Constructing a binary LDPC code amounts to finding a sparse matrix H to serve as the parity-check matrix of the code. The basic method is to replace a small fraction of the elements of an all-zero matrix with 1s, so that the rows and columns of the resulting matrix have the required numbers of non-zero elements. For the constructed code to be usable, several conditions must be met: no short cycles, no low-weight codewords, and a minimum distance that is as large as possible.

1.3 Tanner graph

LDPC codes are often represented by graphs; the Tanner graph represents the parity-check matrix of an LDPC code. A Tanner graph contains two types of vertices: N codeword-bit vertices (called bit nodes), corresponding to the columns of the parity-check matrix, and M check-equation vertices (called check nodes), corresponding to its rows. Each row of the parity-check matrix represents a check equation and each column represents a codeword bit, so whenever a codeword bit is included in a check equation, an edge connects the corresponding bit node to that check node. The number of edges in the Tanner graph therefore equals the number of 1s in the parity-check matrix. The following figure is an example matrix.
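To make this mapping concrete, here is a minimal Python sketch (with a small example H chosen purely for illustration) that reads the Tanner-graph edges directly off a parity-check matrix:

```python
# Build the Tanner graph edge list of a small example parity-check matrix.
# Each edge connects check node i (row) to bit node j (column) wherever H[i][j] == 1.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

edges = [(i, j) for i, row in enumerate(H) for j, v in enumerate(row) if v == 1]

# The number of edges equals the number of ones in H.
assert len(edges) == sum(sum(row) for row in H)
print(len(edges))  # 12
```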



In the Tanner graph, bit nodes are represented by circular nodes, check nodes by square nodes, and the black lines show a length-6 cycle:



A cycle in a Tanner graph consists of a group of connected vertices in the graph: it starts and ends at the same vertex and passes through every other vertex only once. The length of a cycle is defined as the number of edges it contains, and the girth of the graph is defined as the minimum cycle length in the graph. In the figure above, the girth is 6, as shown by the added black lines.
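The no-short-cycle condition can be checked directly on H: a length-4 cycle exists exactly when two columns of H both have 1s in the same two rows. A minimal Python sketch (the example matrices are my own):

```python
from itertools import combinations

# Detect length-4 cycles in the Tanner graph of H: a 4-cycle exists exactly
# when some pair of columns shares 1s in two or more rows.
def has_4_cycle(H):
    M, N = len(H), len(H[0])
    for j1, j2 in combinations(range(N), 2):
        overlap = sum(1 for i in range(M) if H[i][j1] == 1 and H[i][j2] == 1)
        if overlap >= 2:
            return True
    return False

# Columns 0 and 1 share rows 0 and 1, so H_bad has a 4-cycle;
# H_ok has no 4-cycle (its shortest cycle has length 6).
H_bad = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 1]]
H_ok  = [[1, 1, 0],
         [1, 0, 1],
         [0, 1, 1]]
print(has_4_cycle(H_bad), has_4_cycle(H_ok))  # True False
```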

1.4 LDPC encoding

Method one: write H = (A | B). Gaussian elimination turns H into H' = [I | P], and the codeword is u = [c | s], where c holds the parity (supervision) bits and s the information bits. Because H'uᵀ = 0, we get Icᵀ + Psᵀ = 0, i.e. cᵀ = Psᵀ over GF(2), so c = Ps. If a column exchange occurs during Gaussian elimination, it is only necessary to record the exchanges and apply them to the encoded codeword in reverse order. After decoding u, undoing the column exchanges recovers u1 = [c | s], whose back part is the desired information.
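The steps above can be sketched in Python. This is a toy illustration over GF(2); the example H is my own and is chosen so that no column swaps are needed:

```python
# Encode with H = (A | B): row-reduce over GF(2) to H' = [I | P],
# then the parity bits satisfy c = P s (mod 2) and the codeword is u = [c | s].
def gf2_row_reduce(H):
    H = [row[:] for row in H]
    M = len(H)
    for i in range(M):
        # Find a pivot in column i (this toy example needs no column swap).
        piv = next(r for r in range(i, M) if H[r][i] == 1)
        H[i], H[piv] = H[piv], H[i]
        for r in range(M):
            if r != i and H[r][i] == 1:
                H[r] = [(a + b) % 2 for a, b in zip(H[r], H[i])]
    return H

def encode(H, s):
    M = len(H)
    P = [row[M:] for row in gf2_row_reduce(H)]
    c = [sum(P[i][k] * s[k] for k in range(len(s))) % 2 for i in range(M)]
    return c + s  # codeword u = [c | s]

H = [[1, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 1, 1],
     [1, 1, 1, 1, 1, 0]]

u = encode(H, [1, 0, 1])
# Every check must be satisfied: H u^T = 0 (mod 2).
assert all(sum(h * x for h, x in zip(row, u)) % 2 == 0 for row in H)
print(u)  # [0, 0, 1, 1, 0, 1]
```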





1.5 Construction of LDPC code H matrix

Regular LDPC codes and irregular LDPC codes: if the parity-check matrix has the same number of non-zero elements in every row and the same number of non-zero elements in every column, the LDPC code is called regular. By contrast, if the rows (or the columns) of the parity-check matrix have differing numbers of non-zero elements, the code is called irregular.
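A minimal Python sketch of this classification (the example matrices are chosen for illustration):

```python
# Classify a parity-check matrix as regular or irregular by its row/column weights.
def is_regular(H):
    row_weights = [sum(row) for row in H]
    col_weights = [sum(row[j] for row in H) for j in range(len(H[0]))]
    return len(set(row_weights)) == 1 and len(set(col_weights)) == 1

H_regular   = [[1, 1, 0, 0],
               [0, 0, 1, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1]]   # every row and every column has weight 2
H_irregular = [[1, 1, 1, 0],
               [0, 0, 1, 1],
               [1, 0, 0, 1]]   # row weights 3, 2, 2
print(is_regular(H_regular), is_regular(H_irregular))  # True False
```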

1.5.1 Irregular LDPC Code

Construction method: with code rate R = 0.5 (M/N = 0.5), the constructed H matrix has a fixed column weight, while the row weights are random. The following illustrates this with an H matrix of size 6×12 and column weight 2. Because the column weight is fixed at 2, the multiset of column coordinates is [1,1,2,2,3,3,4,4,…,12,12], and the row coordinates can be chosen randomly.
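This idea can be sketched in Python (the helper make_irregular_h is illustrative, not part of the MATLAB listing later in this article):

```python
import random

# Construct an irregular H: every column has weight exactly onePerCol,
# while the row weights come out random. Within a column the rows are
# sampled without replacement, so no position is set twice.
def make_irregular_h(M, N, onePerCol=2):
    H = [[0] * N for _ in range(M)]
    for j in range(N):
        for i in random.sample(range(M), onePerCol):  # onePerCol distinct rows
            H[i][j] = 1
    return H

random.seed(0)
H = make_irregular_h(6, 12, 2)
col_weights = [sum(row[j] for row in H) for j in range(12)]
assert col_weights == [2] * 12              # column weight is fixed at 2
print(sorted(set(sum(row) for row in H)))   # row weights vary randomly
```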









1.5.2 Regular LDPC Code

For a regular LDPC code with code rate R = 0.5, the row weight is tied to the column weight: the row weight is twice the column weight. Take a 6×12 H matrix as an example and assume its column weight is 2, i.e. each column contains two 1s. The total number of 1s is then 2×12 = 24. The H matrix has 6 rows, so if all rows have the same weight, each row contains 24/6 = 4 ones, i.e. the row weight is 4. Likewise, a column weight of 3 gives a row weight of 6.

Construction method: similar to the irregular construction; once the row and column weights are fixed, the multiset of coordinate positions in the matrix is fixed. For example, with column weight 2 and row weight 4, the column coordinate multiset is [1,1,2,2,3,3,…] and the row coordinate multiset is [1,1,1,1,2,2,2,2,3,3,3,3,…]. The next thing to consider is how to combine the row and column coordinates without repetition. The idea here is to keep the row coordinate multiset fixed while shuffling and rearranging the column coordinate multiset. Take the construction of an H matrix of size 5×10 with column weight 2 and row weight 4 as an example.
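The shuffle-and-pair idea can be sketched in Python (the helper make_regular_h is illustrative, not part of the MATLAB listing later in this article):

```python
import random

# Construct a regular H (e.g. 5x10, column weight 2, row weight 4) by pairing
# a fixed row-coordinate multiset with a shuffled column-coordinate multiset,
# reshuffling until no (row, column) pair repeats.
def make_regular_h(M, N, col_w, row_w):
    assert N * col_w == M * row_w            # same total number of 1s
    rows = [i for i in range(M) for _ in range(row_w)]  # [0,0,0,0,1,1,1,1,...]
    cols = [j for j in range(N) for _ in range(col_w)]  # [0,0,1,1,2,2,...]
    while True:
        random.shuffle(cols)
        pairs = list(zip(rows, cols))
        if len(set(pairs)) == len(pairs):    # no duplicate placement
            break
    H = [[0] * N for _ in range(M)]
    for i, j in pairs:
        H[i][j] = 1
    return H

random.seed(1)
H = make_regular_h(5, 10, 2, 4)
assert all(sum(row) == 4 for row in H)                    # row weight 4
assert all(sum(r[j] for r in H) == 2 for j in range(10))  # column weight 2
```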

1.6 LDPC decoding

When Gallager described LDPC codes, he proposed both a hard-decision and a soft-decision decoding algorithm. Through continuous development, today's hard-decision algorithms have progressed far beyond Gallager's original algorithm, including many weighted bit-flipping decoding algorithms and their improved variants. Hard and soft decisions each have their advantages and disadvantages and suit different applications.

1.6.1 Bit Flipping Algorithm (BF)

The hard-decision decoding algorithm is a complement to the soft-decision algorithm Gallager proposed for LDPC codes. Its basic assumption is that when a check equation is not satisfied, some of the bits involved must be wrong, and the bit that fails the most check equations has the highest probability of being in error. At each iteration, the bit with the highest error probability is flipped and the updated word is checked again.
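The iteration just described can be sketched in Python (a toy example; the H matrix and received word are chosen for illustration):

```python
# Bit-flipping (BF) decoding sketch: at each iteration, flip the bit involved
# in the largest number of unsatisfied check equations, until all checks pass.
def bf_decode(H, y, max_iter=20):
    y = y[:]
    M, N = len(H), len(H[0])
    for _ in range(max_iter):
        syndrome = [sum(H[i][j] * y[j] for j in range(N)) % 2 for i in range(M)]
        if not any(syndrome):
            return y  # all check equations satisfied
        # Count failed checks touching each bit and flip the worst offender.
        fails = [sum(syndrome[i] for i in range(M) if H[i][j]) for j in range(N)]
        y[fails.index(max(fails))] ^= 1
    return y

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
received = [0, 0, 1, 0, 0, 0]  # all-zero codeword with one bit flipped
print(bf_decode(H, received))  # [0, 0, 0, 0, 0, 0]
```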

II. Source code

close all; clc; clear all; warning off;
%----------------- Read the hidden picture -----------------
I = imread('W.bmp');
%----------------- Read the carrier image ------------------
cover_image = imread('lena.bmp');
I0 = rgb2gray(I);                      % grayscale
cover_image = rgb2gray(cover_image);   % grayscale
[wm0, watermarked_image, wm] = ldpc_dct(I0, cover_image);  % LDPC_DCT embedding and extraction
e = wm0 - wm;
[m, n] = size(e);
mse = sum((e(:).^2))/(m*n);
psnr = 10*log10(255^2/mse);  % PSNR between original and extracted watermark
disp(['PSNR of LDPC-improved DCT watermark extraction = ', num2str(psnr)])
figure(1)
subplot(221)
imshow(cover_image);
title('original');
subplot(222);
imshow(I0);
title('Watermark');
subplot(223);  % display the embedded image
uint8_watermarked_image=uint8(watermarked_image);
imshow(uint8_watermarked_image)
title('Embedded watermark image after improved LDPC encoding/decoding')
subplot(224);
imshow(double(wm));
title('Extracted watermark image after improved LDPC encoding/decoding')

%% Clipping attack
I_jianqie = I0;                    % copy the watermark before clipping
I_jianqie(20:30, 20:40) = 256;     % clip a block
[wm_jianqie0, watermarked_image_jianqie, wm_jianqie] = ldpc_dct(I_jianqie, cover_image);  % LDPC_DCT embedding and extraction
e_jianqie = wm_jianqie0 - wm_jianqie;
[m, n] = size(e_jianqie);
mse_jianqie = sum((e_jianqie(:).^2))/(m*n);
psnr_jianqie = 10*log10(255^2/mse_jianqie);  % PSNR between original and extracted watermark
disp(['PSNR after clipping attack = ', num2str(psnr_jianqie)])
figure(2)
subplot(221)
imshow(cover_image);
title('original');
subplot(222);
imshow(I0);
title('Watermark');
subplot(223);  % display the embedded image
uint8_watermarked_image_jianqie=uint8(watermarked_image_jianqie);
imshow(uint8_watermarked_image_jianqie)
title('Embedded watermark image after clipping attack')
subplot(224);
imshow(double(wm_jianqie));
title('Extracted watermark image after clipping attack')

%% Gaussian noise attack
I_gaosi = imnoise(I0, 'gaussian', 0, 0.01);   % Gaussian noise
[wm_gaosi0, watermarked_image_gaosi, wm_gaosi] = ldpc_dct(I_gaosi, cover_image);  % LDPC_DCT embedding and extraction
e_gaosi = wm_gaosi0 - wm_gaosi;
[m, n] = size(e_gaosi);
mse_gaosi = sum((e_gaosi(:).^2))/(m*n);
psnr_gaosi = 10*log10(255^2/mse_gaosi);  % PSNR between original and extracted watermark
disp(['PSNR after Gaussian noise attack = ', num2str(psnr_gaosi)])
figure(3)
subplot(221)
imshow(cover_image);
title('original');
subplot(222);
imshow(I0);
title('Watermark');
subplot(223);  % display the embedded image
uint8_watermarked_image_gaosi=uint8(watermarked_image_gaosi);
imshow(uint8_watermarked_image_gaosi)
title('Embedded watermark image after Gaussian noise attack')  
subplot(224);
imshow(double(wm_gaosi));
title('Extracted watermark image after Gaussian noise attack')

%% Rotation attack: rotate by 45 degrees
I_xuanzhuan = imrotate(I0, 45, 'bilinear', 'crop');   % rotate 45 degrees
[wm_xuanzhuan0, watermarked_image_xuanzhuan, wm_xuanzhuan] = ldpc_dct(I_xuanzhuan, cover_image);  % LDPC_DCT embedding and extraction
e_xuanzhuan = wm_xuanzhuan0 - wm_xuanzhuan;
[m, n] = size(e_xuanzhuan);
mse_xuanzhuan = sum((e_xuanzhuan(:).^2))/(m*n);
psnr_xuanzhuan = 10*log10(255^2/mse_xuanzhuan);  % PSNR between original and extracted watermark
disp(['PSNR after rotation attack = ', num2str(psnr_xuanzhuan)])
figure(4)
subplot(221)
imshow(cover_image);
title('original');
subplot(222);
imshow(I0);
title('Watermark');
function H = makeLdpc(M, N, method, noCycle, onePerCol)
% Create R = 1/2 low density parity check matrix
%
%  M        : Number of rows
%  N        : Number of columns
%  method   : Method for distributing non-zero elements
%             {0} Evencol : For each column, place 1s uniformly at random
%             {1} Evenboth: For each column and row, place 1s uniformly at random
%  noCycle  : Length-4 cycle
%             {0} Ignore (do nothing)
%             {1} Eliminate
%  onePerCol: Number of ones per column
%
%  H        : Low density parity check matrix                   
%
%
% Copyright Bagawan S. Nugroho, 2007 
% http://bsnugroho.googlepages.com


% Number of ones per row (N/M ratio must be 2)
if N/M ~= 2
   fprintf('Code rate must be 1/2\n');
end
onePerRow = (N/M)*onePerCol;

% fprintf('Creating LDPC matrix... \n');

switch method
   % Evencol
   case {0}
      % Distribute 1s uniformly at random within column
      for i = 1:N
         onesInCol(:, i) = randperm(M)';
      end

      % Create non zero elements (1s) index
      r = reshape(onesInCol(1:onePerCol, :), N*onePerCol, 1);
      tmp = repmat([1:N], onePerCol, 1);
      c = reshape(tmp, N*onePerCol, 1);

      % Create sparse matrix H
      H = full(sparse(r, c, 1, M, N));

   % Evenboth
   case {1}
      % Distribute 1s uniformly at random within column
      for i = 1:N
         onesInCol(:, i) = randperm(M)';
      end
        
      % Create non zero elements (1s) index
      r = reshape(onesInCol(1:onePerCol, :), N*onePerCol, 1);
      tmp = repmat([1:N], onePerCol, 1);
      c = reshape(tmp, N*onePerCol, 1);
     
      % Make the number of 1s between rows as uniform as possible     
      
      % Order row index
      [r, ix] = sort(r);
      
      % Order column index based on row index
      for i = 1:N*onePerCol
         cSort(i, :) = c(ix(i));
      end
      
      % Create new row index with uniform weight
      tmp = repmat([1:M], onePerRow, 1);
      r = reshape(tmp, N*onePerCol, 1);
      
      % Create sparse matrix H
      % Remove any duplicate non zero elements index using logical AND
      S = and(sparse(r, cSort, 1, M, N), ones(M, N));
      H = full(S);     
      
end % switch

% Check rows that have no 1 or only have one 1
for i = 1:M
   
   n = randperm(N);
   % Add two 1s if row has no 1
   if length(find(r == i)) == 0
      H(i, n(1)) = 1;
      H(i, n(2)) = 1;
   % Add one 1 if row has only one 1   
   elseif length(find(r == i)) == 1
      H(i, n(1)) = 1;
   end

end % for i

% If desired, eliminate any length-4 cycle
if noCycle == 1
   
   for i = 1:M
      % Look for pair of row - column
      for j = (i + 1):M         
         w = and(H(i, :), H(j, :));
         c1 = find(w);
         lc = length(c1);
         if lc > 1
                       
            % If found, flip one 1 to 0 in the row with less number of 1s
            if length(find(H(i, :))) < length(find(H(j, :)))
               % Repeat the process until only one column left 
               for cc = 1:lc - 1
                  H(j, c1(cc)) = 0;
               end
            else
               % Repeat the process until only one column left
               for cc = 1:lc - 1
                  H(i, c1(cc)) = 0;
               end
            end
         end
      end
   end
end

III. Operation results



IV. Remarks

MATLAB version: 2014a