I. Overview

Least Mean Squares (LMS) is the most basic adaptive filtering algorithm.

The LMS algorithm is one of the most commonly used algorithms in adaptive filtering. Unlike the Wiener filter, whose optimal coefficients are constructed once from a segment of the input sequence's autocorrelation function, an LMS filter's coefficients change with the input sequence: starting from an initial set of coefficients, the filter updates them iteratively according to the minimum mean square error criterion. For this reason LMS can, in principle, outperform the Wiener solution under the same conditions. The trade-off is that LMS adjusts gradually from its initial values, so there is a settling period before the system stabilizes. The settling time is controlled by the step-size factor: within a certain range, a larger step size shortens the settling time, but for convergence the step size must remain below a bound set by the trace of the input autocorrelation matrix R (commonly taken as μ < 1/tr(R)). In practice, LMS minimizes the instantaneous squared error rather than the true mean square error, i.e., it is a stochastic-gradient approximation of the minimum mean square error criterion. The basic signal relationships are as follows:
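In standard notation, with $\mathbf{x}(n)$ the input (array snapshot) vector, $d(n)$ the desired response, $\mathbf{w}(n)$ the weight vector, and $\mu$ the step size, these relationships are usually written as:

$$
\begin{aligned}
y(n) &= \mathbf{w}^{H}(n)\,\mathbf{x}(n) && \text{(filter / array output)}\\
e(n) &= d(n) - y(n) && \text{(error signal)}\\
\mathbf{w}(n+1) &= \mathbf{w}(n) + \mu\, e^{*}(n)\,\mathbf{x}(n), && 0 < \mu < \frac{1}{\operatorname{tr}(\mathbf{R}_{xx})} \quad \text{(weight update)}
\end{aligned}
$$

The source code below follows this scheme, with the refinement that $\mu$ is scaled by $1 - e^{-|e(n)|^{2}}$ to give a variable step size.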



II. Source code

%%----------------------- Define variables -----------------------%%
%% N       - number of elements in the array
%% d       - array spacing (in wavelengths)
%% ANG     - theta in degrees
%% thetaS  - desired user's angle of arrival
%% thetaI  - interferer's angle of arrival
%% T       - desired signal period
%% t       - desired signal time axis
%% S       - desired signal
%% I       - interfering signal
%% vS, vI  - steering (transfer) vectors
%% X       - total received signal
%% Rxx     - correlation matrix of the total received signal
%% mu      - convergence parameter (step size)
%% w       - uniform linear array weights found by the LMS algorithm
%% y       - array output
%% e       - error between the array output and the desired signal
%% theta   - range of AOAs (rad)
%% AF      - weighted array factor
%%-----------------------------------------------------------------%%

%%----- Assignment -----%%
N = 8; d = 0.5;

thetaS = 30*pi/180; thetaI = -60*pi/180;   % desired signal at 30 deg, interference at -60 deg

%%----- Desired and interfering signals -----%%
T = 1E-3; t = (1:100)*T/100; it = 1:100;   % 1E-3 means 1 ms

S = cos(2*pi*t/T);   % S is a 1-by-100 row vector
I = randn(1,100);    % 1-by-100 matrix of normally distributed random samples

%%----- Steering vectors (array factor) for each signal -----%%
vS = []; vI = []; i = 1:N;
vS = exp(1j*(i-1)*2*pi*d*sin(thetaS)).';   % .' is the non-conjugate transpose, giving an N-by-1 steering vector;
                                           % multiplying a signal sample by it gives the signal at each array element
vI = exp(1j*(i-1)*2*pi*d*sin(thetaI)).';

%%----- LMS weight solution -----%%
w = zeros(N,1);   % initial antenna array weights are all 0
X = vS + vI;
Rxx = X*X';                      % X' is the Hermitian transpose of X
mu = 1/(4*abs(trace(Rxx)));      % step size bounded by the trace of Rxx
wi = zeros(N,max(it));           % initial weights are 0
for n = 1:length(S)              % n is the iteration index
    x = S(n)*vS + I(n)*vI;       % snapshot received across the array
    y(n) = w'*x;                 % array output
    e = conj(S(n)) - y(n);       % error; conj returns the complex conjugate
    esave(n) = abs(e)^2;
    nu = mu*(1 - exp(-(abs(e))^2));   % variable step size
    w = w + nu*conj(e)*x;        % LMS weight update
    wi(:,n) = w;
end
% w = w/w(1);   % normalize the weights to the first element

%%----- Display weights -----%%
disp(' ')
disp('% -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- %')
disp(' ')
disp(['   N = ',num2str(N),' weights are:'])
disp(' ')
for m = 1:length(w)
    disp([' w',num2str(m),'=',num2str(w(m),3)])
end
disp(' ')

%%----- Plot output -----%%
%% 1.) Weights vs. iteration number
wi = wi.';
figure(1), plot(it,abs(wi(:,1)),'kx',it,abs(wi(:,2)),'ko',...
    it,abs(wi(:,3)),'ks',it,abs(wi(:,4)),'k+',...
    it,abs(wi(:,5)),'kd','markersize',2)
xlabel('Iteration no.'), ylabel('|weights|')
title('\bf Array weights')

%% 2.) Signal acquisition and tracking
figure(2)
plot(it,S,'k',it,real(y),'k--')
xlabel('Number of iterations'), ylabel('Signals')
title('\bf Desired signal acquisition and tracking')
legend('Desired signal','Array output')

%% 3.) Mean square error
figure(3), plot(it,esave,'k')
xlabel('Iteration no.'), ylabel('|e|^2')
title('\bf Mean square error vs. number of iterations')

%% 4.) Plot the array factor
theta = -pi/2:.01:pi/2;
AF = 0;
for i = 1:N
    AF = AF + conj(w(i))*exp(1j*(i-1)*2*pi*d*sin(theta));   % conjugate weight applied to the steering vector
end
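The listing computes the weighted array factor AF but does not include the plotting command. A minimal sketch of that last step (reusing the theta and AF variables above, with a normalized dB scale assumed) could be:

% Assumed plotting step for the array factor computed above:
% normalize the pattern and display it in dB versus angle of arrival.
figure(4)
plot(theta*180/pi, 20*log10(abs(AF)./max(abs(AF))), 'k')
xlabel('AOA (deg)'), ylabel('|AF| (dB)')
title('\bf Weighted array factor')
axis([-90 90 -40 0])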

III. Operation results







IV. Remarks

MATLAB version: 2014a