DEVELOPMENT OF AN ADAPTIVE ALGORITHM FOR SPARSE SYSTEM IDENTIFICATION


Abstract:-
In this paper, we develop adaptive algorithms for system identification where the model is sparse. The classical configurations of adaptive filtering are system identification, prediction, and noise cancellation. Low-complexity adaptive filtering algorithms are developed which exploit the sparsity of signals and systems. We design and analyse the least mean square (LMS), normalized least mean square (NLMS), and zero-attracting normalized LMS (ZA-NLMS) algorithms and apply them to the identification of sparse systems. The reweighted ZA-NLMS (RZA-NLMS) is developed to improve the filtering performance. Following compressive sensing, a norm relaxation is applied to improve the performance of adaptive LMS-type filtering. The ZA-LMS is obtained by combining the quadratic LMS cost function with a norm penalty, which generates a zero attractor in the LMS update. This results in two new algorithms, the Zero-Attracting LMS (ZA-LMS) and the Reweighted Zero-Attracting LMS (RZA-LMS). During the filtering process, this zero attractor promotes sparsity in the taps, and therefore the speed of convergence increases in the sparse system identification process.

Introduction:-
ISSN: 2320-5407 Int. J. Adv. Res. 6(5), 1315-1323
Adaptive filters are a significant part of signal processing. Filtering is the process of removing noise or other unwanted components from a signal. An adaptive filter is a digital filter with self-learning or self-adjusting characteristics. There are different applications of adaptive filtering, such as system identification, noise cancellation, linear prediction, and adaptive inverse modelling. Adaptive filtering algorithms have become a popular tool to cope with the unwanted noise present in a signal. In particular, the least mean square (LMS) and recursive least squares (RLS) algorithms are the most widely known. Indeed, the LMS is widely used due to its computational simplicity, whereas the RLS provides faster convergence [4]. The poor performance can be explained by observing two aspects: (a) slow convergence of the filter taps to their steady-state values, since the convergence time of the algorithm grows with the total filter length; (b) high steady-state misadjustment due to the estimation noise that inevitably occurs during the adaptation of the so-called inactive filter taps (i.e., taps with zero or close-to-zero values at steady state) [5]. Impulse responses of unknown systems can often be assumed to be sparse, containing only a few large coefficients interspersed among many negligible ones. Using such sparse prior information can improve the filtering/estimation performance. However, standard LMS filters do not exploit such information.
In past years, many algorithms exploiting sparsity were based on applying a subset-selection scheme during the filtering process, implemented via statistical detection of active taps or sequential partial updating. Other adaptive channel-estimation algorithms have been developed as zero-attracting algorithms, which combine the LMS algorithm with compressive sensing (CS) theory [8]; these are known as zero-attracting LMS (ZA-LMS). A further development is the reweighted zero-attracting LMS (RZA-LMS). The convergence speed of zero-attracting LMS can be improved by extending the same techniques to the affine projection algorithm (APA), yielding the zero-attracting APA (ZA-APA) and the reweighted zero-attracting APA (RZA-APA). As a result, the zero-attracting APA converges faster than the ZA-LMS variants. These zero-attracting algorithms are obtained by incorporating a norm penalty into the cost functions of the standard LMS and APA, respectively [6]. Furthermore, these norm-penalty algorithms drive the many inactive channel taps toward very small values while leaving the few active taps largely unaffected.
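As a rough illustration of the updates just described, the following is a minimal sketch assuming real-valued signals; the step size mu, attractor strength rho, and reweighting constant eps are illustrative names and values, not parameters taken from this paper:

```python
import numpy as np

def za_lms_update(w, u, d, mu=0.01, rho=1e-4):
    """One Zero-Attracting LMS step: the standard LMS update plus a
    sign-based zero attractor that pulls every tap toward zero."""
    e = d - w @ u                      # a priori error e(n)
    return w + mu * e * u - rho * np.sign(w)

def rza_lms_update(w, u, d, mu=0.01, rho=1e-4, eps=10.0):
    """One Reweighted ZA-LMS step: the attractor is scaled by
    1/(1 + eps*|w|), so large (active) taps are barely shrunk while
    small (inactive) taps are still driven toward zero."""
    e = d - w @ u
    return w + mu * e * u - rho * np.sign(w) / (1.0 + eps * np.abs(w))
```

The only difference between the two updates is the reweighting factor on the attractor, which is what weakens the shrinkage on active taps.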

Adaptive Filters:-
The purpose of an adaptive filter is to adjust its parameters according to its output so that a meaningful, low-error result is obtained. The adaptive filter coefficients adjust themselves to achieve the desired result, such as identifying an unknown filter or cancelling noise in the input signal. Adaptive filters are self-learning: a closed-loop adaptive filter uses feedback in the form of an error signal to refine its transfer function [1]. Adaptive algorithms are known for their approximate nature and simplicity of calculation, and they do not require previous knowledge of the signal statistics. Adaptive filters are therefore well suited to real-time applications where there is no time for statistical estimation.
The applications of adaptive filters differ in nature, but one thing common to all of them is that an input vector and a desired response are used to compute an estimation error. On this basis, adaptive filters fall into the four classes given in the following.

Applications of adaptive filters:-
System Identification:-System identification is the process of modelling a plant. Its implementation involves several steps: experimental planning, selection of a model structure, parameter estimation, and model validation [1]. Here we have an unknown plant which is linear and time-varying. The plant is described by a set of discrete-time measurements that capture the changes in plant output in response to a known input [9,10]. In the field of communication, system identification is a popular category of adaptive filtering and is also called mathematical modelling.

System identification involves constructing an estimate of an unknown system given only two signals, namely an input signal and a reference signal. Typically, the unknown system is modelled with a finite impulse response (FIR) [7] filter, and adaptive filtering algorithms are employed to compute an estimate of the response of the unknown system being identified.
FIR filters are implemented with nonrecursive structures. Adaptive FIR filters are the most popular ones due to their stability. The most widely used adaptive FIR filter structure is the transversal filter. The structure of the FIR filter is shown in Fig.
"H" is the Hermitian transpose of a vector or a matrix.
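The system identification setup described above can be sketched as follows: a hypothetical sparse FIR plant h, driven by a known input u(n), produces the noisy reference signal d(n). All sizes, seeds, and noise levels here are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sparse "unknown plant": a 64-tap FIR impulse response
# with only 4 active (nonzero) taps among many zero taps.
M = 64
h = np.zeros(M)
active = rng.choice(M, size=4, replace=False)
h[active] = rng.standard_normal(4)

# Known input signal and noisy reference signal d(n) = (h * u)(n) + v(n),
# where v(n) is a small measurement noise.
n_samples = 1000
u = rng.standard_normal(n_samples)
d = np.convolve(u, h)[:n_samples] + 1e-3 * rng.standard_normal(n_samples)
```

An adaptive FIR filter is then run on the pair (u, d) to produce an estimate of h.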
Adaptive filtering algorithms:-Adaptive linear filters are linear dynamical systems with adaptive structure and parameters. Adaptive filters have the property of adjusting their parameter values, i.e. the filter changes according to its output, in order to generate an output signal that matches the desired output and is free of undesired components, degradation, noise, and interference. The main task of the adaptive algorithm is to adjust the parameters of the adaptive filter so as to minimize the error signal, which is the difference between the signal at the output of the adaptive filter and the reference signal. The Least Mean Square (LMS) algorithm, introduced by Widrow and Hoff [1], is a popular method for adaptive system identification. Its applications include echo cancellation, channel equalization, interference cancellation and so forth.

The Least Mean Square (LMS) Algorithm:-
Simplicity is the main advantage of the LMS algorithm; due to this simplicity its range of applications in signal processing is wide, which has made the LMS a standard against which other linear adaptive algorithms are compared. The cost function of the LMS algorithm is:
J(n) = |e(n)|^2 ……………………….. (3.1)
where | · | denotes the magnitude and e(n) is the error signal, equal to the difference between the desired signal and the filter output signal,
e(n) = d(n) − ŵ^H(n)u(n) ……………… (3.2)
where ŵ(n) is the filter, represented by an M-by-1 tap-weight vector, u(n) is the M-by-1 input signal vector, and ^H is the Hermitian transpose of a vector or a matrix. The gradient vector of J(n) can then be expressed as:
∇J(n) = 2Rŵ(n) − 2p ……………………….. (3.3)
where R is the correlation matrix of the received signal and p is the cross-correlation vector between the received signal and the desired signal. The optimum solution of such a linear filter is known as the Wiener solution, given by
ŵ_o = R^(−1)p ……………………….. (3.4)
To estimate the gradient vector, one possible approach is to apply instantaneous approximations for R and p as follows:
R ≈ u(n)u^H(n),  p ≈ u(n)d(n) …………. (3.5)
The error signal e(n) consists of the difference between the desired signal d(n) and the output of the sparse adaptive filter y(n). When the output error e(n) is minimized, the adaptive filter represents a model for the unknown sparse system.
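A minimal sketch of LMS system identification following eqs. (3.1)-(3.2), assuming real-valued signals so the Hermitian transpose reduces to an ordinary transpose; the function name, filter length, and step size are our own illustrative choices:

```python
import numpy as np

def lms_identify(u, d, M=8, mu=0.05):
    """Identify an M-tap FIR system with the LMS algorithm.
    e(n) = d(n) - w^T u(n) is the error of eq. (3.2); the update
    w <- w + mu * e(n) * u(n) follows from the instantaneous
    approximations R ~ u u^T and p ~ u d (real-valued case)."""
    w = np.zeros(M)
    err = np.empty(len(d))
    for n in range(len(d)):
        # Tap-input vector [u(n), u(n-1), ..., u(n-M+1)], zero-padded at start.
        un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        e = d[n] - w @ un
        w += mu * e * un
        err[n] = e
    return w, err
```

When the output error is driven to (near) zero, the tap vector w is a model of the unknown system, as stated above.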

Affine Projection Algorithm (APA)
In adaptive filtering, data-reusing algorithms are applied to increase the rate of convergence when the input signal is correlated, but data reusing increases the misadjustment of these algorithms [2]. A well-known method in adaptive filtering applications is the APA and its variations, whose complexity and performance are intermediate between those of the LMS and of the RLS. Its applications include echo cancellation, channel equalization, interference cancellation [14-18], and so forth. Let us assume that the last N input signal vectors are organized in an M-by-N matrix as follows:
U(n) = [u(n), u(n − 1), ..., u(n − N + 1)] …………….. (3.19)
where u(n) denotes the vector of the input signal at time n and N denotes the APA order. We can also define vectors collecting the filter outputs y(n) and the desired samples d(n) over the last N instants. In general, a step size µ < 1 is used to control the convergence and the steady-state behaviour of the APA. The APA is a generalization of the NLMS adaptive filtering algorithm: when the APA order N is set to one, the update equation (3.25) reduces to the familiar NLMS algorithm.
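One APA iteration can be sketched as follows, assuming real-valued signals and a small regularization delta added so the N-by-N inverse stays well conditioned; the function and parameter names are illustrative:

```python
import numpy as np

def apa_update(w, U, d_vec, mu=0.5, delta=1e-6):
    """One affine-projection update of order N.
    U is the M-by-N matrix [u(n), ..., u(n-N+1)] of eq. (3.19),
    d_vec stacks the N most recent desired samples, and delta
    regularizes the Gram matrix before it is inverted."""
    e = d_vec - U.T @ w                       # N a priori errors
    G = U.T @ U + delta * np.eye(U.shape[1])  # regularized N-by-N Gram matrix
    return w + mu * U @ np.linalg.solve(G, e)
```

With N = 1 the matrix U has a single column, the Gram matrix is the scalar ||u(n)||^2, and the update reduces to the NLMS step, as noted above.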

Zero-Attracting Affine Projection Algorithm (ZA-APA)
For the conventional APA we can apply the same strategy and obtain a new cost function J2(n) by combining the instantaneous squared error with the l1-norm penalty of the coefficient vector. The new cost function is:
J2(n) = ||ŵ(n+1) − ŵ(n)||^2 + Re{[d(n) − U^H(n)ŵ(n+1)]λ} + α||ŵ(n+1)||_1 ……………….………....(4.1)
To minimize the cost function, we compute the partial derivative of J2(n) with respect to ŵ(n+1):
∂J2(n)/∂ŵ(n+1) = ŵ(n+1) − ŵ(n) − U(n)λ + α sgn[ŵ(n+1)]
Results are given to show the performance of the proposed sparsity-aware algorithms in stationary scenarios. First, we show the mean square error of the LMS and normalized LMS for a 40 dB signal. The performance of the ZA-APA and the RZA-APA is compared with that of the standard NLMS, the RZA-NLMS, and the standard APA. Four experiments have been designed to demonstrate their tracking and steady-state performance [5]. The parameters are varied in the simulation code, the mean square deviation (MSD) of each algorithm is computed against the number of iterations, and the resulting curves are plotted.
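The ZA-APA step and the mean square deviation metric used in the experiments can be sketched as below (real-valued case; alpha plays the role of the zero-attractor strength arising from the l1 penalty in (4.1), and all names and values are our illustrative assumptions):

```python
import numpy as np

def za_apa_update(w, U, d_vec, mu=0.5, alpha=1e-4, delta=1e-6):
    """One Zero-Attracting APA step: the regularized affine-projection
    update plus the zero attractor -alpha*sgn(w) contributed by the
    l1-norm term of the cost function."""
    e = d_vec - U.T @ w
    G = U.T @ U + delta * np.eye(U.shape[1])
    return w + mu * U @ np.linalg.solve(G, e) - alpha * np.sign(w)

def msd_db(w, w_true):
    """Mean square deviation ||w - w_true||^2 in dB, the quantity
    plotted against the iteration number in the experiments."""
    return 10.0 * np.log10(np.sum((w - w_true) ** 2) + 1e-300)
```

Plotting msd_db at each iteration for ZA-APA, RZA-APA, NLMS, RZA-NLMS, and the standard APA reproduces the kind of comparison described above.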

Conclusion:-
Thus, in this paper, we proposed sparsity-aware adaptive algorithms, namely affine projection algorithms, for sparse system identification. Zero-attracting techniques can be developed further to improve their performance when the system has a significant degree of sparsity and to obtain more appropriate results in future simulations.