Article

A General Zero Attraction Proportionate Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification

1 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
3 Department of Electronics, Valahia University of Targoviste, 130082 Targoviste, Romania
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(10), 229; https://doi.org/10.3390/sym9100229
Submission received: 18 September 2017 / Revised: 1 October 2017 / Accepted: 6 October 2017 / Published: 15 October 2017

Abstract:
A general zero attraction (GZA) proportionate normalized maximum correntropy criterion (GZA-PNMCC) algorithm is devised and presented on the basis of proportionate-type adaptive filter techniques and zero attracting theory to substantially improve the sparse system estimation behavior of the classical MCC algorithm within the framework of sparse system identification. The newly developed GZA-PNMCC algorithm is obtained by introducing a parameter adjusting function into the cost function of the typical proportionate normalized maximum correntropy criterion (PNMCC) algorithm to create a zero attraction term. The resulting optimization framework unifies the derivation of zero-attraction-based PNMCC algorithms. Owing to the GZA zero attraction, the GZA-PNMCC algorithm exploits the sparsity of the impulse response more fully than the proportionate-type NMCC algorithm. The superior performance of the GZA-PNMCC algorithm for estimating a sparse system in a non-Gaussian noise environment is verified by simulations.

    1. Introduction

    Adaptive filtering techniques have been extensively exploited, and they face a challenging problem when the impulse response (IR) of the unknown system is sparse [1,2,3]. As a result, developing sparse adaptive filters to handle such problems has become a hot topic among researchers [4,5,6,7,8,9,10,11,12]. In sparse systems, most of the coefficients are zero or near zero, while only a few coefficients have significant values [13,14,15,16]. Such sparse systems are commonly encountered in nature and in plenty of real-world engineering applications, such as multi-path channels in wireless communications [17,18], underwater communications [19,20] and acoustic channels [21]. For example, for signal transmission in wireless communication over hilly terrain, the channel response always exhibits multi-path frequency-selective fading due to the delays of the different propagation paths [22]. Thus, measurements show a sparse characteristic for such channels [23]. Furthermore, this phenomenon occurs in wireless voice transmission systems and networks, in which the echo path exhibits an active region together with unknown bulk delays caused by network propagation, buffer delays and encoding processing [24]. Usually, these bulk delay intervals carry little energy, which makes the impulse response sparse [25]. When dealing with this class of sparse impulse responses, traditional adaptive filters such as the normalized least mean square (NLMS) algorithm [26,27,28] may converge slowly, since all the system coefficients use the same step size. To overcome this drawback, proportionate-type techniques have been presented to exploit the natural sparseness of these impulse responses [11,29,30].
    The proportionate NLMS (PNLMS) algorithm is a popular sparse signal processing algorithm for handling such sparse systems; it updates the filter coefficients proportionately to their magnitudes [11]. As a result, the PNLMS converges faster at its initial stage than the NLMS algorithm [29,30,31]. After the initial convergence, however, the PNLMS's convergence deteriorates and can even become slower than that of the NLMS algorithm. Several improved PNLMS algorithms have been reported to treat systems of medium sparsity, including the improved PNLMS (IPNLMS) and PNLMS++ algorithms [30,31]. However, for highly sparse impulse responses, these algorithms still cannot deliver the convergence that the PNLMS achieves at the beginning of the iterations. Recently, information theoretic quantities have been used to build cost functions for adaptive signal processing applications [32,33,34,35,36,37,38]. To calculate these quantities, entropy estimators have been reported in detail. On this basis, a minimum error entropy (MEE) criterion has been developed within the adaptive filtering framework, using the entropy as a cost function that fits the signal structure well [32]. The estimation performance of the MEE shows that it is robust for processing impulsive noise signals [39]. However, it has a high computational complexity compared with the least mean square (LMS) algorithm, which poses a problem for practical applications [39,40]. Moreover, the correntropy can be used as an efficient optimality criterion in robust and sparse adaptive filtering [37]. Recently, a maximum correntropy criterion (MCC) algorithm was reported based on a cost function with localized similarity [40].
Furthermore, the MCC has robustness characteristics similar to those of the MEE algorithm, and it possesses an LMS-like complexity, making it suitable for practical engineering applications in impulsive noise environments. Additionally, Chen et al. proposed a novel similarity measure called generalized correntropy which, when used in robust adaptive filtering, can significantly outperform existing methods [37]. However, these MCC algorithms cannot utilize the sparseness of the unknown systems. Motivated by zero attracting theory, the zero attracting MCC (ZA-MCC), reweighted ZA-MCC (RZA-MCC), group-constrained MCC (GC-MCC) and reweighted GC-MCC (RGC-MCC) algorithms [41,42] have been reported for estimating sparse channels. However, these algorithms are built on the MCC algorithm to exploit the sparsity of multi-path channels, and they might be affected by the input scaling.
    In this paper, a general zero attraction proportionate normalized MCC (GZA-PNMCC) algorithm is developed for sparse system identification. The devised GZA-PNMCC algorithm is based on the proportionate adaptive filtering scheme and zero attracting techniques. To obtain the presented GZA-PNMCC, the following steps are taken. Firstly, the normalized MCC (NMCC) is presented, followed by the introduction of the proportionate-type technique into the NMCC to construct the PNMCC algorithm. Secondly, a parameter adjustment function is integrated into the PNMCC algorithm to create the proposed GZA-PNMCC algorithm, whose zero attraction can be tuned to suit various applications. The proposed GZA-PNMCC algorithm is analyzed, and the influence of different parameters and sparsity levels is investigated. The results demonstrate that the GZA-PNMCC algorithm provides faster convergence and better sparse system estimation performance in a mixed Gaussian noise environment. Unlike the GC-MCC and RGC-MCC, the proposed GZA-PNMCC is built on the PNMCC, which is a proportionate-type NMCC algorithm. The GZA-PNMCC algorithm utilizes the parameter adjustment function to adjust the penalties on the channel coefficients so as to approximate various norms, which also differs from the GC and RGC techniques that can only use the l0-norm penalty on the large group and the l1-norm penalty on the small group. The proposed GZA-PNMCC can thus exert various penalties on the channel coefficients. Since the proposed GZA-PNMCC and the previous GC-MCC algorithms are based on different basic algorithms and use different penalties, we believe that the proposed GZA-PNMCC algorithm can further exploit the inherent properties of multi-path channels.
    The paper is organized as follows. The NMCC algorithm and its proportionate form, the PNMCC algorithm, are presented in Section 2. In Section 3, the GZA-PNMCC algorithm is mathematically derived within the framework of sparse system identification. The influence of key parameters and sparsity levels on the estimation performance of the GZA-PNMCC is investigated in Section 4. Finally, conclusions are summarized in Section 5.

    2. Past Works on NMCC and PNMCC Algorithms

    2.1. NMCC Algorithm

    We herein consider a sparse unknown system whose impulse response is w_o(n) = [w_0, w_1, w_2, \ldots, w_{N-2}, w_{N-1}]^T. We use the NMCC algorithm to obtain an estimate \hat{w}(n) of the unknown system w_o(n) by using a training signal x(n) = [x(n), x(n-1), x(n-2), \ldots, x(n-N+2), x(n-N+1)]^T, and the estimated error is e(n). Here, e(n) denotes the difference between the expected signal d(n) and the estimator output signal y(n). Based on adaptive filtering theory, we have:
    d(n) = x^T(n) w_o(n) + v(n),  (1)
    y(n) = x^T(n) \hat{w}(n),  (2)
    and:
    e(n) = d(n) - x^T(n) \hat{w}(n),  (3)
    where v(n) denotes impulsive noise. NMCC-based system identification tries to find the solution of the following problem [40]:
    \min \tfrac{1}{2} \|\hat{w}(n+1) - \hat{w}(n)\|^2, \quad \text{subject to } \hat{e}(n) = \left[1 - \xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)\right] e(n),  (4)
    where \hat{e}(n) = d(n) - x^T(n)\hat{w}(n+1), \|\cdot\|_2 denotes the Euclidean vector norm, \sigma > 0 gives a tradeoff between the convergence and the MSE, and \xi = \chi \|x(n)\|^2. By using the Lagrange multiplier method (LMM), we can obtain the NMCC's updating equation [40]:
    \hat{w}(n+1) = \hat{w}(n) + \frac{\chi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)}{\|x(n)\|^2} e(n) x(n).  (5)
    From the derivation of the NMCC algorithm, we find that the updating Equation (5) of the NMCC algorithm is similar to that of the NLMS algorithm. However, the exponential term in (5) renders the NMCC algorithm more robust against impulsive-like noise.
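As an illustration of the recursion in (5), one NMCC iteration can be sketched as below. This is a minimal sketch, not the authors' code: the function name, the default parameter values and the small guard constant eps are our own assumptions.

```python
import numpy as np

def nmcc_update(w_hat, x, d, chi=0.5, sigma=1.0, eps=1e-8):
    """One NMCC iteration following Equation (5).

    w_hat : current coefficient estimate (length N)
    x     : current regressor x(n) (length N)
    d     : desired sample d(n)
    chi   : step-size-like parameter; sigma: correntropy kernel width
    eps   : small guard on the norm (an added safeguard, not part of (5))
    """
    e = d - x @ w_hat                          # error e(n), Equation (3)
    kernel = np.exp(-e**2 / (2.0 * sigma**2))  # correntropy kernel weight
    return w_hat + chi * kernel * e * x / (x @ x + eps)
```

The kernel factor shrinks the step whenever |e(n)| is large relative to sigma, which is what suppresses the effect of impulsive samples.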

    2.2. PNMCC Algorithm

    A gain assignment matrix G(n) similar to that of the well-known PNLMS algorithm is used to devise the PNMCC algorithm. By introducing G(n) into (5), the PNMCC's updating equation is obtained [11,43]:
    \hat{w}(n+1) = \hat{w}(n) + \frac{\chi G(n) \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)}{x^T(n) G(n) x(n) + \vartheta} e(n) x(n),  (6)
    where \vartheta > 0 denotes a small constant that prevents division by zero and provides a stable solution. Similar to the PNLMS algorithm, G(n) denotes a diagonal matrix that modifies the step size of the NMCC algorithm under a certain rule. Generally speaking, G(n) is [11]:
    G(n) = \mathrm{diag}(g_0(n), g_1(n), \ldots, g_{N-1}(n)),  (7)
    where the elements g_i(n) are described as:
    g_i(n) = \frac{\kappa_i(n)}{\sum_{i=0}^{N-1} \kappa_i(n)}, \quad 0 \le i \le N-1,  (8)
    with:
    \kappa_i(n) = \max\left[\gamma_g \max\left(\rho_p, |\hat{w}_0(n)|, |\hat{w}_1(n)|, \ldots, |\hat{w}_{N-1}(n)|\right), |\hat{w}_i(n)|\right],  (9)
    where \rho_p > 0 and \gamma_g > 0 usually take the typical values \rho_p = 0.01 and \gamma_g = 5/N. \rho_p prevents the weight update from stalling at the initial iteration stage, when all the system coefficients are set to zero, and \gamma_g prevents \hat{w}_i(n) from stalling when it is much smaller than the largest coefficient.
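The gain rule of Equations (7)-(9) can be sketched numerically as follows. This is an illustrative sketch; the function name and defaults are our own, with gamma_g defaulting to the typical 5/N stated above.

```python
import numpy as np

def gain_matrix(w_hat, rho_p=0.01, gamma_g=None):
    """Diagonal entries g_i(n) of the PNLMS-type gain matrix G(n),
    following Equations (7)-(9)."""
    N = len(w_hat)
    if gamma_g is None:
        gamma_g = 5.0 / N                 # typical value from the text
    mags = np.abs(w_hat)
    # kappa_i(n) = max(gamma_g * max(rho_p, max_j |w_j|), |w_i|)
    floor = gamma_g * max(rho_p, mags.max())
    kappa = np.maximum(floor, mags)
    return kappa / kappa.sum()            # g_i(n), Equation (8)
```

At initialization (all coefficients zero) the floor term makes every gain equal to 1/N, so the update starts like the NMCC; once a coefficient grows, its gain, and hence its effective step size, grows proportionately.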

    3. Proposed GZA-PNMCC Algorithm

    The GZA-PNMCC algorithm is based on a parameter adjustment function constraint and on the PNMCC algorithm, and it enhances the convergence of the PNMCC algorithm after the initial stage. To realize the GZA-PNMCC algorithm, a parameter adjustment function is incorporated into the PNMCC's cost function; the parameter adjustment function is given as follows:
    S_\beta(\hat{w}(n)) = (1 + \beta^{-1}) \left(1 - e^{-\beta |\hat{w}(n)|}\right),  (10)
    where \beta > 0 represents a regulation parameter that shapes the desired general zero attraction. The behavior of the parameter adjustment function for different \beta is investigated and shown in Figure 1. It can be seen that the parameter adjustment function approximates an l0-norm when a large \beta is used. If we choose a very small \beta, the parameter adjustment function behaves similarly to the l1-norm. Therefore, a proper \beta should be selected to develop a flexible norm-like penalty. Based on this analysis, the parameter adjustment function can implement l1-norm-like, l0-norm-like and even lp-norm-like penalties. Since the parameter adjustment function is flexible and useful for implementing the desired norms, it is used to derive the general zero attraction PNMCC algorithm. The proposed parameter adjustment function is integrated into the PNMCC to continue exploiting the sparseness for system identification. The GZA-PNMCC algorithm aims to solve the following problem:
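A quick numerical check of Equation (10) illustrates the two limiting behaviors just described; the function name is our own, and the printed values are approximate.

```python
import numpy as np

def S_beta(w, beta):
    """Parameter adjustment function of Equation (10), applied elementwise."""
    return (1.0 + 1.0 / beta) * (1.0 - np.exp(-beta * np.abs(w)))

w = np.array([0.0, 0.5, -1.0])
# Large beta: approximately 1 for every nonzero entry (l0-norm-like).
print(S_beta(w, beta=100.0))
# Small beta: approximately |w| (l1-norm-like).
print(S_beta(w, beta=0.01))
```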
    \min \left(\hat{w}(n+1) - \hat{w}(n)\right)^T G^{-1}(n) \left(\hat{w}(n+1) - \hat{w}(n)\right) + \gamma_{GZA} G^{-1}(n) S_\beta(\hat{w}(n+1)), \quad \text{subject to } \hat{e}(n) = \left[1 - \xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)\right] e(n).  (11)
    In Equation (11), G^{-1}(n) represents the inverse of the gain assignment matrix G(n), and \gamma_{GZA} denotes a small positive regularization parameter that trades off convergence against sparseness exploitation. It can be seen that the GZA-PNMCC algorithm utilizes the term \gamma_{GZA} G^{-1}(n) S_\beta(\hat{w}(n+1)) to create the expected zero attraction, which makes it different from the PNMCC algorithm. Additionally, G^{-1}(n) is integrated into Equation (11).
    Then, we employ the LMM to find the solution of Equation (11). As a result, we write the proposed GZA-PNMCC's cost function as:
    J_{GZA}(n+1) = \left(\hat{w}(n+1) - \hat{w}(n)\right)^T G^{-1}(n) \left(\hat{w}(n+1) - \hat{w}(n)\right) + \gamma_{GZA} G^{-1}(n) S_\beta(\hat{w}(n+1)) + \lambda \left\{\hat{e}(n) - \left[1 - \xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)\right] e(n)\right\},  (12)
    where \lambda represents the Lagrange multiplier. The gradients of J_{GZA}(n+1) with respect to \hat{w}(n+1) and \lambda are set to zero:
    \frac{\partial J_{GZA}(n+1)}{\partial \hat{w}(n+1)} = 0 \quad \text{and} \quad \frac{\partial J_{GZA}(n+1)}{\partial \lambda} = 0,  (13)
    and:
    \hat{w}(n+1) = \hat{w}(n) + \lambda G(n) x(n) - \gamma_{GZA} S'_\beta(\hat{w}(n+1)),  (14)
    where S'_\beta(\hat{w}(n+1)) is the derivative of the parameter adjustment function, whose i-th element is:
    s'_{i,\beta}(\hat{w}_i(n+1)) = (\beta + 1) e^{-\beta |\hat{w}_i(n+1)|} \mathrm{sgn}(\hat{w}_i(n+1)),  (15)
    and its vector form is:
    S'_\beta(\hat{w}(n+1)) = (\beta + 1) e^{-\beta |\hat{w}(n+1)|} \mathrm{sgn}(\hat{w}(n+1)).  (16)
    By left-multiplying both sides of Equation (14) by x^T(n), we obtain:
    x^T(n) \hat{w}(n+1) = x^T(n) \hat{w}(n) + \lambda x^T(n) G(n) x(n) - \gamma_{GZA} x^T(n) S'_\beta(\hat{w}(n+1)).  (17)
    From (13), we can get:
    \hat{e}(n) = \left[1 - \xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right)\right] e(n).  (18)
    Thus, the Lagrange multiplier \lambda is:
    \lambda = \frac{\xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) e(n) + \gamma_{GZA} x^T(n) S'_\beta(\hat{w}(n+1))}{x^T(n) G(n) x(n)}.  (19)
    By substituting (19) into (14), we can get:
    \hat{w}(n+1) = \hat{w}(n) + \frac{\xi \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) e(n) + \gamma_{GZA} x^T(n) S'_\beta(\hat{w}(n+1))}{x^T(n) G(n) x(n)} G(n) x(n) - \gamma_{GZA} S'_\beta(\hat{w}(n))
    = \hat{w}(n) + \xi e(n) \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) \frac{G(n) x(n)}{x^T(n) G(n) x(n)} - \gamma_{GZA} \left[I - \frac{G(n) x(n) x^T(n)}{x^T(n) G(n) x(n)}\right] S'_\beta(\hat{w}(n))
    = \hat{w}(n) + \xi e(n) \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) \frac{G(n) x(n)}{x^T(n) G(n) x(n)} - \gamma_{GZA} \left[I - \frac{G(n) x(n) x^T(n)}{x^T(n) G(n) x(n)}\right] (\beta + 1) e^{-\beta |\hat{w}(n)|} \mathrm{sgn}(\hat{w}(n)),  (20)
    where the approximation \hat{w}(n+1) \approx \hat{w}(n) is used in the zero attraction terms.
    From the updating Equation (20), we find that the elements in G(n) x(n) x^T(n) \left[x^T(n) G(n) x(n)\right]^{-1} are smaller than one; therefore, they can be neglected [29]. Thereby, we rewrite the updating equation of the developed GZA-PNMCC algorithm as:
    \hat{w}(n+1) = \hat{w}(n) + \xi e(n) \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) \frac{G(n) x(n)}{x^T(n) G(n) x(n)} - \gamma_{GZA} (\beta + 1) e^{-\beta |\hat{w}(n)|} \mathrm{sgn}(\hat{w}(n)).  (21)
    To better control the convergence, a step size \mu is introduced into the updating Equation (21), similarly to the PNMCC algorithm. Additionally, we also employ a small positive constant to avoid division by zero. As a result, the update equation of the developed GZA-PNMCC algorithm is modified to:
    \hat{w}(n+1) = \hat{w}(n) + \chi_1 \exp\left(-\frac{e^2(n)}{2\sigma^2}\right) \frac{G(n) e(n) x(n)}{x^T(n) G(n) x(n) + \varepsilon_{GZA}} - \rho_{GZA} (\beta + 1) e^{-\beta |\hat{w}(n)|} \mathrm{sgn}(\hat{w}(n)),  (22)
    where \chi_1 = \xi \mu, and \rho_{GZA} = \mu \gamma_{GZA} is a regularization parameter that controls the strength of the zero attraction. The zero attraction term \rho_{GZA} (\beta + 1) e^{-\beta |\hat{w}(n)|} \mathrm{sgn}(\hat{w}(n)) in Equation (22) is referred to as the expected zero attractor. In the developed GZA-PNMCC algorithm, a proper \beta can be chosen to produce different penalties. The GZA zero attraction exerts a strong attraction on the zero or near-zero coefficients, while the gain assignment matrix assigns larger step sizes to the dominant coefficients. In effect, our developed GZA-PNMCC algorithm provides fast convergence for the large coefficients through the gain matrix scheme, while it speeds up the convergence of the small coefficients through the proposed GZA zero attraction.
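Putting the gain rule of Equations (7)-(9) together with the update of Equation (22), one GZA-PNMCC iteration can be sketched as below. This is a sketch under our own naming, with illustrative (untuned) default parameter values, not the authors' implementation.

```python
import numpy as np

def gza_pnmcc_update(w_hat, x, d, chi1=0.3, sigma=1.0, beta=20.0,
                     rho_gza=9e-5, eps_gza=0.01, rho_p=0.01):
    """One GZA-PNMCC iteration following Equation (22)."""
    N = len(w_hat)
    e = d - x @ w_hat                              # error e(n)
    # PNLMS-type gains, Equations (7)-(9)
    mags = np.abs(w_hat)
    floor = (5.0 / N) * max(rho_p, mags.max())
    kappa = np.maximum(floor, mags)
    g = kappa / kappa.sum()
    # proportionate normalized correntropy step
    kernel = np.exp(-e**2 / (2.0 * sigma**2))
    step = chi1 * kernel * e * (g * x) / (x @ (g * x) + eps_gza)
    # GZA zero attractor: pulls small coefficients toward zero,
    # decays exponentially for large |w_i|
    attractor = rho_gza * (beta + 1.0) * np.exp(-beta * mags) * np.sign(w_hat)
    return w_hat + step - attractor
```

Note that the attractor is essentially inactive on a coefficient of magnitude 1 (e^{-beta} is negligible for beta = 20) while it steadily drags near-zero coefficients back to zero, which is the two-stage behavior described above.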

    4. Behavior of the Proposed GZA-PNMCC Algorithm

    We construct two examples to discuss the performance of the developed GZA-PNMCC algorithm within the framework of sparse system identification. We compare its performance with the MCC [39,40], NMCC [40], ZA-MCC [41], RZA-MCC [41] and PNLMS [11] algorithms. We use the mean square deviation (MSD) to investigate the estimation behavior of the developed GZA-PNMCC algorithm. The definition of the MSD is:
    \mathrm{MSD}(\hat{w}(n)) = E\left[\|w_o(n) - \hat{w}(n)\|^2\right].  (23)
    We find that the parameter \chi_1 has an important effect on the estimation behavior of the GZA-PNMCC algorithm. Therefore, we create an experiment to illustrate the performance of the GZA-PNMCC in the presence of impulsive noise. We use (1 - \theta) N(\iota_1, \nu_1^2) + \theta N(\iota_2, \nu_2^2) with (\iota_1, \nu_1^2, \iota_2, \nu_2^2, \theta) = (0, 0.01, 0, 20, 0.05) to model the desired impulsive noise, where N(\iota_i, \nu_i^2) (i = 1, 2) are Gaussian distributions with means \iota_i and variances \nu_i^2 [41], and \theta denotes a mixture parameter that controls the mixing of the two noises. In this paper, we assume that x(n) is independent of v(n). Furthermore, we use a sparse system whose length is N = 16. There is only one dominant coefficient, which is randomly placed within the impulse response of the system. The parameters \vartheta = 0.01 and \sigma = 1000 are used to investigate the effects of \chi_1, and the simulation results are shown in Figure 2. It is noted from Figure 2 that the developed GZA-PNMCC algorithm converges quickly when \chi_1 is large. For smaller \chi_1, the GZA-PNMCC algorithm converges more slowly, but achieves a lower estimation misalignment.
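The mixed-Gaussian impulsive noise model above can be sampled as follows. This is a sketch: the function name and the Bernoulli-mask construction are our own, with the stated (0, 0.01, 0, 20, 0.05) values as defaults.

```python
import numpy as np

def mixed_gaussian_noise(n_samples, iota1=0.0, var1=0.01,
                         iota2=0.0, var2=20.0, theta=0.05, rng=None):
    """Samples of (1-theta)*N(iota1, var1) + theta*N(iota2, var2):
    each sample is drawn from the high-variance (impulsive) component
    with probability theta, otherwise from the background component."""
    rng = np.random.default_rng() if rng is None else rng
    impulsive = rng.random(n_samples) < theta       # Bernoulli(theta) mask
    v = rng.normal(iota1, np.sqrt(var1), n_samples)
    v[impulsive] = rng.normal(iota2, np.sqrt(var2), impulsive.sum())
    return v
```

With these defaults, roughly 5% of the samples are large impulses, and the overall variance is (1 - theta)*0.01 + theta*20, which is about 1.0.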
    Furthermore, \rho_{GZA} plays an important role in the GZA-PNMCC algorithm since it controls the zero-attraction strength. We discuss the effects of \rho_{GZA} for the estimation of a sparse system, which is the same as that in the \chi_1 investigation. The effect of \rho_{GZA} is given in Figure 3. It can be seen that \rho_{GZA} = 9\times10^{-5} provides the fastest convergence rate and the best system estimation behavior. Generally speaking, the estimation misalignment becomes smaller as \rho_{GZA} decreases from 9\times10^{-4} to 9\times10^{-5}. If we continue to reduce \rho_{GZA}, the estimation misalignment changes in the opposite direction, i.e., it becomes larger as \rho_{GZA} decreases from 5\times10^{-5} to 5\times10^{-7}. Therefore, we should carefully select \rho_{GZA} and \chi_1 in order to balance the convergence rate and the estimation misalignment.
    Based on the key parameter investigations, we create an experiment to discuss the convergence characteristics of the GZA-PNMCC algorithm and compare its convergence with the PNLMS, PNMCC, NMCC, MCC, ZA-MCC and RZA-MCC algorithms. To compare the convergence at the same estimation misalignment level, we set the simulation parameters to \chi_{MCC} = 0.0052, \chi_{NMCC} = 0.085, \mu_{ZA} = 0.01, \mu_{RZA} = 0.015, \rho_{ZA} = 3\times10^{-5}, \rho_{RZA} = 7\times10^{-5}, \mu_{PNLMS} = 0.072, \chi = 0.088, \chi_1 = 0.35 and \rho_{GZA} = 8\times10^{-5}, where \chi_{MCC}, \chi_{NMCC}, \mu_{ZA}, \mu_{RZA} and \mu_{PNLMS} are the step sizes for the MCC, NMCC, ZA-MCC, RZA-MCC and PNLMS algorithms, and \rho_{ZA} and \rho_{RZA} are the zero attraction parameters for the ZA-MCC and RZA-MCC algorithms. The convergence of the GZA-PNMCC algorithm is described in Figure 4. Compared with the previously presented sparse adaptive filtering ZA-MCC, RZA-MCC, PNMCC and PNLMS algorithms, our GZA-PNMCC algorithm achieves the fastest convergence rate because it integrates a flexible GZA zero attractor that accelerates the convergence of the zero or near-zero coefficients.
    Then, an experiment is set up to investigate the system estimation performance of the GZA-PNMCC algorithm in the context of sparse system identification for different sparsity levels K, where K denotes the number of dominant coefficients in the impulse response of the unknown sparse system. The IR of the unknown sparse system has a length of N = 16. To achieve the same convergence at the early stage, the simulation parameters are set to \chi_{MCC} = 0.03, \chi_{NMCC} = 0.4, \mu_{ZA} = \mu_{RZA} = 0.03, \rho_{ZA} = 8\times10^{-5}, \rho_{RZA} = 2\times10^{-4}, \mu_{PNLMS} = 0.27, \chi = 0.24, \chi_1 = 0.3 and \rho_{GZA} = 9\times10^{-5}. The GZA-PNMCC's estimation behavior is given in Figure 5, and its performance is compared with those of the MCC, NMCC, ZA-MCC, RZA-MCC and PNLMS algorithms. In this experiment, we choose K = 1, 2, 4, 6 to discuss the system estimation behavior of the GZA-PNMCC. From Figure 5a, we can see that the GZA-PNMCC algorithm has the fastest convergence and achieves the smallest estimation misalignment. The GZA-PNMCC algorithm provides much more gain than the PNMCC algorithm. Compared with the previously reported ZA-MCC and RZA-MCC, our GZA-PNMCC algorithm has clear advantages with respect to both convergence rate and steady-state MSD. As K increases from one to six, the estimation misalignment becomes larger because of the reduced sparsity. However, the developed GZA-PNMCC still outperforms the MCC, NMCC, ZA-MCC, RZA-MCC and PNLMS algorithms in terms of the MSD.
    Finally, we set up an experiment to study the tracking behavior of our GZA-PNMCC algorithm for estimating a long echo channel with two different sparsity levels and a length of 256. The sparsity measure of the echo channel is \zeta_{12}(w_o) = \frac{N}{N - \sqrt{N}} \left(1 - \frac{\|w_o\|_1}{\sqrt{N} \|w_o\|_2}\right) [44,45,46,47,48]. A typical echo channel is described in Figure 6. In this experiment, the sparsity is \zeta_{12}(w_o) = 0.8222 for the first 8000 iterations, while it is changed to \zeta_{12}(w_o) = 0.7362 for the last 8000 iterations. Here, \zeta_{12}(w_o) can take different values to obtain different channels; for \zeta_{12}(w_o) = 0.8222, the channel is sparser than for \zeta_{12}(w_o) = 0.7362. The parameters are \chi_{MCC} = 0.0055, \chi_{NMCC} = 1.3, \mu_{ZA} = \mu_{RZA} = 0.0055, \rho_{ZA} = 4\times10^{-6}, \rho_{RZA} = 1\times10^{-5}, \mu_{PNLMS} = 1, \chi = 0.9, \chi_1 = 0.8 and \rho_{GZA} = 1\times10^{-6}. The behavior of the GZA-PNMCC algorithm for estimating an echo channel is shown in Figure 7. We note that our GZA-PNMCC algorithm provides the fastest convergence rate and the smallest MSD for \zeta_{12}(w_o) = 0.8222. Furthermore, it can be seen that there are two convergence stages, which are caused by the proportionate and the zero attraction schemes, respectively. Even when \zeta_{12}(w_o) = 0.7362, the GZA-PNMCC algorithm still outperforms the PNMCC, PNLMS, MCC, NMCC, ZA-MCC and RZA-MCC algorithms. The GZA-PNMCC algorithm is less affected by the sparseness of the unknown system w_o, which renders it effective and useful for sparse system identification. In addition, the proposed GZA-PNMCC algorithm has a moderate computational complexity of (4N+5) additions, (7N+4) multiplications, N divisions and (N+1) exponential evaluations per iteration.
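The sparsity measure \zeta_{12} above can be evaluated directly; the function name is our own, while the formula follows the expression given in the text. It equals 1 for a maximally sparse vector (a single nonzero tap) and 0 for a vector whose entries all have the same magnitude.

```python
import numpy as np

def sparsity_measure(w):
    """Sparsity measure zeta_12(w) used in Section 4:
    zeta = N/(N - sqrt(N)) * (1 - ||w||_1 / (sqrt(N) * ||w||_2))."""
    N = len(w)
    l1 = np.abs(w).sum()
    l2 = np.sqrt((w**2).sum())
    return N / (N - np.sqrt(N)) * (1.0 - l1 / (np.sqrt(N) * l2))
```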

    5. Conclusions

    A GZA-PNMCC algorithm with zero attraction has been developed for sparse system identification. The derivation and the analysis of the GZA-PNMCC algorithm are carried out via the LMM. The GZA-PNMCC algorithm uses a zero attraction scheme to improve the convergence after the initial stage by quickly forcing the zero or near-zero system coefficients toward zero. The simulation results demonstrated that the developed GZA-PNMCC algorithm is effective and useful for sparse system identification because it provides performance superior to that of the PNLMS and PNMCC algorithms. Thereby, our developed GZA-PNMCC is suitable for sparse signal processing in practical engineering applications such as sparse channel estimation and echo cancellation.

    Acknowledgments

    This work was partially supported by the National Key Research and Development Program of the China-Government Corporation Special Program (2016YFE0111100), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province and MOHRSS of China, and the PhD Student Research and Innovation Fund of the Fundamental Research Funds for the Central Universities (HEUGIP201707).

    Author Contributions

    Yingsong Li wrote the draft of the paper and wrote the code. Yanyan Wang did the simulations of this paper. Felix Albu helped modify the paper and checked the grammar. Jingshan Jiang gave the analysis of the paper. All the authors wrote this paper together, and they have read and approved the final manuscript.

    Conflicts of Interest

    The authors declare no conflict of interest.

    References

    1. Khong, A.W.H.; Naylor, P.A. Efficient use of sparse adaptive filters. In Proceedings of the Fortieth Asilomar Conference on Signals, Systems and Computers (ACSSC '06), Pacific Grove, CA, USA, 29 October–1 November 2006; pp. 1375–1379.
    2. Paleologu, C.; Benesty, J.; Ciochina, S. Sparse Adaptive Filters for Echo Cancellation; Morgan & Claypool: San Rafael, CA, USA, 2010.
    3. Rodger, J.A. Toward reducing failure risk in an integrated vehicle health maintenance system: A fuzzy multi-sensor data fusion Kalman filter approach for IVHMS. Expert Syst. Appl. 2012, 139, 9821–9836.
    4. Murakami, Y.; Yamagishi, M.; Yukawa, M.; Yamada, I. A sparse adaptive filtering using time-varying soft-thresholding techniques. In Proceedings of the 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Dallas, TX, USA, 14–19 March 2010; pp. 3734–3737.
    5. Li, Y.; Hamamura, M. Zero-attracting variable-step-size least mean square algorithms for adaptive sparse channel estimation. Int. J. Adapt. Control Signal Process. 2015, 29, 1189–1206.
    6. Chen, Y.; Gu, Y.; Hero, A.O., III. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128.
    7. Wang, Y.; Li, Y.; Yang, R. Sparse adaptive channel estimation based on mixed controlled l2 and lp-norm error criterion. J. Frankl. Inst. 2017, 354, 7215–7239.
    8. Li, Y.; Wang, Y.; Jiang, T. Sparse least mean mixed-norm adaptive filtering algorithms for sparse channel estimation applications. Int. J. Commun. Syst. 2017, 30, 1–16.
    9. Wang, Y.; Li, Y. Sparse multi-path channel estimation using norm combination constrained set-membership NLMS algorithms. Wirel. Commun. Mob. Comput. 2017, 2017, 8140702.
    10. Li, Y.; Jin, Z.; Wang, Y. Adaptive channel estimation based on an improved norm constrained set-membership normalized least mean square algorithm. Wirel. Commun. Mob. Comput. 2017, 2017, 8056126.
    11. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518.
    12. Naylor, P.A.; Cui, J.; Brookes, M. Adaptive algorithms for sparse echo cancellation. Signal Process. 2009, 86, 1182–1192.
    13. Cotter, S.F.; Rao, B.D. Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 2002, 50, 374–377.
    14. Gui, G.; Peng, W.; Adachi, F. Improved adaptive sparse channel estimation based on the least mean square algorithm. In Proceedings of the 2013 IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 7–10 April 2013; pp. 3105–3109.
    15. Arenas, J.; Figueiras-Vidal, A.R. Adaptive combination of proportionate filters for sparse echo cancellation. IEEE Trans. Audio Speech Lang. Process. 2009, 17, 1087–1098.
    16. Nekuii, M.; Atarodi, M. A fast converging algorithm for network echo cancellation. IEEE Signal Process. Lett. 2004, 11, 427–430.
    17. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251.
    18. Li, Y.; Zhang, C.; Wang, S. Low-complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1611–1624.
    19. Stojanovic, M.; Freitag, L.; Johnson, M. Channel-estimation-based adaptive equalization of underwater acoustic signals. In Proceedings of the OCEANS '99 MTS/IEEE, Riding the Crest into the 21st Century, Seattle, WA, USA, 13–16 September 1999; pp. 985–990.
    20. Pelekanakis, K.; Chitre, M. Comparison of sparse adaptive filters for underwater acoustic channel equalization/estimation. In Proceedings of the 2010 IEEE International Conference on Communication Systems (ICCS), Singapore, 17–19 November 2010; pp. 395–399.
    21. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU Int. J. Electron. Commun. 2016, 70, 895–902.
    22. Gui, G.; Mehbodniya, A.; Adachi, F. Least mean square/fourth algorithm for adaptive sparse channel estimation. In Proceedings of the 2013 IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013; pp. 296–300.
    23. Vuokko, L.; Kolmonen, V.M.; Salo, J.; Vainikainen, P. Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propagat. 2007, 55, 3361–3365.
    24. Radecki, J.; Zilic, Z.; Radecka, K. Echo cancellation in IP networks. In Proceedings of the 45th Midwest Symposium on Circuits and Systems, Tulsa, OK, USA, 4–7 August 2002; pp. 219–222.
    25. Cui, J.; Naylor, P.A.; Brown, D.T. An improved IPNLMS algorithm for echo cancellation in packet-switched networks. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), Montreal, QC, Canada, 17–21 May 2004.
    26. Widrow, B.; Stearns, S.D. Adaptive Signal Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1985.
    27. Wang, Y.; Li, Y. Norm penalized joint-optimization NLMS algorithms for broadband sparse adaptive channel estimation. Symmetry 2017, 9, 133.
    28. Haykin, S. Adaptive Filter Theory; Prentice Hall: Upper Saddle River, NJ, USA, 1991.
    29. Li, Y.; Hamamura, M. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation. Sci. World J. 2014, 2014, 572969.
    30. Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, FL, USA, 13–17 May 2002; Volume II, pp. 1881–1884.
    31. Gay, S.L. An efficient, fast converging adaptive filter for network echo cancellation. In Proceedings of the 32nd Asilomar Conference on Signals and System for Computing, Pacific Grove, CA, USA, 1–4 November 1998; Volume 1, pp. 394–398.
    32. Liu, W.F.; Pokharel, P.P.; Principe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 2007, 55, 5286–5298.
    33. Li, Y.; Wang, Y. Sparse SM-NLMS algorithm based on correntropy criterion. Electron. Lett. 2016, 52, 1461–1463.
    34. Chen, B.; Principe, J.C. Maximum correntropy estimation is a smoothed MAP estimation. IEEE Signal Process. Lett. 2012, 19, 491–494.
    35. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884.
    36. Zhao, S.; Chen, B.; Principe, J.C. Kernel adaptive filtering with maximum correntropy criterion. In Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August 2011; pp. 2012–2017.
    37. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387.
    38. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727.
    39. Singh, A.; Principe, J.C. Using correntropy as a cost function in linear adaptive filters. In Proceedings of the International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 2950–2955.
    40. Hadded, D.B.; Petraglia, M.R.; Petraglia, A. A unified approach for sparsity-aware and maximum correntropy adaptive filters. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 170–174.
    41. Ma, W.; Qu, H.; Gui, G.; Xu, L.; Zhao, J.; Chen, B. Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments. J. Frankl. Inst. 2015, 352, 2708–2727.
    42. Wang, Y.; Li, Y.; Albu, F.; Yang, R. Group-constrained maximum correntropy criterion algorithms for estimating sparse mix-noised channels. Entropy 2017, 19, 432.
    43. Wu, Z.; Peng, S.; Chen, B.; Zhao, H.; Principe, J.C. Proportionate minimum error entropy algorithm for sparse system identification. Entropy 2015, 17, 5995–6006.
    44. Salman, M.S. Sparse leaky-LMS algorithm for system identification and its convergence analysis. Int. J. Adapt. Control Signal Process. 2014, 28, 1065–1072.
    45. Li, Y.; Wang, Y.; Yang, R.; Albu, F. A soft parameter function penalized normalized maximum correntropy criterion algorithm for sparse system identification.Entropy2017,19, 45. [Google Scholar] [CrossRef]
    46. Wang, Y.; Li, Y.; Yang, R. A sparsity-aware proportionate normalized maximum correntropy criterion algorithm for sparse system identification in non-gaussian environment. In Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos Island, Greece, 28 August–2 September 2017; pp. 246–250. [Google Scholar]
    47. Hoyer, P.O. Non-negative matrix factorization with sparseness constraints.J. Mach. Learn. Res.2001,49, 1208–1215. [Google Scholar]
    48. Huang, Y.; Benesty, J.; Chen, J.Acoustic MIMO Signal Processing; Springer: Berlin, Germany, 2006. [Google Scholar]
Figure 1. Behavior of the parameter adjustment function with various β.
Figure 2. Effects of χ1 on the estimation behavior.
Figure 3. Effects of ρGZA on the estimation behavior.
Figure 4. Convergence of the proposed general zero attraction proportionate normalized maximum correntropy criterion (GZA-PNMCC) algorithm.
Figure 5. Estimation behaviors of the GZA-PNMCC algorithm with different sparsity level K. (a) K = 1; (b) K = 2; (c) K = 4; (d) K = 6.
Figure 6. A typical echo channel with a length of 256.
Figure 7. Tracking behavior of our GZA-PNMCC algorithm.

    © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Share and Cite

Li, Y.; Wang, Y.; Albu, F.; Jiang, J. A General Zero Attraction Proportionate Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification. Symmetry 2017, 9, 229. https://doi.org/10.3390/sym9100229
