Least Mean Square

The experts below are selected from a list of 41,091 experts worldwide, ranked by the ideXlab platform.

R Unbehauen - One of the best experts on this subject based on the ideXlab platform.

  • Bias-remedy Least Mean Square equation error algorithm for IIR parameter recursive estimation
    IEEE Transactions on Signal Processing, 1992
    Co-Authors: J N Lin, R Unbehauen
    Abstract:

    In the area of infinite impulse response (IIR) system identification and adaptive filtering, the equation error algorithms used for recursive estimation of the plant parameters are well known for their good convergence properties. However, these algorithms give biased parameter estimates in the presence of measurement noise. A new algorithm is proposed on the basis of the least mean square equation error (LMSEE) algorithm, which manages to remedy the bias while retaining parameter stability. The so-called bias-remedy least mean square equation error (BRLE) algorithm has a simple form. The compatibility of the concept of bias remedy with the stability requirement for the convergence procedure is supported by a practically meaningful theorem. The behavior of the BRLE has been examined extensively in a series of computer simulations.
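
    As background, here is a minimal Python sketch of the baseline LMSEE recursion that BRLE builds on. The function name, filter orders, and step size are illustrative assumptions, and the bias-remedy correction itself is only marked by a comment rather than reproduced from the paper.

      import numpy as np

      def lmsee_identify(x, d, N=2, M=2, mu=0.01):
          # Baseline LMS equation-error (LMSEE) recursion for IIR
          # identification of d(n) ~ sum_k a_k d(n-k) + sum_k b_k x(n-k).
          # The regressor contains the noisy output d, which is what
          # biases the estimates under measurement noise.
          theta = np.zeros(N + M + 1)                   # [a_1..a_N, b_0..b_M]
          for n in range(max(N, M), len(x)):
              phi = np.concatenate((d[n-N:n][::-1],     # d(n-1)..d(n-N)
                                    x[n-M:n+1][::-1]))  # x(n)..x(n-M)
              e = d[n] - theta @ phi                    # equation error
              # BRLE modifies e here with its bias-remedy term (see paper)
              theta += mu * e * phi
          return theta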

  • Modification of the Least-Mean-Square equation error algorithm for IIR system identification and adaptive filtering
    IEEE International Symposium on Circuits and Systems
    Co-Authors: J N Lin, R Unbehauen
    Abstract:

    An algorithm is proposed for infinite impulse response (IIR) system identification and adaptive signal filtering which is derived from the least mean square equation error algorithm, but manages to remedy its bias without a loss of stability. The so-called bias-remedy least mean square equation error algorithm has a simple form, and it shows satisfactory behavior in a series of computer simulations. The algorithm's stability is discussed.

J N Lin - One of the best experts on this subject based on the ideXlab platform.

  • Bias-remedy Least Mean Square equation error algorithm for IIR parameter recursive estimation
    IEEE Transactions on Signal Processing, 1992
    Co-Authors: J N Lin, R Unbehauen
    Abstract:

    In the area of infinite impulse response (IIR) system identification and adaptive filtering, the equation error algorithms used for recursive estimation of the plant parameters are well known for their good convergence properties. However, these algorithms give biased parameter estimates in the presence of measurement noise. A new algorithm is proposed on the basis of the least mean square equation error (LMSEE) algorithm, which manages to remedy the bias while retaining parameter stability. The so-called bias-remedy least mean square equation error (BRLE) algorithm has a simple form. The compatibility of the concept of bias remedy with the stability requirement for the convergence procedure is supported by a practically meaningful theorem. The behavior of the BRLE has been examined extensively in a series of computer simulations.

  • Modification of the Least-Mean-Square equation error algorithm for IIR system identification and adaptive filtering
    IEEE International Symposium on Circuits and Systems
    Co-Authors: J N Lin, R Unbehauen
    Abstract:

    An algorithm is proposed for infinite impulse response (IIR) system identification and adaptive signal filtering which is derived from the least mean square equation error algorithm, but manages to remedy its bias without a loss of stability. The so-called bias-remedy least mean square equation error algorithm has a simple form, and it shows satisfactory behavior in a series of computer simulations. The algorithm's stability is discussed.

Jose C. M. Bermudez - One of the best experts on this subject based on the ideXlab platform.

  • Reweighted nonnegative Least Mean Square algorithm
    Signal Processing, 2016
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez
    Abstract:

    Statistical inference subject to nonnegativity constraints arises frequently in learning problems. The nonnegative least-mean-square (NNLMS) algorithm was derived to address such problems in an online way. This algorithm builds on a fixed-point iteration strategy driven by the Karush-Kuhn-Tucker conditions. It was shown to provide low-variance estimates, but it suffers from unbalanced convergence rates of these estimates. In this paper, we address this problem by introducing a variant of the NNLMS algorithm. We provide a theoretical analysis of its behavior in terms of transient learning curve, steady-state and tracking performance. We also introduce an extension of the algorithm for online sparse system identification. Monte Carlo simulations are conducted to illustrate the performance of the algorithm and to validate the theoretical results. Highlights: a variant of the NNLMS algorithm with balanced weight convergence rates; an accurate performance analysis for a general nonstationarity model; sparse system identification via the derived algorithm.
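
    As context for the reweighted variant, the following is a minimal Python sketch of the baseline NNLMS update from the 2011 papers listed below; the function name, initialization, and step size are illustrative assumptions, not the authors' code.

      import numpy as np

      def nnlms(X, d, mu=0.05):
          # Baseline NNLMS sketch: the LMS correction for each tap is
          # scaled by the current weight w_i (a fixed-point form of the
          # KKT conditions), which keeps the weights nonnegative for
          # suitably small step sizes. X has shape (n_samples, n_taps).
          w = np.full(X.shape[1], 0.1)      # nonnegative initialization
          for n in range(X.shape[0]):
              e = d[n] - w @ X[n]           # a priori output error
              w = w + mu * e * w * X[n]     # componentwise scaled update
          return w

    The reweighted variant of this paper rebalances the per-tap convergence rates that this multiplicative scaling leaves unequal.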

  • Non-negative Least-Mean-Square algorithm
    2011
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez, Paul Honeine
    Abstract:

    Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of the systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to be estimated. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square (NNLMS) algorithm based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and its consistency with the analysis.

  • Nonnegative Least-Mean-Square Algorithm
    IEEE Transactions on Signal Processing, 2011
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez, Paul Honeine
    Abstract:

    Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of the systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to be estimated. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square (NNLMS) algorithm based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and its consistency with the analysis.

Jie Chen - One of the best experts on this subject based on the ideXlab platform.

  • Reweighted nonnegative Least Mean Square algorithm
    Signal Processing, 2016
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez
    Abstract:

    Statistical inference subject to nonnegativity constraints arises frequently in learning problems. The nonnegative least-mean-square (NNLMS) algorithm was derived to address such problems in an online way. This algorithm builds on a fixed-point iteration strategy driven by the Karush-Kuhn-Tucker conditions. It was shown to provide low-variance estimates, but it suffers from unbalanced convergence rates of these estimates. In this paper, we address this problem by introducing a variant of the NNLMS algorithm. We provide a theoretical analysis of its behavior in terms of transient learning curve, steady-state and tracking performance. We also introduce an extension of the algorithm for online sparse system identification. Monte Carlo simulations are conducted to illustrate the performance of the algorithm and to validate the theoretical results. Highlights: a variant of the NNLMS algorithm with balanced weight convergence rates; an accurate performance analysis for a general nonstationarity model; sparse system identification via the derived algorithm.

  • Non-negative Least-Mean-Square algorithm
    2011
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez, Paul Honeine
    Abstract:

    Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of the systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to be estimated. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square (NNLMS) algorithm based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and its consistency with the analysis.

  • Nonnegative Least-Mean-Square Algorithm
    IEEE Transactions on Signal Processing, 2011
    Co-Authors: Jie Chen, Cedric Richard, Jose C. M. Bermudez, Paul Honeine
    Abstract:

    Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of the systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to be estimated. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square (NNLMS) algorithm based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and its consistency with the analysis.

Jose C Principe - One of the best experts on this subject based on the ideXlab platform.

  • IJCNN - Density-dependent quantized kernel Least Mean Square
    2016 International Joint Conference on Neural Networks (IJCNN), 2016
    Co-Authors: Lei Sun, Badong Chen, Nanning Zheng, Jianji Wang, Jose C Principe
    Abstract:

    The kernel least mean square algorithm is simple and effective, but it is hampered by its unbounded network growth. Many schemes have been proposed to reduce the network size, but few take the distribution of the input data into account. The input data distribution is generally important for both model sparsification and generalization performance. In this paper, we introduce an online density-dependent vector quantization scheme, which adopts a shrinkage threshold to adapt its output to the input data distribution. This scheme is then incorporated into the quantized kernel least mean square (QKLMS) algorithm to develop a density-dependent QKLMS (DQKLMS). Experiments on static function estimation and short-term chaotic time series prediction are presented to demonstrate the desirable performance of DQKLMS.
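
    The quantization step that DQKLMS adapts can be summarized in a short Python sketch of QKLMS with a Gaussian kernel; the names and parameter values are illustrative, and the density-dependent shrinkage threshold of DQKLMS is only indicated by a comment.

      import numpy as np

      def qklms(X, d, eta=0.2, sigma=1.0, eps_q=0.5):
          # QKLMS sketch: if a new input lies within eps_q of an existing
          # center, its update is merged into that center's coefficient
          # instead of growing the dictionary. DQKLMS replaces the fixed
          # eps_q with a threshold adapted to the local input density.
          centers = [X[0]]
          alphas = [eta * d[0]]
          for n in range(1, len(X)):
              C = np.asarray(centers)
              k = np.exp(-np.sum((C - X[n])**2, axis=1) / (2 * sigma**2))
              e = d[n] - np.dot(alphas, k)        # prediction error
              dist = np.linalg.norm(C - X[n], axis=1)
              j = int(np.argmin(dist))
              if dist[j] <= eps_q:
                  alphas[j] += eta * e            # quantize onto center j
              else:
                  centers.append(X[n])            # grow the dictionary
                  alphas.append(eta * e)
          return centers, alphas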

  • IJCNN - On initial convergence behavior of the kernel Least Mean Square algorithm
    2015 International Joint Conference on Neural Networks (IJCNN), 2015
    Co-Authors: Badong Chen, Nanning Zheng, Ren Wang, Jose C Principe
    Abstract:

    The mean square convergence of the kernel least mean square (KLMS) algorithm was studied in a recent paper [B. Chen, S. Zhao, P. Zhu, J. C. Principe, Mean square convergence analysis of the kernel least mean square algorithm, Signal Processing, vol. 92, pp. 2624–2632, 2012]. In this paper, we continue this study and focus mainly on the initial convergence behavior. Two measures of convergence performance are considered, namely the weight error power (WEP) and the excess mean square error (EMSE). Analytical expressions for the initial decrease of the WEP and EMSE are derived, and several interesting facts about the initial convergence are presented. An illustrative example is given to support our observations.
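
    The EMSE learning curve studied here can be estimated from Monte Carlo simulation in the usual way; the sketch below assumes the measurement noise variance is known, so that EMSE(n) = E[e^2(n)] - sigma_v^2 with the expectation replaced by an average over independent runs.

      import numpy as np

      def emse_curve(errors, noise_var):
          # errors: shape (n_runs, n_steps), a priori errors e(n) from
          # independent runs. Since e(n) = e_a(n) + v(n) with v white,
          # E[e^2(n)] - sigma_v^2 estimates the EMSE at step n.
          return np.mean(np.asarray(errors)**2, axis=0) - noise_var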

  • Fixed-budget quantized kernel Least Mean Square algorithm
    Signal Processing, 2013
    Co-Authors: Songlin Zhao, Badong Chen, Pingping Zhu, Jose C Principe
    Abstract:

    This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. In order to deal with the growing support inherent in online kernel methods, the proposed algorithm utilizes a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea of the proposed methodology is to discard the center with the smallest influence on the whole system when a new sample is included in the dictionary. The significance measure can be updated recursively at each step, which makes it suitable for online operation. Furthermore, the proposed methodology does not need any a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy.
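
    A fixed-budget pruning step in the spirit of QKLMS-FB can be sketched as follows. The significance proxy used here (|alpha_j| weighted by the center's mean kernel activation on recent inputs) is a simplification for illustration; the paper's actual significance measure is updated recursively at each step.

      import numpy as np

      def prune_to_budget(centers, alphas, recent_inputs, budget, sigma=1.0):
          # Drop the least significant centers whenever the dictionary
          # exceeds the memory budget.
          if len(centers) <= budget:
              return centers, alphas
          C = np.asarray(centers)
          R = np.asarray(recent_inputs)
          act = np.exp(-((C[:, None, :] - R[None, :, :])**2).sum(-1)
                       / (2 * sigma**2)).mean(axis=1)
          sig = np.abs(alphas) * act              # proxy significance
          keep = np.sort(np.argsort(sig)[len(centers) - budget:])
          return [centers[i] for i in keep], [alphas[i] for i in keep]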

  • Mixture kernel Least Mean Square
    International Joint Conference on Neural Networks (IJCNN), 2013
    Co-Authors: Rosha Pokharel, Sohan Seth, Jose C Principe
    Abstract:

    Instead of using a single kernel, different approaches using multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting the appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). The need for an alternative is that, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit well in the KAF framework, as learning a hypothesis in a fixed RKHS is the core of KAF algorithms. Hence, we introduce an adaptive learning method to address the kernel selection problem for KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined so that the filter with the best kernel is an expert for each input regime. The competition among these experts is created by performance-based gating that chooses the appropriate expert locally. Therefore, the individual filter parameters as well as the weights for combining these filters are learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel, but also significantly improves prediction accuracy.
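
    The competitive combination can be illustrated with a simplified performance-based gate over parallel experts. The exponential weighting of smoothed squared errors below is an assumption standing in for the paper's exact gating rule; predictions from the individual KLMS filters are taken as given.

      import numpy as np

      def gated_mixture(predictions, d, beta=5.0, lam=0.9):
          # predictions: shape (n_steps, n_experts), one column per KLMS
          # expert run in parallel. Experts with smaller smoothed squared
          # error receive exponentially larger gate weights.
          n_steps, n_experts = predictions.shape
          loss = np.zeros(n_experts)
          out = np.zeros(n_steps)
          for n in range(n_steps):
              g = np.exp(-beta * loss)
              g /= g.sum()                  # normalized competitive gate
              out[n] = g @ predictions[n]
              loss = lam * loss + (1 - lam) * (d[n] - predictions[n])**2
          return out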

  • IJCNN - Mixture kernel Least Mean Square
    The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
    Co-Authors: Rosha Pokharel, Sohan Seth, Jose C Principe
    Abstract:

    Instead of using a single kernel, different approaches using multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting the appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). The need for an alternative is that, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit well in the KAF framework, as learning a hypothesis in a fixed RKHS is the core of KAF algorithms. Hence, we introduce an adaptive learning method to address the kernel selection problem for KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined so that the filter with the best kernel is an expert for each input regime. The competition among these experts is created by performance-based gating that chooses the appropriate expert locally. Therefore, the individual filter parameters as well as the weights for combining these filters are learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel, but also significantly improves prediction accuracy.