Norm Constraint

The experts below are selected from a list of 8,985 experts worldwide, ranked by the ideXlab platform.

Zhisong Wang - One of the best experts on this subject based on the ideXlab platform.

  • doubly constrained robust capon beamformer
    IEEE Transactions on Signal Processing, 2004
    Co-Authors: Petre Stoica, Zhisong Wang
    Abstract:

    The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we first provide a complete analysis of a norm-constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve robustness against array steering vector errors and noise. Our analysis of NCCB is thorough and sheds more light on the choice of the norm constraint than was previously available. We also provide a natural extension of the SCB, obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). NCCB and DCRCB can both be computed efficiently, at a cost comparable to that of the SCB. Performance comparisons of NCCB, DCRCB, and several other adaptive beamformers via a number of numerical examples are also presented.
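
As a rough illustration of how a norm constraint on the Capon weight vector translates into diagonal loading, here is a minimal NumPy sketch. It is not the authors' implementation: the loading level is passed in directly rather than solved from the norm bound, and the array geometry, function names and parameter values are purely illustrative.

```python
import numpy as np

def nccb_weights(R, a, loading=1e-2):
    """Norm-constrained Capon-style weights via diagonal loading.

    Minimising w^H R w subject to w^H a = 1 and a bound on ||w||^2 leads to a
    diagonally loaded SCB solution; here the loading level is supplied directly
    instead of being computed from the norm bound (as NCCB would do)."""
    M = len(a)
    Rl = R + loading * np.real(np.trace(R)) / M * np.eye(M)   # loaded covariance
    Rinv_a = np.linalg.solve(Rl, a)
    return Rinv_a / (a.conj() @ Rinv_a)                       # distortionless towards a

# toy example: 8-element ULA, one source at broadside plus white noise
M, N = 8, 200
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.0))           # nominal steering vector
rng = np.random.default_rng(0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
x = np.outer(a, s) + noise
R = x @ x.conj().T / N
w = nccb_weights(R, a)
print(abs(w.conj() @ a))   # ~1: the distortionless constraint is met
```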

  • doubly constrained robust capon beamformer
    Asilomar Conference on Signals, Systems and Computers, 2003
    Co-Authors: Petre Stoica, Zhisong Wang
    Abstract:

    The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we provide a natural extension of the SCB, obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). The DCRCB can be computed efficiently, at a cost comparable to that of the SCB. Performance comparisons of the DCRCB and our previously proposed robust Capon beamformer (RCB) are also presented via a number of numerical examples.

Petre Stoica - One of the best experts on this subject based on the ideXlab platform.

  • doubly constrained robust capon beamformer
    IEEE Transactions on Signal Processing, 2004
    Co-Authors: Petre Stoica, Zhisong Wang
    Abstract:

    The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we first provide a complete analysis of a norm-constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve robustness against array steering vector errors and noise. Our analysis of NCCB is thorough and sheds more light on the choice of the norm constraint than was previously available. We also provide a natural extension of the SCB, obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). NCCB and DCRCB can both be computed efficiently, at a cost comparable to that of the SCB. Performance comparisons of NCCB, DCRCB, and several other adaptive beamformers via a number of numerical examples are also presented.

  • doubly constrained robust capon beamformer
    Asilomar Conference on Signals, Systems and Computers, 2003
    Co-Authors: Petre Stoica, Zhisong Wang
    Abstract:

    The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we provide a natural extension of the SCB, obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). The DCRCB can be computed efficiently, at a cost comparable to that of the SCB. Performance comparisons of the DCRCB and our previously proposed robust Capon beamformer (RCB) are also presented via a number of numerical examples.

Haiquan Zhao - One of the best experts on this subject based on the ideXlab platform.

  • block-sparse non-uniform norm constraint normalised subband adaptive filter
    Iet Signal Processing, 2019
    Co-Authors: Wenyuan Wang, Haiquan Zhao
    Abstract:

    This study proposes a block-sparse non-uniform norm constraint normalised subband adaptive filter (BS-NNCNSAF) for the block-sparse system identification problem. The algorithm is obtained by minimising a novel cost function that uses a non-uniform mixed l2,p norm-like penalty as a constraint, and it achieves better performance than existing algorithms for block-sparse system identification. To further enhance performance, the shrinkage BS-NNCNSAF (SH-BS-NNCNSAF) algorithm is proposed; it is derived by using the a priori and a posteriori subband errors to obtain time-varying subband step sizes. Finally, simulations verify that the proposed algorithms improve filter performance for sparse system identification.
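
To make the penalty concrete, the sketch below computes a mixed l2,p norm over blocks of a weight vector and its (sub)gradient, which is the kind of block-wise zero-attraction term such an algorithm would add to its update. This is only an assumed, simplified reading of the cost term: the block length, the per-block exponents and the function names are illustrative, and the full subband update is not reproduced.

```python
import numpy as np

def l2p_mixed_norm(w, block_len, p):
    """Mixed l_{2,p} norm of a block-partitioned vector: sum_b ||w_b||_2 ** p_b."""
    blocks = w.reshape(-1, block_len)
    return np.sum(np.linalg.norm(blocks, axis=1) ** p)

def l2p_zero_attractor(w, block_len, p, eps=1e-8):
    """(Sub)gradient of the mixed norm, used as a block-wise zero attractor:
    d/dw_b ||w_b||^p = p * ||w_b||^(p-2) * w_b (regularised for near-zero blocks)."""
    blocks = w.reshape(-1, block_len)
    norms = np.linalg.norm(blocks, axis=1) + eps
    scale = p * norms ** (p - 2.0)
    return (blocks * scale[:, None]).ravel()

# illustrative use inside an (N)LMS/NSAF-style update: w <- w + mu*e*u - rho*attractor
w = np.array([0.9, 0.8, 0.0, 0.0, 0.01, 0.02, 0.0, 0.0])
p = np.array([0.5, 0.5, 0.5, 0.5])   # per-block exponents, non-uniform in general
print(l2p_mixed_norm(w, 2, p))
print(l2p_zero_attractor(w, 2, p))
```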

  • robust set-membership normalized subband adaptive filtering algorithms and their application to acoustic echo cancellation
    IEEE Transactions on Circuits and Systems I: Regular Papers, 2017
    Co-Authors: Zongsheng Zheng, Haiquan Zhao, Yi Yu, Lu Lu
    Abstract:

    This paper presents a family of robust set-membership normalized subband adaptive filtering (RSM-NSAF) algorithms for acoustic echo cancellation (AEC). By using a new robust set-membership error bound, the RSM-NSAF algorithm obtains improved robustness against impulsive noises and decreased steady-state misalignment relative to the conventional set-membership NSAF (SM-NSAF) algorithm. To exploit the sparsity of the impulse response, the l0-norm constraint robust set-membership NSAF (l0-RSM-NSAF), robust set-membership improved proportionate NSAF (RSM-IPNSAF), and l0-norm constraint robust set-membership improved proportionate NSAF (l0-RSM-IPNSAF) algorithms are derived by minimizing a differentiable cost function that uses the Riemannian distance between the updated and previous weight vectors as well as the l0 norm of the weighted updated weight vector. Simulations in an AEC application confirm the performance improvements of the proposed algorithms.
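
The set-membership idea itself is easy to illustrate: update only when the error magnitude exceeds a prescribed bound, with a data-dependent step size that pulls the a posteriori error back onto that bound. The sketch below shows this in a plain fullband NLMS form as a stand-in for the subband algorithms above; the robust error bound, proportionate weighting and l0 term of the paper are omitted, and all names and values are illustrative.

```python
import numpy as np

def sm_nlms(x, d, num_taps, gamma, delta=1e-6):
    """Set-membership NLMS: adapt only when |e| exceeds the bound gamma."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]      # regressor, newest sample first
        e = d[n] - w @ u
        if abs(e) > gamma:
            mu = 1.0 - gamma / abs(e)            # just enough to land on the bound
            w += mu * e * u / (u @ u + delta)
        # otherwise: no update, so adaptation is infrequent (and cheap)
    return w

# toy echo-path identification with a sparse impulse response
rng = np.random.default_rng(1)
h = np.zeros(32); h[[3, 10, 20]] = [0.7, -0.4, 0.2]
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = sm_nlms(x, d, num_taps=32, gamma=0.05)
print(np.round(w[[3, 10, 20]], 2))               # close to 0.7, -0.4, 0.2
```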

  • sparse normalized subband adaptive filter algorithm with l0 norm constraint
    Journal of the Franklin Institute, 2016
    Co-Authors: Haiquan Zhao, Badong Chen
    Abstract:

    In order to improve the filter's performance when identifying sparse systems, this paper develops two sparsity-aware algorithms by incorporating the l0-norm constraint of the weight vector into the conventional normalized subband adaptive filter (NSAF) algorithm. The first algorithm is obtained from the principle of minimum perturbation, and the second is based on the gradient descent principle. The resulting algorithms have almost the same convergence and steady-state performance, while the latter has lower computational complexity. Moreover, the performance of both algorithms is analyzed by resorting to some assumptions commonly used in the analysis of adaptive algorithms. Simulation results in the context of sparse system identification not only demonstrate the effectiveness of the proposed algorithms, but also verify the theoretical analyses.
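
The l0-norm term in such algorithms is usually handled through a smooth approximation. Below is a small sketch of the common exponential approximation ||w||_0 ≈ Σ(1 - exp(-β|w_i|)) and the zero-attraction term its gradient yields, attached here to a plain NLMS update as a fullband simplification; this is not the paper's NSAF derivation, and β, ρ and the function names are illustrative.

```python
import numpy as np

def l0_attractor(w, beta=5.0):
    """Gradient of the smooth l0 approximation sum(1 - exp(-beta*|w_i|)):
    strong pull towards zero for small taps, almost none for large ones."""
    return beta * np.sign(w) * np.exp(-beta * np.abs(w))

def l0_nlms(x, d, num_taps, mu=0.5, rho=5e-4, beta=5.0, delta=1e-6):
    """NLMS with an l0-norm zero-attraction term (fullband simplification)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w += mu * e * u / (u @ u + delta) - rho * l0_attractor(w, beta)
    return w
```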

Shuicheng Yan - One of the best experts on this subject based on the ideXlab platform.

  • jointly learning structured analysis discriminative dictionary and analysis multiclass classifier
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zhao Zhang, Weiming Jiang, Jie Qin, Li Zhang, Min Zhang, Shuicheng Yan
    Abstract:

    In this paper, we propose an analysis-mechanism-based structured Analysis Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates analysis discriminative dictionary learning, analysis representation and analysis classifier training into a unified model. The applied analysis mechanism ensures that the learnt dictionaries, representations and linear classifiers over different classes are as independent and discriminative as possible. The dictionary is obtained by minimizing a reconstruction error and an analytical incoherence-promoting term that encourages the sub-dictionaries associated with different classes to be independent. To obtain the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on the coding coefficients instead of using an l0- or l1-norm, since the l0- or l1-norm constraint applied in most existing DL criteria makes the training phase time-consuming. The code-extraction projection that bridges the data with the sparse codes by extracting special features from the given samples is calculated by minimizing a sparse-code approximation term. We then compute a linear classifier based on the approximated sparse codes through an analysis mechanism, so that the classification and representation powers are considered simultaneously. The classification stage of our model is therefore very efficient, because it avoids the extra, time-consuming sparse reconstruction with the trained dictionary that most existing DL algorithms require for each new test sample. Simulations on real image databases demonstrate that our ADDL model obtains superior performance over other state-of-the-art methods.
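
For readers unfamiliar with the l2,1 norm, the sketch below shows what it computes on a coefficient matrix and the row-wise shrinkage (proximal) operator that is a standard way of handling such a penalty. Whether rows or columns are grouped depends on the data layout, and this is a generic illustration rather than ADDL's actual solver.

```python
import numpy as np

def l21_norm(A):
    """l_{2,1} norm: sum of the l2 norms of the rows of A."""
    return np.sum(np.linalg.norm(A, axis=1))

def l21_prox(A, tau):
    """Proximal operator of tau*||.||_{2,1}: row-wise soft thresholding that
    zeroes rows whose l2 norm is below tau and mildly shrinks the others."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return A * scale

A = np.array([[0.05, -0.02], [1.0, 2.0], [0.0, 0.3]])
print(l21_norm(A))
print(l21_prox(A, 0.2))   # first row is zeroed, the larger rows are mildly shrunk
```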

  • jointly learning structured analysis discriminative dictionary and analysis multiclass classifier
    IEEE Transactions on Neural Networks and Learning Systems, 2018
    Co-Authors: Zhao Zhang, Weiming Jiang, Jie Qin, Li Zhang, Min Zhang, Shuicheng Yan
    Abstract:

    In this paper, we propose an analysis-mechanism-based structured analysis discriminative dictionary learning (ADDL) framework. ADDL seamlessly integrates analysis discriminative dictionary learning, analysis representation, and analysis classifier training into a unified model. The applied analysis mechanism ensures that the learned dictionaries, representations, and linear classifiers over different classes are as independent and discriminative as possible. The dictionary is obtained by minimizing a reconstruction error and an analytical incoherence-promoting term that encourages the subdictionaries associated with different classes to be independent. To obtain the representation coefficients, ADDL imposes a sparse l2,1-norm constraint on the coding coefficients instead of using an l0- or l1-norm, since the l0- or l1-norm constraint applied in most existing DL criteria makes the training phase time-consuming. The code-extraction projection that bridges the data with the sparse codes by extracting special features from the given samples is calculated by minimizing a sparse-code approximation term. We then compute a linear classifier based on the approximated sparse codes through an analysis mechanism, so that the classification and representation powers are considered simultaneously. The classification stage of our model is therefore very efficient, because it avoids the extra, time-consuming sparse reconstruction with the trained dictionary that most existing DL algorithms require for each new test sample. Simulations on real image databases demonstrate that our ADDL model obtains superior performance over other state-of-the-art methods.

F Tong - One of the best experts on this subject based on the ideXlab platform.

  • gradient optimization p-norm-like constraint lms algorithm for sparse system estimation
    Signal Processing, 2013
    Co-Authors: F Tong
    Abstract:

    In order to improve the sparsity exploitation performance of norm constraint least mean square (LMS) algorithms, a novel adaptive algorithm is proposed by introducing a variable p-norm-like constraint into the cost function of the LMS algorithm, which exerts a zero attraction on the weight update iterations. The parameter p of the p-norm-like constraint is adjusted iteratively along the negative gradient direction of the cost function. Numerical simulations show that the proposed algorithm outperforms the traditional l0- and l1-norm constraint LMS algorithms.
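
As a hedged sketch of the basic mechanism, the code below augments a plain LMS update with a p-norm-like zero attractor, i.e. the (regularised) gradient of Σ|w_i|^p. The gradient-descent adjustment of p itself, which is the paper's contribution, is not reproduced; the step sizes, ε and function names are illustrative.

```python
import numpy as np

def pnorm_attractor(w, p=0.6, eps=1e-3):
    """(Sub)gradient of sum(|w_i|^p): p*sign(w_i)/(eps + |w_i|)^(1-p);
    eps bounds the attraction strength for coefficients near zero."""
    return p * np.sign(w) / (eps + np.abs(w)) ** (1.0 - p)

def pnorm_lms(x, d, num_taps, mu=0.01, rho=2e-5, p=0.6):
    """LMS with a p-norm-like zero-attraction term for sparse systems."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w += mu * e * u - rho * pnorm_attractor(w, p)
    return w
```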

  • non-uniform norm constraint lms algorithm for sparse system identification
    IEEE Communications Letters, 2013
    Co-Authors: F Tong
    Abstract:

    The sparsity property has long been exploited, in the form of an l0-norm or l1-norm constraint, to improve the performance of least mean square (LMS) based identification of sparse systems. However, there is a lack of theoretical investigation into the optimum norm constraint for a specific system with a given sparsity. This paper presents an approach that seeks a tradeoff between the sparsity exploitation effect of the norm constraint and the estimation bias it produces, from which a novel algorithm is derived that modifies the cost function of the classic LMS algorithm with a non-uniform norm (p-norm-like) penalty. This modification is equivalent to imposing a sequence of l0-norm or l1-norm zero-attraction elements on the iteration according to the relative value of each filter coefficient among all the entries. The advantages of the proposed method, including an improved convergence rate and better tolerance of different sparsity levels, are demonstrated by numerical simulations.
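
As one plausible reading of the non-uniform norm idea, the sketch below applies a sign-based zero attraction only to the relatively small coefficients and leaves the large, presumably active, taps unbiased. The mean-magnitude threshold and the names are assumptions made for illustration, not the paper's exact rule.

```python
import numpy as np

def nonuniform_attractor(w):
    """Selective zero attraction: pull only the relatively small coefficients
    towards zero, leaving the large (likely active) taps unbiased."""
    thresh = np.mean(np.abs(w))
    return np.where(np.abs(w) < thresh, np.sign(w), 0.0)

# inside an LMS iteration this would appear as
#   w += mu * e * u - rho * nonuniform_attractor(w)
```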