Squared Euclidean Distance

The Experts below are selected from a list of 2469 Experts worldwide, ranked by the ideXlab platform.

Bixio Rimoldi - One of the best experts on this subject based on the ideXlab platform.

Ranjith Liyanapathirana - One of the best experts on this subject based on the ideXlab platform.

W Firmanto - One of the best experts on this subject based on the ideXlab platform.

  • performance and design of space-time coding in fading channels
    IEEE Transactions on Communications, 2003
    Co-Authors: Jinhong Yuan, Zhuo Chen, Branka Vucetic, W Firmanto
    Abstract:

    The pairwise-error probability upper bounds of space-time codes (STCs) in independent Rician fading channels are derived. Based on the performance analysis, novel code design criteria for slow and fast Rayleigh fading channels are developed. It is found that, in fading channels, the STC design criteria depend on the possible diversity gain of the system. In slow fading channels, when the diversity gain is smaller than four, the code error performance is dominated by the minimum rank and the minimum determinant of the codeword Distance matrix. However, when the diversity gain is larger than or equal to four, the performance is dominated by the minimum Squared Euclidean Distance. Based on the proposed design criteria, new codes are designed and evaluated by simulation.
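The design quantities in this abstract can be illustrated numerically. The sketch below builds two hypothetical codeword matrices (the specific symbols are invented for illustration, not taken from the paper), forms the codeword Distance matrix A = D D^H, and evaluates the three criteria mentioned: minimum rank, minimum determinant, and squared Euclidean distance (which equals the trace of A).

```python
import numpy as np

# Two hypothetical space-time codeword matrices (2 Tx antennas x 4 symbol
# periods), drawn from a QPSK alphabet purely for illustration.
C1 = np.array([[1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j],
               [0 + 1j, 1 + 0j, 0 - 1j, -1 + 0j]])
C2 = np.array([[1 + 0j, -1 + 0j, 1 + 0j, 0 - 1j],
               [0 - 1j, 1 + 0j, 0 + 1j, 1 + 0j]])

D = C1 - C2                 # codeword difference matrix
A = D @ D.conj().T          # codeword Distance matrix A = D D^H

rank = np.linalg.matrix_rank(A)   # rank criterion (governs diversity gain)
det = np.linalg.det(A).real       # determinant criterion (governs coding gain)
d2 = np.linalg.norm(D) ** 2       # squared Euclidean distance = trace(A)

print(rank, det, d2)
```

For this pair the difference matrix has full rank, so the codeword pair achieves the maximum transmit diversity; per the abstract, once the overall diversity gain is at least four, it is d2 rather than the rank/determinant pair that dominates performance.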

Jonathon A Chambers - One of the best experts on this subject based on the ideXlab platform.

Cedric Fevotte - One of the best experts on this subject based on the ideXlab platform.

  • nonlinear hyperspectral unmixing with robust nonnegative matrix factorization
    arXiv: Methodology, 2014
    Co-Authors: Cedric Fevotte, Nicolas Dobigeon
    Abstract:

    This paper introduces a robust mixing model to describe hyperspectral data resulting from the mixture of several pure spectral signatures. This new model not only generalizes the commonly used linear mixing model, but also allows for possible nonlinear effects to be easily handled, relying on mild assumptions regarding these nonlinearities. The standard nonnegativity and sum-to-one constraints inherent to spectral unmixing are coupled with a group-sparse constraint imposed on the nonlinearity component. This results in a new form of robust nonnegative matrix factorization. The data fidelity term is expressed as a beta-divergence, a continuous family of dissimilarity measures that takes the Squared Euclidean Distance and the generalized Kullback-Leibler divergence as special cases. The penalized objective is minimized with a block-coordinate descent that involves majorization-minimization updates. Simulation results obtained on synthetic and real data show that the proposed strategy competes with state-of-the-art linear and nonlinear unmixing methods.
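The beta-divergence family mentioned in the abstract can be written down compactly. The following sketch uses the common NMF parameterization (conventions for scaling vary across papers); it is an illustration of the family, not the authors' unmixing algorithm. The squared Euclidean distance, generalized Kullback-Leibler, and Itakura-Saito divergences appear at beta = 2, 1, and 0 respectively.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Sum of elementwise beta-divergences d_beta(x | y).

    beta = 2 -> half the squared Euclidean distance
    beta = 1 -> generalized Kullback-Leibler divergence
    beta = 0 -> Itakura-Saito divergence
    Other beta values use the general closed form.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if beta == 2:
        return 0.5 * np.sum((x - y) ** 2)
    if beta == 1:
        return np.sum(x * np.log(x / y) - x + y)
    if beta == 0:
        return np.sum(x / y - np.log(x / y) - 1)
    # General case, continuous in beta away from 0 and 1
    return np.sum((x ** beta + (beta - 1) * y ** beta
                   - beta * x * y ** (beta - 1)) / (beta * (beta - 1)))
```

Each member vanishes only when x = y elementwise, and the general formula reduces to the three special cases in the limit, which is what makes the family a convenient single knob for the data-fidelity term.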

  • automatic relevance determination in nonnegative matrix factorization with the beta divergence
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
    Co-Authors: Cedric Fevotte
    Abstract:

    This paper addresses the estimation of the latent dimensionality in nonnegative matrix factorization (NMF) with the β-divergence. The β-divergence is a family of cost functions that includes the Squared Euclidean Distance, Kullback-Leibler (KL) and Itakura-Saito (IS) divergences as special cases. Learning the model order is important as it is necessary to strike the right balance between data fidelity and overfitting. We propose a Bayesian model based on automatic relevance determination (ARD) in which the columns of the dictionary matrix and the rows of the activation matrix are tied together through a common scale parameter in their prior. A family of majorization-minimization (MM) algorithms is proposed for maximum a posteriori (MAP) estimation. A subset of scale parameters is driven to a small lower bound in the course of inference, with the effect of pruning the corresponding spurious components. We demonstrate the efficacy and robustness of our algorithms by performing extensive experiments on synthetic data, the swimmer dataset, a music decomposition example, and a stock price prediction task.
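To make the MM machinery concrete, here is a minimal sketch of plain NMF with multiplicative MM updates for the squared Euclidean (beta = 2) case, i.e. the classic Lee-Seung updates. This is the non-Bayesian baseline the paper builds on: the latent dimensionality K is fixed by hand, whereas the ARD prior described above would drive the scales of spurious components toward a lower bound and prune them automatically. All matrices and sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))           # nonnegative data matrix
K = 5                              # model order, fixed here (ARD would infer it)
W = rng.random((20, K)) + 1e-3     # dictionary (columns = components)
H = rng.random((K, 30)) + 1e-3     # activations (rows = components)

eps = 1e-12                        # guard against division by zero
err0 = np.linalg.norm(V - W @ H) ** 2
for _ in range(200):
    # Multiplicative MM updates; each step decreases ||V - WH||^2
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) ** 2
print(err0, err)
```

The multiplicative form keeps W and H nonnegative by construction, which is why the same majorization-minimization template extends cleanly to the full beta-divergence family and to the MAP objective with the ARD prior.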

  • automatic relevance determination in nonnegative matrix factorization with the beta divergence
    arXiv: Machine Learning, 2011
    Co-Authors: Cedric Fevotte
    Abstract:

    This paper addresses the estimation of the latent dimensionality in nonnegative matrix factorization (NMF) with the β-divergence. The β-divergence is a family of cost functions that includes the Squared Euclidean Distance, Kullback-Leibler and Itakura-Saito divergences as special cases. Learning the model order is important as it is necessary to strike the right balance between data fidelity and overfitting. We propose a Bayesian model based on automatic relevance determination in which the columns of the dictionary matrix and the rows of the activation matrix are tied together through a common scale parameter in their prior. A family of majorization-minimization algorithms is proposed for maximum a posteriori (MAP) estimation. A subset of scale parameters is driven to a small lower bound in the course of inference, with the effect of pruning the corresponding spurious components. We demonstrate the efficacy and robustness of our algorithms by performing extensive experiments on synthetic data, the swimmer dataset, a music decomposition example, and a stock price prediction task.