Usual Euclidean

The experts below are selected from a list of 5,148 experts worldwide, ranked by the ideXlab platform.

Angshul Majumdar - One of the best experts on this subject based on the ideXlab platform.

  • Nuclear norm regularized robust dictionary learning for energy disaggregation
    European Signal Processing Conference, 2016
    Co-Authors: Megha Gupta, Angshul Majumdar
    Abstract:

    The goal of this work is energy disaggregation. A recent work showed that, instead of employing the usual Euclidean norm as the cost function for dictionary learning, better results can be achieved by learning the dictionaries in a robust fashion with an ℓ1-norm cost function, because energy data is corrupted by large but sparse outliers. In this work we propose to improve the robust dictionary learning approach by imposing a low-rank (nuclear norm) penalty on the learned coefficients. The ensuing formulation is solved using a combination of the Split Bregman and Majorization-Minimization approaches. Experiments on the REDD dataset show that the proposed method yields better results than both the robust dictionary learning technique and the recently published work on powerlet-based energy disaggregation.
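To make the formulation concrete, here is a minimal sketch of one way to attack min_{D,Z} ||Y - DZ||_1 + λ||Z||_*: the ℓ1 data term is majorized IRLS-style (one flavor of majorization-minimization), and the nuclear-norm penalty is handled by a singular-value-thresholding proximal step. This is an illustrative stand-in, not the authors' Split Bregman formulation; the function names, step-size bound, and the λ, atom-count, and iteration defaults are all assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def robust_lowrank_dl(Y, n_atoms=20, lam=0.1, n_iter=50, eps=1e-6, seed=0):
    """Toy solver for  min_{D,Z} ||Y - D Z||_1 + lam * ||Z||_*  (hypothetical
    names/defaults). The l1 data term is majorized IRLS-style; the nuclear-norm
    penalty is handled by a proximal-gradient step with SVT."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    Z = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # IRLS weights: majorize |r| by r^2 / (2|r_old|) at the current residual
        R = Y - D @ Z
        W = 1.0 / np.maximum(np.abs(R), eps)
        # Z update: one proximal-gradient step on the weighted quadratic surrogate
        G = -D.T @ (W * R)                               # gradient w.r.t. Z
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 * W.max() + eps)
        Z = svt(Z - step * G, lam * step)
        # D update: column-wise weighted least squares, then renormalize atoms
        for k in range(n_atoms):
            E = Y - D @ Z + np.outer(D[:, k], Z[k])      # residual excluding atom k
            D[:, k] = ((W * E) @ Z[k]) / (W @ Z[k] ** 2 + eps)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), eps)
    return D, Z
```

The IRLS surrogate is the simplest majorization of the ℓ1 fit; a Split Bregman treatment, as in the paper, would instead introduce a splitting variable for the residual and alternate exact subproblem solves.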

Megha Gupta - One of the best experts on this subject based on the ideXlab platform.

  • Nuclear norm regularized robust dictionary learning for energy disaggregation
    European Signal Processing Conference, 2016
    Co-Authors: Megha Gupta, Angshul Majumdar
    Abstract:

    The goal of this work is energy disaggregation. A recent work showed that, instead of employing the usual Euclidean norm as the cost function for dictionary learning, better results can be achieved by learning the dictionaries in a robust fashion with an ℓ1-norm cost function, because energy data is corrupted by large but sparse outliers. In this work we propose to improve the robust dictionary learning approach by imposing a low-rank (nuclear norm) penalty on the learned coefficients. The ensuing formulation is solved using a combination of the Split Bregman and Majorization-Minimization approaches. Experiments on the REDD dataset show that the proposed method yields better results than both the robust dictionary learning technique and the recently published work on powerlet-based energy disaggregation.

Steven S. Plotkin - One of the best experts on this subject based on the ideXlab platform.

  • Structural alignment using the generalized Euclidean distance between conformations
    International Journal of Quantum Chemistry, 2009
    Co-Authors: Ali R. Mohazab, Steven S. Plotkin
    Abstract:

    The usual Euclidean distance may be generalized to extended objects such as polymers or membranes. Here, this distance is used for the first time as a cost function to align structures. We examine the alignment of extended strands to idealized β-hairpins of various sizes using several cost functions, including RMSD, MRSD, and the minimal distance. We find that using the minimal distance as a cost function typically results in an aligned structure that is globally different from that given by an RMSD-based alignment. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem 109: 3217-3228, 2009
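For reference, the two standard cost functions named above can be sketched as follows: Kabsch alignment gives the closed-form rotation minimizing RMSD, and MRSD is the mean of per-point straight-line distances. This is a generic illustration under the usual fixed point-to-point correspondence, not the authors' generalized-distance alignment; the function names are illustrative.

```python
import numpy as np

def kabsch_align(P, Q):
    """Rotate centered P onto centered Q with the rotation minimizing RMSD
    (Kabsch algorithm). P, Q: (N, 3) arrays of corresponding points."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T, Qc

def rmsd(A, B):
    """Root mean-squared distance between corresponding points."""
    return np.sqrt(np.mean(np.sum((A - B) ** 2, axis=1)))

def mrsd(A, B):
    """Mean root-squared distance: average per-point straight-line distance."""
    return np.mean(np.linalg.norm(A - B, axis=1))
```

In this convention, mrsd(*kabsch_align(P, Q)) evaluates MRSD at the RMSD-optimal rotation, which in general differs from the MRSD-optimal alignment the abstract contrasts it with.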

  • Minimal Folding Pathways for Coarse-Grained Biopolymer Fragments
    Biophysical journal, 2008
    Co-Authors: Ali R. Mohazab, Steven S. Plotkin
    Abstract:

    The minimal folding pathway or trajectory for a biopolymer can be defined as the transformation that minimizes the total distance traveled between a folded and an unfolded structure. This involves generalizing the usual Euclidean distance from points to one-dimensional objects such as a polymer. We apply this distance here to find minimal folding pathways for several candidate protein fragments, including the helix, the β-hairpin, and a nonplanar structure for which chain noncrossing is important. Comparing the distances traveled with the root mean-squared distance (RMSD) and the mean root-squared distance (MRSD), we show that chain noncrossing can have large effects on the kinetic proximity of apparently similar conformations. Structures aligned to the β-hairpin by minimizing MRSD, a quantity that closely approximates the true distance for long chains, show a globally different orientation than structures aligned by minimizing RMSD.
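Unlike RMSD, MRSD admits no closed-form optimal rotation, so an MRSD-based alignment must be found numerically. Below is a minimal sketch, assuming SciPy is available and parametrizing the rotation as an axis-angle vector with a derivative-free solver; the function name and solver choice are assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def mrsd_align(P, Q):
    """Find a rotation (approximately) minimizing MRSD between centered
    conformations P and Q, both (N, 3). MRSD has no Kabsch-style closed
    form, so the rotation vector is optimized numerically."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)

    def cost(rotvec):
        R = Rotation.from_rotvec(rotvec).as_matrix()
        return np.mean(np.linalg.norm(Pc @ R.T - Qc, axis=1))

    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    R = Rotation.from_rotvec(res.x).as_matrix()
    return Pc @ R.T, Qc
```

Comparing the output of this routine with a Kabsch (RMSD) alignment on the same pair of conformations illustrates the globally different orientations reported in the abstract.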

Jérémie Bigot - One of the best experts on this subject based on the ideXlab platform.

  • Fréchet means of curves for signal averaging and application to ECG data analysis
    The Annals of Applied Statistics, 2013
    Co-Authors: Jérémie Bigot
    Abstract:

    Signal averaging is the process of computing a mean shape from a set of noisy signals. In the presence of geometric variability in time in the data, the usual Euclidean mean of the raw data yields a mean pattern that does not reflect the typical shape of the observed signals. In this setting, it is necessary to use alignment techniques to synchronize the signals precisely, and then to average the aligned data to obtain a consistent mean shape. In this paper, we study the numerical performance of Fréchet means of curves, which extend the usual Euclidean mean to spaces endowed with non-Euclidean metrics. This yields a new algorithm for signal averaging and for estimating the time variability of a set of signals. We apply this approach to the analysis of heartbeats from ECG records.
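As a toy contrast between the usual Euclidean mean and an "align then average" scheme, the sketch below aligns signals by circular time shifts found via FFT cross-correlation before averaging. This handles only rigid shifts, a far simpler deformation class than the time warpings the paper considers; the function names and iteration count are illustrative assumptions.

```python
import numpy as np

def euclidean_mean(signals):
    """The usual (pointwise) Euclidean mean; signals is an (n, T) array."""
    return signals.mean(axis=0)

def shift_aligned_mean(signals, n_iter=5):
    """Iteratively shift each signal to best match the current template
    (circular cross-correlation via the FFT), then re-average. A toy
    'align then average' scheme restricted to rigid time shifts."""
    template = signals.mean(axis=0)
    for _ in range(n_iter):
        Ft = np.fft.fft(template)
        aligned = []
        for s in signals:
            # xcorr[k] = sum_n template[n] * s[n - k]  (circular)
            xcorr = np.fft.ifft(Ft * np.conj(np.fft.fft(s))).real
            aligned.append(np.roll(s, np.argmax(xcorr)))
        template = np.mean(aligned, axis=0)
    return template
```

Running euclidean_mean on time-shifted copies of a sharp pulse smears the peak, while shift_aligned_mean recovers it; this is exactly the failure mode of the raw Euclidean mean described in the abstract.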

  • Fréchet means of curves for signal averaging and application to ECG data analysis
    arXiv: Applications, 2011
    Co-Authors: Jérémie Bigot
    Abstract:

    Signal averaging is the process of computing a mean shape from a set of noisy signals. In the presence of geometric variability in time in the data, the usual Euclidean mean of the raw data yields a mean pattern that does not reflect the typical shape of the observed signals. In this setting, it is necessary to use alignment techniques to synchronize the signals precisely, and then to average the aligned data to obtain a consistent mean shape. In this paper, we study the numerical performance of Fréchet means of curves, which extend the usual Euclidean mean to spaces endowed with non-Euclidean metrics. This yields a new algorithm for signal averaging without a reference template. We apply this approach to the estimation of a mean heart cycle from ECG records.
