Harmonic Mean

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 28,863 Experts worldwide ranked by the ideXlab platform

Ben Andrews - One of the best experts on this subject based on the ideXlab platform.

  • Harmonic Mean curvature flow and geometric inequalities
    Advances in Mathematics, 2020
    Co-Authors: Ben Andrews
    Abstract:

    We employ the Harmonic Mean curvature flow of strictly convex closed hypersurfaces in hyperbolic space to prove Alexandrov-Fenchel type inequalities relating quermassintegrals to the total curvature, which is the integral of Gaussian curvature on the hypersurface. The resulting inequality allows us to use the inverse Mean curvature flow to prove Alexandrov-Fenchel inequalities between the total curvature and the area for strictly convex hypersurfaces. Finally, we apply the Harmonic Mean curvature flow to prove a new class of geometric inequalities for h-convex hypersurfaces in hyperbolic space.
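For background, the flow named above moves each point of the hypersurface in the normal direction with speed equal to the harmonic mean of the principal curvatures. In one common normalization (ours; the paper's exact speed function may differ by a constant factor):

```latex
% Harmonic mean curvature flow, one common normalization.
\[
  \partial_t X = -F\,\nu,
  \qquad
  F = \Big(\sum_{i=1}^{n} \frac{1}{\kappa_i}\Big)^{-1},
\]
% where X is the embedding of the evolving hypersurface, \nu its outward
% unit normal, and \kappa_1,\dots,\kappa_n its principal curvatures.
% For surfaces (n = 2) the speed is F = K/H, with K = \kappa_1\kappa_2
% the Gauss curvature and H = \kappa_1 + \kappa_2 the mean curvature.
```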

  • Harmonic Mean curvature flow and geometric inequalities
    arXiv: Differential Geometry, 2019
    Co-Authors: Ben Andrews
    Abstract:

    In this article, we will use the Harmonic Mean curvature flow to prove a new class of Alexandrov-Fenchel type inequalities for strictly convex hypersurfaces in hyperbolic space in terms of total curvature, which is the integral of Gaussian curvature on the hypersurface. We will also use the Harmonic Mean curvature flow to prove a new class of geometric inequalities for horospherically convex hypersurfaces in hyperbolic space. Using these new Alexandrov-Fenchel type inequalities and the inverse Mean curvature flow, we obtain an Alexandrov-Fenchel inequality for strictly convex hypersurfaces in hyperbolic space, which was previously proved for horospherically convex hypersurfaces by Wang and Xia [44]. Finally, we use the Mean curvature flow to prove a new Heintze-Karcher type inequality for hypersurfaces with positive Ricci curvature in hyperbolic space.

Juliane Sigl - One of the best experts on this subject based on the ideXlab platform.

  • Harmonic Mean iteratively reweighted least squares for low rank matrix recovery
    Journal of Machine Learning Research, 2018
    Co-Authors: Christian Kummerle, Juliane Sigl
    Abstract:

    We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix X ∈ ℂ^(d1×d2) of rank r ≪ min(d1, d2) from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call Harmonic Mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-p quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to 1 even for a number of measurements very close to the theoretical lower bound r(d1+d2-r), i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order 2 − p) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result.
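As a toy scalar sketch (ours, not the authors' implementation; the names `schatten_weight`, `eps`, and `p` are illustrative), the two ingredients the abstract names can be seen in miniature: a smoothed Schatten-p weight per singular value, and the harmonic mean used to combine a left and a right weight, which is always controlled by the smaller of the two:

```python
def schatten_weight(sigma, eps, p):
    # Smoothed Schatten-p weight for one singular value:
    # w = (sigma^2 + eps^2)^(p/2 - 1); small sigmas get large weights.
    return (sigma ** 2 + eps ** 2) ** (p / 2 - 1)

def harmonic_mean(a, b):
    # Harmonic mean of two positive weights: 2ab / (a + b).
    return 2.0 * a * b / (a + b)

p, eps = 0.5, 0.1
w_signal = schatten_weight(10.0, eps, p)   # large singular value -> small weight
w_noise = schatten_weight(0.05, eps, p)    # tiny singular value -> large weight

hm = harmonic_mean(w_signal, w_noise)
# The combined weight never exceeds twice the smaller one, so the heavy
# penalization of noise directions does not leak onto signal directions.
assert hm <= 2.0 * min(w_signal, w_noise)
print(w_signal, w_noise, hm)
```

In HM-IRLS proper, this harmonic-mean combination is applied to the left and right weight operators rather than to scalars; the scalar picture only illustrates why the combination stays well conditioned.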

  • Harmonic Mean iteratively reweighted least squares for low rank matrix recovery
    International Conference on Sampling Theory and Applications, 2017
    Co-Authors: Christian Kummerle, Juliane Sigl
    Abstract:

    We propose a new Iteratively Reweighted Least Squares (IRLS) algorithm for the problem of recovering a matrix X ∈ ℝ^(d1×d2) of rank r ≪ min(d1, d2) from incomplete linear observations, solving a sequence of quadratic problems. The easily implementable algorithm, which we call Harmonic Mean Iteratively Reweighted Least Squares (HM-IRLS), is superior to state-of-the-art algorithms for the low-rank recovery problem in several performance aspects. More specifically, the strategy HM-IRLS uses to optimize a non-convex Schatten-p penalization to promote low-rankness carries three major strengths, in particular for the matrix completion setting.

  • Harmonic Mean iteratively reweighted least squares for low rank matrix recovery
    arXiv: Numerical Analysis, 2017
    Co-Authors: Christian Kummerle, Juliane Sigl
    Abstract:

    We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix $X \in \mathbb{C}^{d_1\times d_2}$ of rank $r \ll\min(d_1,d_2)$ from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call Harmonic Mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-$p$ quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to $1$ even for a number of measurements very close to the theoretical lower bound $r (d_1 +d_2 -r)$, i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order $2-p$) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result.

Heng Huang - One of the best experts on this subject based on the ideXlab platform.

  • Harmonic Mean Linear Discriminant Analysis
    IEEE Transactions on Knowledge and Data Engineering, 2019
    Co-Authors: Shuai Zheng, Chris Ding, Feiping Nie, Heng Huang
    Abstract:

    In machine learning and data mining, dimensionality reduction is one of the main tasks. Linear Discriminant Analysis (LDA) is a widely used supervised dimensionality reduction algorithm that has attracted considerable research interest. Classical Linear Discriminant Analysis finds a subspace to minimize within-class distance and maximize between-class distance, where between-class distance is computed using the arithmetic Mean of all between-class distances. However, the arithmetic Mean between-class distance has some limitations. First, the arithmetic Mean gives equal weight to all between-class distances, so large between-class distances can dominate the result. Second, it does not consider pairwise between-class distances, and thus some classes may overlap with each other in the subspace. In this paper, we propose two formulations of Harmonic Mean based Linear Discriminant Analysis, HLDA and HLDAp, to demonstrate the benefit of the Harmonic Mean between-class distance and overcome the limitations of classical LDA. We compare our algorithm with 11 existing single-label algorithms on seven datasets and five existing multi-label algorithms on two datasets. On some single-label experiment data, the absolute increase in classification accuracy reaches 39 percent compared to state-of-the-art algorithms; on multi-label data, significant improvements on five evaluation metrics are achieved compared to existing algorithms.
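The contrast between the two means is easy to see numerically. With toy squared between-class distances (illustrative numbers, not from the paper), the arithmetic mean is pulled up by the large distances while the harmonic mean is pinned down by the smallest one, so maximizing the latter forces the nearly overlapping pair apart:

```python
import numpy as np

# Illustrative between-class distances for three class pairs:
# one pair is nearly overlapping.
d = np.array([0.1, 5.0, 6.0])

arithmetic = d.mean()                # pulled up by the large distances
harmonic = len(d) / np.sum(1.0 / d)  # pinned down by the smallest distance

print(arithmetic)  # ≈ 3.7
print(harmonic)    # ≈ 0.289
```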

  • a Harmonic Mean linear discriminant analysis for robust image classification
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Shuai Zheng, Chris Ding, Feiping Nie, Heng Huang
    Abstract:

    Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, NLDA has some limitations. First, for many datasets the null space of the within-class scatter matrix does not exist, so NLDA is not applicable to them. Second, NLDA uses the arithmetic Mean of between-class distances and gives equal consideration to all of them, which allows larger between-class distances to dominate the result and thus limits the performance of NLDA. In this paper, we propose a Harmonic Mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of the weighted Harmonic Mean of pairwise between-class distances. More importantly, MCDA gives higher priority to maximizing small between-class distances. MCDA can be extended to multi-label dimension reduction. Results on 7 single-label datasets and 4 multi-label datasets show that MCDA consistently outperforms 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy and macro and micro average F1 scores.
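The objective described above, the reciprocal of a weighted harmonic mean, is a weighted sum of reciprocals. A small sketch with hypothetical distances and uniform weights (both illustrative, not from the paper) shows why minimizing it prioritizes the close pairs: the objective reacts strongly to the smallest distance and barely at all to the largest:

```python
import numpy as np

d = np.array([0.1, 5.0, 6.0])   # pairwise between-class distances (toy)
w = np.ones_like(d) / len(d)    # uniform weights, for illustration

# Reciprocal of the weighted harmonic mean: sum_ij w_ij / d_ij.
obj = np.sum(w / d)

# Doubling only the smallest distance nearly halves the objective,
d_small = d.copy(); d_small[0] *= 2
obj_small = np.sum(w / d_small)

# while doubling the largest distance barely moves it.
d_large = d.copy(); d_large[2] *= 2
obj_large = np.sum(w / d_large)

print(obj, obj_small, obj_large)
```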

Feiping Nie - One of the best experts on this subject based on the ideXlab platform.

  • beyond trace ratio weighted Harmonic Mean of trace ratios for multiclass discriminant analysis
    IEEE Transactions on Knowledge and Data Engineering, 2017
    Co-Authors: Feiping Nie, Xiaojun Chang, Yi Yang
    Abstract:

    Linear discriminant analysis (LDA) is one of the most important supervised linear dimensionality reduction techniques. It seeks a low-dimensional representation of the original high-dimensional feature space through a transformation matrix, preserving discriminative information by maximizing the between-class scatter and minimizing the within-class scatter. However, conventional LDA is formulated to maximize the arithmetic Mean of trace ratios, which suffers from domination by the largest objectives and can deteriorate recognition accuracy in practical applications with a large number of classes. In this paper, we propose a new criterion that maximizes the weighted Harmonic Mean of trace ratios, which effectively avoids the domination problem without raising any difficulties in the formulation. An efficient algorithm with fast convergence is developed to solve the proposed problem; it can find the globally optimal solution using just an eigenvalue decomposition in each iteration. Finally, we conduct extensive experiments to illustrate the effectiveness and superiority of our method on both synthetic and real-life datasets for various tasks, including face recognition, human motion recognition, and head pose recognition. The experimental results indicate that our algorithm consistently outperforms the compared methods on all of the datasets.
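Schematically, in our notation (which may differ from the paper's), with m pairwise trace ratios r_ij for classes i and j, the contrast the abstract draws is:

```latex
% Conventional criterion: arithmetic mean of the m pairwise trace ratios,
% which the largest ratios can dominate.
\[
  \max_{W}\ \frac{1}{m}\sum_{i<j} r_{ij},
  \qquad
  r_{ij} = \frac{\operatorname{tr}\!\big(W^{\top} S_b^{ij} W\big)}
                {\operatorname{tr}\!\big(W^{\top} S_w W\big)}.
\]
% Proposed criterion: a weighted harmonic mean of the same ratios,
% which is controlled by the smallest ratios and so protects close
% class pairs.
\[
  \max_{W}\ \Big(\sum_{i<j} \frac{w_{ij}}{r_{ij}}\Big)^{-1}.
\]
```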
