Linear Discriminant

The experts below are selected from a list of 50,727 experts worldwide, ranked by the ideXlab platform.

Yuan-hai Shao - One of the best experts on this subject based on the ideXlab platform.

  • Two-dimensional Bhattacharyya bound Linear Discriminant analysis with its applications.
    arXiv: Learning, 2020
    Co-Authors: Yan-ru Guo, Yan-qin Bai, Lan Bai, Yuan-hai Shao
    Abstract:

    Recently, the L2-norm Linear Discriminant analysis criterion based on Bhattacharyya error bound estimation (L2BLDA) was proposed as an effective improvement of Linear Discriminant analysis (LDA) for feature extraction. However, L2BLDA handles only vector input samples. When faced with two-dimensional (2D) inputs such as images, it loses useful information because it ignores the intrinsic structure of the images. In this paper, we extend L2BLDA to a two-dimensional Bhattacharyya bound Linear Discriminant analysis (2DBLDA). 2DBLDA maximizes the matrix-based between-class distance, measured by the weighted pairwise distances of class means, while minimizing the matrix-based within-class distance. The weighting constant between the between-class and within-class terms is determined by the data involved, which makes 2DBLDA adaptive. In addition, the 2DBLDA criterion is equivalent to optimizing an upper bound of the Bhattacharyya error. By construction, 2DBLDA avoids the small sample size problem, possesses robustness, and can be solved through a simple standard eigenvalue decomposition. Experimental results on image recognition and face image reconstruction demonstrate the effectiveness of the proposed method.
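
    The following is a minimal sketch of a 2D LDA-style criterion in the spirit of the abstract: samples and class means are kept as matrices, the between-class term is a weighted sum of pairwise mean differences, and the projection comes from one standard eigendecomposition. The pairwise weights (class priors here) and the trade-off constant delta are illustrative placeholders, not the paper's adaptive choices.

    ```python
    import numpy as np

    def two_d_lda_directions(X, y, delta=1.0):
        """2D LDA-style projection: scatters are (width x width) matrices
        built from matrix-valued samples. Weights and delta are placeholders."""
        classes = np.unique(y)
        means = {c: X[y == c].mean(axis=0) for c in classes}   # class mean matrices
        priors = {c: np.mean(y == c) for c in classes}
        d = X.shape[2]
        Sb = np.zeros((d, d))
        for i, ci in enumerate(classes):                       # weighted pairwise
            for cj in classes[i + 1:]:                         # between-class term
                diff = means[ci] - means[cj]
                Sb += priors[ci] * priors[cj] * diff.T @ diff
        Sw = np.zeros((d, d))
        for Xi, yi in zip(X, y):                               # within-class term
            diff = Xi - means[yi]
            Sw += diff.T @ diff / len(y)
        # maximize tr(W^T (Sb - delta * Sw) W): take leading eigenvectors
        vals, vecs = np.linalg.eigh(Sb - delta * Sw)
        return vecs[:, ::-1]                                   # descending eigenvalue order

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8, 8))          # 60 toy "images", 8x8 each
    y = rng.integers(0, 3, size=60)
    W = two_d_lda_directions(X, y)[:, :2]    # each image projects as X_i @ W (8x2)
    ```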

  • Robust and Sparse Linear Discriminant Analysis via an Alternating Direction Method of Multipliers
    IEEE Transactions on Neural Networks and Learning Systems, 2020
    Co-Authors: Chun-na Li, Yuan-hai Shao
    Abstract:

    In this paper, we propose a robust Linear Discriminant analysis (RLDA) through Bhattacharyya error bound optimization. RLDA solves a nonconvex problem involving the L1-norm, which makes it less sensitive to outliers and noise than L2-norm Linear Discriminant analysis (LDA). In addition, we extend RLDA to a sparse model (RSLDA). Both RLDA and RSLDA can extract an arbitrary number of features and avoid the small sample size (SSS) problem, and an alternating direction method of multipliers (ADMM) is used to cope with the nonconvexity of the proposed formulations. Compared with traditional LDA, RLDA and RSLDA are more robust to outliers and noise, and RSLDA obtains sparse Discriminant directions. These findings are supported by experiments on artificial data sets as well as human face databases.
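
    As an illustration of the ADMM machinery the abstract refers to, the sketch below solves a generic L1-penalized quadratic via the usual variable split. It shows the three-step pattern (quadratic update, shrinkage update, dual update) that sparse-discriminant solvers reuse; it is not the paper's Discriminant objective.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_l1(A, b, lam=0.1, rho=1.0, n_iter=200):
        """ADMM for min_w 0.5*||Aw - b||^2 + lam*||w||_1 via the split w = z."""
        n = A.shape[1]
        w, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # primal, split copy, scaled dual
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))     # factor once, reuse each iteration
        for _ in range(n_iter):
            rhs = Atb + rho * (z - u)
            w = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # (AtA + rho I) w = rhs
            z = soft_threshold(w + u, lam / rho)               # sparsity-inducing step
            u = u + w - z                                      # dual ascent on w = z
        return z

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 20))
    w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]       # sparse ground truth
    b = A @ w_true + 0.01 * rng.normal(size=50)
    print(np.round(admm_l1(A, b, lam=1.0), 2))                 # mostly zeros, 3 active entries
    ```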

  • MBLDA: A novel multiple between-class Linear Discriminant analysis
    Information Sciences, 2016
    Co-Authors: Zhen Wang, Yuan-hai Shao, Lan Bai, Liming Liu, Nai-yang Deng
    Abstract:

    Linear Discriminant analysis (LDA) and its extensions form a group of classical dimensionality-reduction methods for supervised learning. However, when some classes lie far from the others, LDA may fail to find the optimal direction because of the averaged between-class scatter. Moreover, LDA methods are time consuming on high-dimensional problems, since a generalized eigenvalue problem must be solved. In this paper, a multiple between-class Linear Discriminant analysis (MBLDA) is proposed for dimensionality reduction. MBLDA finds the transformation directions by approximating the solution to a min-max programming problem, leading to good separability in the reduced space and a fast learning speed on high-dimensional problems. It is proved theoretically that the proposed method can handle the special generalized eigenvalue problem by solving an underdetermined homogeneous system of Linear equations, as sketched below. Experimental results on artificial and benchmark datasets show that MBLDA not only reduces the dimension while retaining strong Discriminant power but also has a fast learning speed.
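
    The null-space ingredient mentioned above can be illustrated as follows (assuming SciPy is available): when the within-class scatter Sw is singular, as in high-dimensional small-sample settings, the homogeneous system Sw w = 0 is underdetermined, and every direction in its null space has zero within-class spread. This sketch shows only that ingredient, not the full MBLDA min-max solver.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 100))           # 20 samples, 100 features: Sw must be singular
    y = rng.integers(0, 2, size=20)

    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc

    N = null_space(Sw)                       # basis of {w : Sw w = 0}
    print(N.shape)                           # (100, >=80): many zero-within-spread directions
    ```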

  • Robust L1-norm two-dimensional Linear Discriminant analysis
    Neural Networks, 2015
    Co-Authors: Yuan-hai Shao, Nai-yang Deng
    Abstract:

    In this paper, we propose an L1-norm two-dimensional Linear Discriminant analysis (L1-2DLDA) with robust performance. Unlike conventional two-dimensional Linear Discriminant analysis with the L2-norm (L2-2DLDA), where the optimization is transformed into a generalized eigenvalue problem, the optimization in L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, L1-2DLDA is more robust to outliers and noise because of the L1-norm. This is supported by preliminary experiments on a toy example and on face datasets, which show the improvement of L1-2DLDA over L2-2DLDA.
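
    For intuition, the sketch below shows the sign-flipping fixed-point iteration commonly used for L1 projection criteria, here maximizing the plain L1 dispersion of projections in vector form. L1-2DLDA applies this kind of monotone iteration to a ratio of between- and within-class L1 dispersions of matrix inputs, so this is an analogy rather than the paper's exact update.

    ```python
    import numpy as np

    def l1_direction(X, n_iter=100, seed=0):
        """Fixed-point iteration maximizing sum_i |x_i^T w| with ||w|| = 1.
        Each step cannot decrease the objective, so the iteration converges."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            s = np.sign(X @ w)            # subgradient signs of the L1 objective
            s[s == 0] = 1.0               # standard tie-break at zero
            w_new = X.T @ s
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):     # fixed point reached
                break
            w = w_new
        return w

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 5)) @ np.diag([3.0, 1.0, 1.0, 1.0, 1.0])  # dominant axis
    print(np.round(l1_direction(X), 3))   # close to +/- the first coordinate axis
    ```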

Trevor Hastie - One of the best experts on this subject based on the ideXlab platform.

  • Regularized Linear Discriminant analysis and its application in microarrays
    Biostatistics, 2007
    Co-Authors: Trevor Hastie, Robert Tibshirani
    Abstract:

    In this paper, we introduce a modified version of Linear Discriminant analysis, called shrunken centroids regularized Discriminant analysis (SCRDA). This method generalizes the idea of nearest shrunken centroids (NSC) (Tibshirani and others, 2003) to classical Discriminant analysis. SCRDA is designed specifically for classification problems in high-dimension, low-sample-size settings such as microarray data. On both simulated and real data, the method performs very well in multivariate classification problems, often outperforming the PAM method (which uses the NSC algorithm) and remaining competitive with support vector machine classifiers. It is also suitable for feature elimination and can be used as a gene selection method. The open-source R package for this method (named "rda") is available on CRAN (http://www.r-project.org) for download and testing.
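
    A minimal sketch of the two SCRDA ingredients follows: class centroids shrunken toward the grand mean by soft-thresholding, and a covariance blended with the identity so it stays invertible when features outnumber samples. The shrinkage and blending parameters are placeholders (the paper tunes them by cross-validation), and the reference implementation remains the CRAN package "rda"; this Python sketch is only illustrative.

    ```python
    import numpy as np

    def scrda_fit(X, y, alpha=0.5, delta=0.5):
        """Shrunken centroids plus regularized covariance (placeholder alpha, delta)."""
        classes = np.unique(y)
        grand = X.mean(axis=0)
        cents = np.array([X[y == c].mean(axis=0) for c in classes])
        dev = cents - grand
        cents = grand + np.sign(dev) * np.maximum(np.abs(dev) - delta, 0.0)  # shrink
        S = np.cov(X, rowvar=False)
        S_reg = alpha * S + (1.0 - alpha) * np.eye(X.shape[1])  # invertible even if p >> n
        return classes, cents, np.linalg.inv(S_reg)

    def scrda_predict(Xnew, classes, cents, S_inv):
        # linear Discriminant score per class, assuming equal priors
        scores = Xnew @ S_inv @ cents.T - 0.5 * np.diag(cents @ S_inv @ cents.T)
        return classes[np.argmax(scores, axis=1)]

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (30, 40)), rng.normal(1, 1, (30, 40))])
    y = np.repeat([0, 1], 30)
    model = scrda_fit(X, y)
    print((scrda_predict(X, *model) == y).mean())   # training accuracy on toy data
    ```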

  • Functional Linear Discriminant analysis for irregularly sampled curves
    Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2001
    Co-Authors: Gareth M. James, Trevor Hastie
    Abstract:

    We introduce a technique for extending the classical method of Linear Discriminant analysis (LDA) to data sets where the predictor variables are curves or functions. This procedure, which we call functional Linear Discriminant analysis (FLDA), is particularly useful when only fragments of the curves are observed. All the techniques associated with LDA can be extended for use with FLDA. In particular, FLDA can be used to classify new (test) curves, to estimate the Discriminant function between classes, and to provide a one- or two-dimensional pictorial representation of a set of curves. We also extend this procedure to obtain generalizations of quadratic and regularized Discriminant analysis.
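
    The basis-representation idea behind such methods can be sketched as follows: each irregularly sampled fragment is reduced to a fixed-length coefficient vector, on which classical LDA could then be run. FLDA itself fits a shared reduced-rank mixed-effects model, which stays stable when fragments have very few points; the per-curve least-squares fit below, with a simple polynomial basis standing in for splines, is a hypothetical simplification.

    ```python
    import numpy as np

    def curve_coeffs(times, values, n_basis=3):
        """Least-squares basis coefficients for one irregularly sampled fragment.
        Polynomial basis is a stand-in for splines; purely illustrative."""
        B = np.vander(times, n_basis, increasing=True)   # columns 1, t, t^2, ...
        coef, *_ = np.linalg.lstsq(B, values, rcond=None)
        return coef

    # two fragments observed at different, irregular time points
    c1 = curve_coeffs(np.array([0.10, 0.30, 0.80, 0.90]), np.array([1.0, 1.4, 2.9, 3.2]))
    c2 = curve_coeffs(np.array([0.20, 0.50, 0.60, 0.95]), np.array([0.9, 1.1, 1.2, 1.6]))
    print(c1, c2)   # fixed-length vectors, comparable across fragments
    ```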

Nai-yang Deng - One of the best experts on this subject based on the ideXlab platform.

  • MBLDA: A novel multiple between-class Linear Discriminant analysis
    Information Sciences, 2016
    Co-Authors: Zhen Wang, Yuan-hai Shao, Lan Bai, Liming Liu, Nai-yang Deng
    Abstract:

    Linear Discriminant analysis (LDA) and its extensions form a group of classical dimensionality-reduction methods for supervised learning. However, when some classes lie far from the others, LDA may fail to find the optimal direction because of the averaged between-class scatter. Moreover, LDA methods are time consuming on high-dimensional problems, since a generalized eigenvalue problem must be solved. In this paper, a multiple between-class Linear Discriminant analysis (MBLDA) is proposed for dimensionality reduction. MBLDA finds the transformation directions by approximating the solution to a min-max programming problem, leading to good separability in the reduced space and a fast learning speed on high-dimensional problems. It is proved theoretically that the proposed method can handle the special generalized eigenvalue problem by solving an underdetermined homogeneous system of Linear equations. Experimental results on artificial and benchmark datasets show that MBLDA not only reduces the dimension while retaining strong Discriminant power but also has a fast learning speed.

  • Robust L1-norm two-dimensional Linear Discriminant analysis
    Neural Networks, 2015
    Co-Authors: Yuan-hai Shao, Nai-yang Deng
    Abstract:

    In this paper, we propose an L1-norm two-dimensional Linear Discriminant analysis (L1-2DLDA) with robust performance. Unlike conventional two-dimensional Linear Discriminant analysis with the L2-norm (L2-2DLDA), where the optimization is transformed into a generalized eigenvalue problem, the optimization in L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, L1-2DLDA is more robust to outliers and noise because of the L1-norm. This is supported by preliminary experiments on a toy example and on face datasets, which show the improvement of L1-2DLDA over L2-2DLDA.

Christophe Croux - One of the best experts on this subject based on the ideXlab platform.

  • Robust Linear Discriminant analysis for multiple groups: influence and classification efficiencies
    Social Science Research Network, 2005
    Co-Authors: Christophe Croux, Peter Filzmoser, Kristel Joossens
    Abstract:

    Linear Discriminant analysis for multiple groups is typically carried out using Fisher's method, which relies on the sample averages and covariance matrices computed from the different groups constituting the training sample. Since sample averages and covariance matrices are not robust, it is proposed to use robust estimators of location and covariance instead, yielding a robust version of Fisher's method. In this paper, expressions are derived for the influence that an observation in the training set has on the error rate of Fisher's method for multiple-group Linear Discriminant analysis. These influence functions on the error rate turn out to be unbounded for the classical rule but bounded when a robust approach is used. Using these influence functions, we compute relative classification efficiencies of the robust procedures with respect to the classical method. It is shown that, with an appropriate robust estimator, the loss in classification efficiency at the normal model remains limited. These findings are confirmed by finite-sample simulations.
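
    A sketch of the robust plug-in idea follows, using scikit-learn's Minimum Covariance Determinant estimator as one possible robust estimator of location and scatter; the paper analyzes a family of such plug-ins rather than this particular code.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.covariance import MinCovDet

    def robust_fisher_directions(X, y):
        """Fisher LDA with each group's mean/covariance replaced by a robust
        (MCD) estimate, so training outliers have bounded influence."""
        classes = np.unique(y)
        locs, covs, ns = [], [], []
        for c in classes:
            mcd = MinCovDet(random_state=0).fit(X[y == c])
            locs.append(mcd.location_)
            covs.append(mcd.covariance_)
            ns.append(int((y == c).sum()))
        Sw = sum(n * C for n, C in zip(ns, covs)) / len(y)     # pooled robust scatter
        grand = np.average(locs, axis=0, weights=ns)
        Sb = sum(n * np.outer(m - grand, m - grand)
                 for n, m in zip(ns, locs)) / len(y)
        vals, vecs = eigh(Sb, Sw)                              # generalized eigenproblem
        return vecs[:, np.argsort(vals)[::-1]]                 # Fisher directions

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
    X[0] = 50.0                                 # a gross outlier the MCD should downweight
    y = np.repeat([0, 1], 50)
    W = robust_fisher_directions(X, y)[:, :1]   # leading robust Discriminant direction
    ```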

Tong Heng Lee - One of the best experts on this subject based on the ideXlab platform.

  • Face recognition using recursive Fisher Linear Discriminant
    IEEE Transactions on Image Processing, 2006
    Co-Authors: Cheng Xiang, X A Fan, Tong Heng Lee
    Abstract:

    The Fisher Linear Discriminant (FLD) has recently emerged as a more efficient approach than traditional principal component analysis for extracting features in many pattern classification problems. However, the constraint on the total number of features available from FLD has seriously limited its application to a large class of problems. To overcome this disadvantage, a recursive procedure for calculating the Discriminant features is proposed in this paper. The new algorithm retains the fundamental idea behind FLD of seeking the projection that best separates the data of different classes, while, in contrast to FLD, the number of features that may be derived is independent of the number of classes to be recognized. Extensive experiments comparing the new algorithm with traditional approaches have been carried out on a face recognition problem with the Yale database, in which the improvement in performance achieved by the new feature extraction scheme is significant.
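
    The recursion can be sketched by deflation (an illustrative reading, not necessarily the paper's exact update): after each Fisher direction is extracted, its component is removed from the data and the problem is re-solved, so the number of features is no longer capped at one less than the number of classes.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def scatters(X, y):
        """Between- and within-class scatter matrices of classical FLD."""
        classes, grand = np.unique(y), X.mean(axis=0)
        Sb = np.zeros((X.shape[1], X.shape[1]))
        Sw = np.zeros_like(Sb)
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sb += len(Xc) * np.outer(mc - grand, mc - grand)
            Sw += (Xc - mc).T @ (Xc - mc)
        return Sb, Sw

    def recursive_fld(X, y, n_features, eps=1e-6):
        """Deflation-style recursion: solve, extract, remove, repeat."""
        Xr, W = X - X.mean(axis=0), []
        for _ in range(n_features):
            Sb, Sw = scatters(Xr, y)
            vals, vecs = eigh(Sb, Sw + eps * np.eye(Sw.shape[0]))  # regularized GEV
            w = vecs[:, -1]                                        # top Fisher direction
            W.append(w)
            Xr = Xr - np.outer(Xr @ w, w) / (w @ w)                # deflate along w
        return np.array(W).T

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(1.5, 1, (40, 6))])
    y = np.repeat([0, 1], 40)
    W = recursive_fld(X, y, n_features=3)      # 3 features despite only 2 classes
    ```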

  • Face recognition using recursive Fisher Linear Discriminant
    International Conference on Communications Circuits and Systems, 2004
    Co-Authors: Cheng Xiang, X A Fan, Tong Heng Lee
    Abstract:

    The Fisher Linear Discriminant (FLD) has recently emerged as a more efficient approach than traditional principal component analysis (PCA) for extracting features in many pattern classification problems. However, the constraint on the total number of features available from FLD has seriously limited its application to a large class of problems. To overcome this disadvantage of FLD, a recursive procedure for calculating the Discriminant features is proposed in this paper. Extensive experiments comparing the new algorithm with the traditional PCA and FLD approaches have been carried out on a face recognition problem, in which the improvement in performance achieved by the new feature extraction scheme is significant.