Sparse Coding

The Experts below are selected from a list of 17,829 Experts worldwide, ranked by the ideXlab platform.

Liangtien Chia - One of the best experts on this subject based on the ideXlab platform.

  • Laplacian Sparse Coding, Hypergraph Laplacian Sparse Coding, and Applications
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
    Co-Authors: Ivor W Tsang, Liangtien Chia
    Abstract:

    Sparse Coding exhibits good performance in many computer vision applications. However, due to the overcomplete codebook and the independent Coding process, the locality and the similarity among the instances to be encoded are lost. To preserve such locality and similarity information, we propose a Laplacian Sparse Coding (LSc) framework. By incorporating a similarity-preserving term into the objective of Sparse Coding, our proposed Laplacian Sparse Coding alleviates the instability of Sparse codes. Furthermore, we propose Hypergraph Laplacian Sparse Coding (HLSc), which extends our Laplacian Sparse Coding to the case where the similarity among the instances is defined by a hypergraph. Specifically, HLSc simultaneously captures the similarity among the instances within the same hyperedge and encourages their Sparse codes to be similar to each other. Both Laplacian Sparse Coding and Hypergraph Laplacian Sparse Coding enhance the robustness of Sparse Coding. We apply Laplacian Sparse Coding to feature quantization in the Bag-of-Words image representation, where it outperforms standard Sparse Coding and achieves good performance on the image classification problem. Hypergraph Laplacian Sparse Coding is also successfully used to solve the semi-automatic image tagging problem. The good performance of these applications demonstrates the effectiveness of our proposed formulations in locality and similarity preservation.
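
To make the similarity-preserving term concrete: LSc augments the usual sparse coding objective with a graph-Laplacian penalty tr(S L S^T), which is small when instances connected in the similarity graph receive similar codes. Below is a minimal sketch (not the authors' implementation) of the sparse-code subproblem for a fixed codebook, solved with ISTA-style updates; the function name, step-size rule, and hyper-parameters are our own assumptions.

```python
import numpy as np

def laplacian_sparse_coding(X, B, W, lam=0.1, beta=0.1, n_iter=200):
    """Illustrative solver for the LSc code subproblem (fixed codebook B):
        min_S ||X - B S||_F^2 + lam * ||S||_1 + beta * tr(S L S^T),
    where L = D - W is the Laplacian of the instance-similarity graph.
    X: (d, n) data columns, B: (d, k) codebook, W: (n, n) similarities."""
    L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian
    S = np.zeros((B.shape[1], X.shape[1]))
    # ISTA step size from a Lipschitz bound on the smooth part
    lr = 1.0 / (2 * (np.linalg.norm(B, 2) ** 2 + beta * np.linalg.norm(L, 2)))
    for _ in range(n_iter):
        grad = 2 * B.T @ (B @ S - X) + 2 * beta * S @ L  # smooth-term gradient
        S = S - lr * grad
        # soft-thresholding handles the l1 term
        S = np.sign(S) * np.maximum(np.abs(S) - lr * lam, 0.0)
    return S
```

Setting beta = 0 recovers plain sparse coding, which makes the role of the Laplacian term easy to isolate in experiments.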

Xin Gao - One of the best experts on this subject based on the ideXlab platform.

  • Semi-Supervised Sparse Coding
    2014 International Joint Conference on Neural Networks (IJCNN), 2014
    Co-Authors: Jim Jing-yan Wang, Xin Gao
    Abstract:

    Sparse Coding approximates a data sample as a Sparse linear combination of basic codewords and uses the Sparse codes as new representations. In this paper, we investigate learning discriminative Sparse codes by Sparse Coding in a semi-supervised manner, where only a few training samples are labeled. Using the manifold structure spanned by both the labeled and unlabeled samples, together with the constraints provided by the labels of the labeled samples, we learn class labels for all the samples. Furthermore, to improve the discriminative ability of the learned Sparse codes, we assume that the class labels can be predicted directly from the Sparse codes by a linear classifier. By solving for the codebook, Sparse codes, class labels, and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised Sparse Coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed method over supervised Sparse Coding methods on partially labeled data sets.
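
As a rough illustration of the "unified objective, solved simultaneously" idea, the sketch below alternates between the Sparse codes, the codebook, a linear classifier, and label estimates. It is a simplified stand-in for the paper's algorithm, not a reproduction of it: the manifold term on the labels is omitted, and all names, losses, and update orders are our own assumptions.

```python
import numpy as np

def ista_codes(X, B, extra_grad, lam, n_iter=100):
    """Soft-thresholded gradient descent (ISTA) for a sparse-code subproblem.
    extra_grad(S) adds the gradient of any extra smooth term to the
    reconstruction gradient; the step size ignores that term for simplicity."""
    S = np.zeros((B.shape[1], X.shape[1]))
    lr = 1.0 / (2 * np.linalg.norm(B, 2) ** 2 + 1e-8)
    for _ in range(n_iter):
        g = 2 * B.T @ (B @ S - X) + extra_grad(S)
        S = S - lr * g
        S = np.sign(S) * np.maximum(np.abs(S) - lr * lam, 0.0)
    return S

def semi_supervised_sc(X, Y, mask, k=64, lam=0.1, gamma=1.0, n_outer=10, seed=0):
    """Alternating-minimization sketch: Sparse codes S, codebook B, linear
    classifier W, and label estimates F are updated in turn.
    X: (d, n) data; Y: (c, n) one-hot labels, valid where the boolean
    mask (length n) is True and e.g. zero elsewhere."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = rng.standard_normal((d, k))
    B /= np.linalg.norm(B, axis=0)
    F = Y.astype(float).copy()                  # current label estimates
    W = np.zeros((Y.shape[0], k))
    for _ in range(n_outer):
        # 1) codes: reconstruction + sparsity + classifier-consistency term
        S = ista_codes(X, B, lambda S: 2 * gamma * W.T @ (W @ S - F), lam)
        # 2) codebook: regularized least squares, then renormalize atoms
        B = X @ S.T @ np.linalg.pinv(S @ S.T + 1e-6 * np.eye(k))
        B /= (np.linalg.norm(B, axis=0) + 1e-12)
        # 3) classifier: ridge regression from codes to current labels
        W = F @ S.T @ np.linalg.pinv(S @ S.T + 1e-6 * np.eye(k))
        # 4) labels: keep the given ones, predict the rest from the classifier
        F = np.where(mask[None, :], Y, W @ S)
    return B, S, W, F
```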

  • Discriminative Sparse Coding on multi-manifolds
    Knowledge-Based Systems, 2013
    Co-Authors: Jim Jing-yan Wang, Halima Bensmail, Nan Yao, Xin Gao
    Abstract:

    Sparse Coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging, and bioinformatics. However, conventional Sparse Coding algorithms and their manifold-regularized variants (graph Sparse Coding and Laplacian Sparse Coding) learn codebooks and codes in an unsupervised manner and neglect the class information available in the training set. To address this problem, we propose a novel discriminative Sparse Coding method based on multi-manifolds that learns discriminative class-conditioned codebooks and Sparse codes from both the data feature space and the class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate Sparse Coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins between different classes. Lastly, we present a data sample-manifold matching-based strategy to classify unlabeled data samples. Experimental results on somatic mutation identification and breast tumor classification from ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach.
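
The "partition by class, learn class-conditioned codebooks" step can be sketched as follows, reusing the ista_codes helper from the previous sketch. This is a deliberately simplified variant: classification falls back to smallest reconstruction error, and the paper's manifold-margin maximization between classes is not modeled here.

```python
import numpy as np

def learn_class_codebooks(X, y, k=32, lam=0.1, n_outer=5, seed=0):
    """Sketch of class-conditioned codebook learning: one small dictionary
    per class, fit by alternating sparse coding (ista_codes, defined in the
    previous sketch) and least-squares codebook updates on that class's
    samples only.  X: (d, n) data, y: length-n integer class labels."""
    rng = np.random.default_rng(seed)
    books = {}
    for c in np.unique(y):
        Xc = X[:, y == c]
        B = rng.standard_normal((X.shape[0], k))
        B /= np.linalg.norm(B, axis=0)
        for _ in range(n_outer):
            S = ista_codes(Xc, B, lambda S: 0.0, lam)        # codes for class c
            B = Xc @ S.T @ np.linalg.pinv(S @ S.T + 1e-6 * np.eye(k))
            B /= (np.linalg.norm(B, axis=0) + 1e-12)         # unit-norm atoms
        books[c] = B
    return books

def classify_by_reconstruction(x, books, lam=0.1):
    """Assign a test sample x (length-d vector) to the class whose codebook
    reconstructs it best: a simple stand-in for the paper's
    sample-to-manifold matching strategy."""
    errors = {}
    for c, B in books.items():
        s = ista_codes(x[:, None], B, lambda S: 0.0, lam)
        errors[c] = np.linalg.norm(x[:, None] - B @ s)
    return min(errors, key=errors.get)
```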

Conrad Sanderson - One of the best experts on this subject based on the ideXlab platform.

  • Sparse Coding on symmetric positive definite manifolds using Bregman divergences
    IEEE Transactions on Neural Networks and Learning Systems, 2016
    Co-Authors: Mehrtash Harandi, Brian C. Lovell, Richard Hartley, Conrad Sanderson
    Abstract:

    This paper introduces Sparse Coding and dictionary learning for symmetric positive definite (SPD) matrices, which are often used in machine learning, computer vision, and related areas. Unlike traditional Sparse Coding schemes that work in vector spaces, in this paper we discuss how SPD matrices can be described by a Sparse combination of dictionary atoms, where the atoms are also SPD matrices. We propose to perform Sparse Coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing Sparse Coding but also to an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification, and texture categorization.
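
To illustrate the kernel-space view, here is a sketch of coding one SPD matrix against SPD dictionary atoms using the Stein (S-)divergence, a symmetrized log-det divergence from the Bregman family. The paper's exact divergences, kernel choices, and dictionary-update scheme may differ; the kernel below is positive definite only for suitable beta, and all names are illustrative.

```python
import numpy as np

def stein_divergence(A, B):
    """Stein (S-)divergence between SPD matrices:
        S(A, B) = log det((A + B) / 2) - 0.5 * (log det A + log det B)."""
    _, ld_mid = np.linalg.slogdet((A + B) / 2)
    _, ld_a = np.linalg.slogdet(A)
    _, ld_b = np.linalg.slogdet(B)
    return ld_mid - 0.5 * (ld_a + ld_b)

def kernel_sparse_code(x_spd, atoms, beta=1.0, lam=0.1, n_iter=200):
    """Code one SPD matrix against SPD dictionary atoms in the kernel space
    induced by k(A, B) = exp(-beta * S(A, B)).  Dropping the constant
    k(x, x), the objective is
        min_s  s^T K s - 2 s^T kx + lam * ||s||_1,
    where K is the atom Gram matrix and kx the atom-to-sample kernel vector;
    it is solved here by ISTA."""
    K = np.array([[np.exp(-beta * stein_divergence(a, b)) for b in atoms]
                  for a in atoms])
    kx = np.array([np.exp(-beta * stein_divergence(a, x_spd)) for a in atoms])
    s = np.zeros(len(atoms))
    lr = 1.0 / (2 * np.linalg.norm(K, 2) + 1e-12)
    for _ in range(n_iter):
        g = 2 * (K @ s - kx)
        s = s - lr * g
        s = np.sign(s) * np.maximum(np.abs(s) - lr * lam, 0.0)
    return s
```

Because the data enter only through kernel evaluations, the same loop works for any divergence whose induced kernel is positive definite, which is the practical appeal of the embedding approach.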

  • Sparse Coding on symmetric positive definite manifolds using Bregman divergences
    arXiv: Computer Vision and Pattern Recognition, 2014
    Co-Authors: Mehrtash Harandi, Brian C. Lovell, Richard Hartley, Conrad Sanderson
    Abstract:

    This paper introduces Sparse Coding and dictionary learning for Symmetric Positive Definite (SPD) matrices, which are often used in machine learning, computer vision and related areas. Unlike traditional Sparse Coding schemes that work in vector spaces, in this paper we discuss how SPD matrices can be described by a Sparse combination of dictionary atoms, where the atoms are also SPD matrices. We propose to perform Sparse Coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing Sparse Coding, but also to an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification and texture categorization.

Ivor W Tsang - One of the best experts on this subject based on the ideXlab platform.

  • Laplacian Sparse Coding, Hypergraph Laplacian Sparse Coding, and Applications
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
    Co-Authors: Ivor W Tsang, Liangtien Chia
    Abstract:

    Sparse Coding exhibits good performance in many computer vision applications. However, due to the overcomplete codebook and the independent Coding process, the locality and the similarity among the instances to be encoded are lost. To preserve such locality and similarity information, we propose a Laplacian Sparse Coding (LSc) framework. By incorporating a similarity-preserving term into the objective of Sparse Coding, our proposed Laplacian Sparse Coding alleviates the instability of Sparse codes. Furthermore, we propose Hypergraph Laplacian Sparse Coding (HLSc), which extends our Laplacian Sparse Coding to the case where the similarity among the instances is defined by a hypergraph. Specifically, HLSc simultaneously captures the similarity among the instances within the same hyperedge and encourages their Sparse codes to be similar to each other. Both Laplacian Sparse Coding and Hypergraph Laplacian Sparse Coding enhance the robustness of Sparse Coding. We apply Laplacian Sparse Coding to feature quantization in the Bag-of-Words image representation, where it outperforms standard Sparse Coding and achieves good performance on the image classification problem. Hypergraph Laplacian Sparse Coding is also successfully used to solve the semi-automatic image tagging problem. The good performance of these applications demonstrates the effectiveness of our proposed formulations in locality and similarity preservation.

Taisong Jin - One of the best experts on this subject based on the ideXlab platform.

  • Multiple graph regularized Sparse Coding and multiple hypergraph regularized Sparse Coding for image representation
    Neurocomputing, 2015
    Co-Authors: Taisong Jin
    Abstract:

    Manifold-regularized Sparse Coding shows promising performance in various applications. A key issue in practice is how to adaptively select suitable graph hyper-parameters for the manifold-learning component of the Sparse Coding task. Usually, cross-validation is applied, but it does not scale well and easily leads to overfitting. In this article, multiple graph Sparse Coding (MGrSc) and multiple Hypergraph Sparse Coding (MHGrSc) for image representation are proposed. Inspired by the Ensemble Manifold Regularizer, we formulate multiple graph and multiple Hypergraph regularizers that guarantee the smoothness of Sparse codes along the geodesics of the data manifold, which is characterized by fusing the multiple previously given graph Laplacians or Hypergraph Laplacians. The proposed regularizers are then incorporated into the traditional Sparse Coding framework, resulting in two unified Sparse Coding objective functions. Alternating optimization is used to optimize the objectives, yielding two novel manifold-regularized Sparse Coding algorithms. The proposed methods learn the composite manifold and the Sparse codes jointly, so the graph hyper-parameters of the manifold learning are set fully automatically. Image clustering experiments on real-world datasets demonstrate that the proposed Sparse Coding methods are superior to state-of-the-art methods.
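
To make the ensemble-of-Laplacians idea concrete: with r_j = tr(S L_j S^T), the weight subproblem over mu is a small quadratic program on the probability simplex, as in the Ensemble Manifold Regularizer. The sketch below shows one such mu-update via Euclidean projection onto the simplex; the exact update rule is our own assumption, and in an alternating scheme it would pair with the laplacian_sparse_coding step from the first sketch, using the fused Laplacian L = sum_j mu_j L_j.

```python
import numpy as np

def multi_graph_weights(S, laplacians, rho=1.0):
    """One EMR-style update of the graph weights mu:
        min_mu  sum_j mu_j * tr(S L_j S^T) + rho * ||mu||^2
        s.t.    mu >= 0, sum_j mu_j = 1.
    With r_j = tr(S L_j S^T), the objective equals rho * ||mu + r/(2 rho)||^2
    up to a constant, so the solution is the Euclidean projection of
    -r/(2 rho) onto the probability simplex."""
    r = np.array([np.trace(S @ L @ S.T) for L in laplacians])
    mu = -r / (2 * rho)                          # unconstrained minimizer
    # Euclidean projection onto the simplex (sort-based algorithm)
    u = np.sort(mu)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(mu) + 1) > (css - 1))[0][-1]
    tau = (css[k] - 1) / (k + 1)
    return np.maximum(mu - tau, 0.0)
```

The ||mu||^2 term keeps the weights from collapsing onto the single graph with the smallest smoothness penalty, which is what makes the composite manifold a genuine fusion rather than a hard selection.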