Affinity Matrix - Explore the Science & Experts | ideXlab

Affinity Matrix

The Experts below are selected from a list of 324 Experts worldwide, ranked by the ideXlab platform

Shengli Xie – One of the best experts on this subject based on the ideXlab platform.

  • Flexible Affinity Matrix Learning for Unsupervised and Semisupervised Classification
    IEEE transactions on neural networks and learning systems, 2018
    Co-Authors: Xiaozhao Fang, Na Han, Wai Keung Wong, Shaohua Teng, Shengli Xie
    Abstract:

    In this paper, we propose a unified model called flexible Affinity Matrix learning (FAML) for unsupervised and semisupervised classification that exploits both the relationship among data and the clustering structure simultaneously. To capture the relationship among data, we exploit the self-expressiveness property of data to learn a structured Matrix in which the structures are induced by different norms. A rank constraint is imposed on the Laplacian Matrix of the desired Affinity Matrix so that the number of connected components exactly equals the number of clusters. Thus, the clustering structure is explicit in the learned Affinity Matrix. By making the estimated Affinity Matrix approximate the structured Matrix during learning, FAML adaptively adjusts the Affinity Matrix itself so that it captures both the relationship among data and the clustering structure well. FAML therefore has the potential to perform better than related methods. We derive optimization algorithms to solve the corresponding problems. Extensive unsupervised and semisupervised classification experiments on both synthetic data and real-world benchmark data sets show that FAML consistently outperforms state-of-the-art methods.
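
The rank constraint described above rests on a standard spectral-graph fact: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components. A minimal NumPy sketch of that fact (the toy affinity matrix below is illustrative; this is not the FAML optimization itself):

```python
import numpy as np

def laplacian_zero_eigs(W, tol=1e-8):
    # Unnormalized graph Laplacian L = D - W; the number of (near-)zero
    # eigenvalues equals the number of connected components of the graph.
    d = W.sum(axis=1)
    L = np.diag(d) - W
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))

# Toy affinity with two disconnected blocks -> two connected components.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)

print(laplacian_zero_eigs(W))  # 2
```

Constraining the Laplacian of the learned affinity matrix to have exactly k zero eigenvalues therefore forces the affinity graph to split into exactly k clusters.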

  • Sparse subspace clustering with jointly learning representation and Affinity Matrix
    Journal of the Franklin Institute, 2018
    Co-Authors: Ming Yin, Deyu Zeng, Shengli Xie
    Abstract:

    In recent years, sparse subspace clustering (SSC) has demonstrated its advantages in the subspace clustering field. Generally, SSC first learns the representation Matrix of data via self-expressiveness, then constructs the Affinity Matrix from the obtained sparse representation. Finally, the clustering result is obtained by applying spectral clustering to the Affinity Matrix. As described above, existing SSC algorithms often learn the sparse representation and the Affinity Matrix in separate steps. As a result, they may not reach the optimal clustering result because the two steps are independent. To this end, we propose a novel clustering algorithm that learns the representation and the Affinity Matrix jointly. With the proposed method, we can learn the sparse representation and the Affinity Matrix in a unified framework, where the procedure is guided by a graph regularizer derived from the Affinity Matrix. Experimental results show the proposed method achieves better clustering results than other subspace clustering approaches.
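
The three-stage pipeline the abstract describes (self-expressive representation, then affinity matrix, then spectral clustering) can be sketched in NumPy. Here a ridge-regularized least-squares representation stands in for the sparse solver, and a sign split of the Fiedler vector stands in for full spectral clustering; the toy data, regularization weight, and solver are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy 1-D subspaces (lines) in R^3, 10 points each, weakly overlapping.
u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([0.1, 1.0, 1.0])
X = np.hstack([np.outer(u1, rng.uniform(1, 2, 10)),
               np.outer(u2, rng.uniform(1, 2, 10))])  # shape (3, 20)

# Self-expressive step: C = argmin ||X - XC||_F^2 + lam ||C||_F^2
# (a least-squares stand-in for the sparse solver used in SSC proper).
lam = 0.1
n = X.shape[1]
C = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)

# Affinity matrix from the representation, then spectral bisection.
W = np.abs(C) + np.abs(C).T
d = W.sum(axis=1)
L = np.diag(d) - W
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]             # second-smallest eigenvector
labels = (fiedler > 0).astype(int)  # sign split = two-way clustering

print(labels)
```

The joint formulation in the paper replaces this sequential pipeline with a single optimization, so the affinity matrix can feed back into the representation instead of being fixed after step one.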

Yicong Zhou – One of the best experts on this subject based on the ideXlab platform.

  • Multi-view subspace clustering via simultaneously learning the representation tensor and Affinity Matrix
    Pattern Recognition, 2020
    Co-Authors: Yongyong Chen, Xiaolin Xiao, Yicong Zhou
    Abstract:

    Multi-view subspace clustering aims at separating data points into multiple underlying subspaces according to their multi-view features. Existing low-rank tensor representation-based multi-view subspace clustering algorithms are robust to noise and can preserve the high-order correlations of multi-view features. However, they may suffer from two common problems: (1) the local structures and different importance of each view feature are often neglected; (2) the low-rank representation tensor and Affinity Matrix are learned separately. To address these issues, we propose a unified framework to learn the Graph regularized Low-rank representation Tensor and Affinity Matrix (GLTA) for multi-view subspace clustering. In the proposed GLTA framework, the tensor singular value decomposition-based tensor nuclear norm is adopted to explore the high-order cross-view correlations. The manifold regularization is exploited to preserve the local structures embedded in high-dimensional space. The importance of different features is automatically measured when constructing the final Affinity Matrix. An iterative algorithm is developed to solve GLTA using the alternating direction method of multipliers. Extensive experiments on seven challenging datasets demonstrate the superiority of GLTA over the state-of-the-art methods.
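
The final step GLTA describes, combining per-view representations into one affinity matrix with automatically measured view weights, can be illustrated with a simplified sketch. The ridge representation and the inverse-reconstruction-error weighting rule below are stand-in assumptions; GLTA's actual weights and tensor nuclear norm come from its own objective:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy "views" of the same 8 samples (features x samples).
views = [rng.normal(size=(5, 8)), rng.normal(size=(7, 8))]

lam = 0.1
affinities, errors = [], []
for X in views:
    n = X.shape[1]
    # Per-view self-expressive representation (ridge stand-in for the
    # low-rank tensor step in GLTA).
    C = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    affinities.append((np.abs(C) + np.abs(C).T) / 2)
    errors.append(np.linalg.norm(X - X @ C))

# Auto-weighting heuristic: views with lower reconstruction error get
# higher weight (GLTA derives its weights from its objective instead).
w = 1.0 / np.array(errors)
w /= w.sum()
W = sum(wi * Ai for wi, Ai in zip(w, affinities))

print(np.round(w, 3), W.shape)
```

The point of learning the weights jointly, rather than fixing them as above, is that an unreliable view is down-weighted during optimization rather than after the fact.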

  • Jointly Learning Kernel Representation Tensor and Affinity Matrix for Multi-View Clustering
    IEEE Transactions on Multimedia, 2020
    Co-Authors: Yongyong Chen, Xiaolin Xiao, Yicong Zhou
    Abstract:

    Multi-view clustering refers to the task of partitioning numerous unlabeled multimedia data into several distinct clusters using multiple features. In this paper, we propose a novel nonlinear method called joint learning multi-view clustering (JLMVC) to jointly learn the kernel representation tensor and the Affinity Matrix. The proposed JLMVC has three advantages: (1) unlike existing low-rank representation-based multi-view clustering methods that learn the representation tensor and Affinity Matrix in two separate steps, JLMVC jointly learns them both; (2) using the "kernel trick," JLMVC can handle nonlinear data structures in various real applications; and (3) different from most existing methods that treat representations of all views equally, JLMVC automatically learns a reasonable weight for each view. Based on the alternating direction method of multipliers, an effective algorithm is designed to solve the proposed model. Extensive experiments on eight multimedia datasets demonstrate the superiority of the proposed JLMVC over state-of-the-art methods.
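
The "kernel trick" in advantage (2) works because the self-expressive objective depends on the data only through inner products, so it can be written entirely in terms of a Gram matrix. A hedged sketch with an RBF kernel (the kernel choice, bandwidth, and ridge penalty are illustrative assumptions, not JLMVC's formulation):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2): the kernel trick
    # lets the self-expressive step run in feature space using K only.
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2 * X.T @ X
    return np.exp(-gamma * np.clip(d2, 0, None))

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 12))  # 12 samples in R^4

K = rbf_kernel(X, gamma=0.5)
lam = 0.1
# Kernel ridge self-expression: minimizing ||phi(X) - phi(X)C||_F^2
# + lam ||C||_F^2 has the closed form C = (K + lam I)^{-1} K, which
# never touches phi(X) explicitly.
C = np.linalg.solve(K + lam * np.eye(12), K)
W = (np.abs(C) + np.abs(C).T) / 2  # affinity from the kernel representation

print(W.shape)
```

Swapping the linear Gram matrix for a nonlinear kernel is what lets methods of this kind handle data that is not a union of linear subspaces.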

  • ICME – Multi-view Clustering via Simultaneously Learning Graph Regularized Low-Rank Tensor Representation and Affinity Matrix
    2019 IEEE International Conference on Multimedia and Expo (ICME), 2019
    Co-Authors: Yongyong Chen, Xiaolin Xiao, Yicong Zhou
    Abstract:

    Low-rank tensor representation-based multi-view clustering has become an efficient method for data clustering due to its robustness to noise and its preservation of high-order correlations. However, existing algorithms may suffer from two common problems: (1) the local view-specific geometrical structures and the varying importance of features in different views are neglected; (2) the low-rank representation tensor and the Affinity Matrix are learned separately. To address these issues, we propose a novel framework to learn the Graph regularized Low-rank Tensor representation and the Affinity Matrix (GLTA) in a unified manner. In addition, manifold regularization is exploited to preserve the view-specific geometrical structures, and the varying importance of different features is automatically calculated when constructing the final Affinity Matrix. An efficient algorithm is designed to solve GLTA using the augmented Lagrange multiplier method. Extensive experiments on six real datasets demonstrate the superiority of GLTA over state-of-the-art methods.

Wai Keung Wong – One of the best experts on this subject based on the ideXlab platform.

  • Flexible Affinity Matrix Learning for Unsupervised and Semisupervised Classification
    IEEE transactions on neural networks and learning systems, 2018
    Co-Authors: Xiaozhao Fang, Na Han, Wai Keung Wong, Shaohua Teng, Shengli Xie
    Abstract:

    In this paper, we propose a unified model called flexible Affinity Matrix learning (FAML) for unsupervised and semisupervised classification that exploits both the relationship among data and the clustering structure simultaneously. To capture the relationship among data, we exploit the self-expressiveness property of data to learn a structured Matrix in which the structures are induced by different norms. A rank constraint is imposed on the Laplacian Matrix of the desired Affinity Matrix so that the number of connected components exactly equals the number of clusters. Thus, the clustering structure is explicit in the learned Affinity Matrix. By making the estimated Affinity Matrix approximate the structured Matrix during learning, FAML adaptively adjusts the Affinity Matrix itself so that it captures both the relationship among data and the clustering structure well. FAML therefore has the potential to perform better than related methods. We derive optimization algorithms to solve the corresponding problems. Extensive unsupervised and semisupervised classification experiments on both synthetic data and real-world benchmark data sets show that FAML consistently outperforms state-of-the-art methods.

  • Robust Semi-Supervised Subspace Clustering via Non-Negative Low-Rank Representation
    IEEE transactions on cybernetics, 2015
    Co-Authors: Xiaozhao Fang, Zhihui Lai, Wai Keung Wong
    Abstract:

    Low-rank representation (LRR) has been successfully applied to exploring the subspace structures of data. However, in previous LRR-based semi-supervised subspace clustering methods, the label information is not used to guide the construction of the Affinity Matrix, so the Affinity Matrix cannot deliver strong discriminant information. Moreover, these methods cannot guarantee an overall optimum because Affinity Matrix construction and subspace clustering are often independent steps. In this paper, we propose a robust semi-supervised subspace clustering method based on non-negative LRR (NNLRR) to address these problems. By combining the LRR framework and the Gaussian fields and harmonic functions method in a single optimization problem, the supervision information is explicitly incorporated to guide the Affinity Matrix construction, and Affinity Matrix construction and subspace clustering are accomplished in one step to guarantee the overall optimum. The Affinity Matrix is obtained by seeking a non-negative low-rank Matrix that represents each sample as a linear combination of the others. We also explicitly impose a sparsity constraint on the Affinity Matrix, so the Affinity Matrix obtained by NNLRR is non-negative, low-rank, and sparse. We introduce an efficient linearized alternating direction method with adaptive penalty to solve the corresponding optimization problem. Extensive experimental results demonstrate that NNLRR is effective for semi-supervised subspace clustering and more robust to different types of noise than other state-of-the-art methods.
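
The non-negative, low-rank, and sparse requirements on the affinity matrix are typically enforced through proximal steps inside linearized ADMM iterations. Below is a sketch of the two standard building blocks, singular value thresholding for the nuclear norm and a soft-threshold-plus-clamp for the non-negative sparse part; the thresholds and the test matrix are illustrative, not NNLRR's actual update schedule:

```python
import numpy as np

def svt(Z, tau):
    # Singular value thresholding: proximal operator of the nuclear norm,
    # which shrinks singular values and zeroes out the small ones.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nonneg_sparse_prox(Z, tau):
    # Soft-threshold then clamp at zero: proximal operator of
    # tau * ||A||_1 plus the non-negativity indicator.
    return np.maximum(Z - tau, 0.0)

rng = np.random.default_rng(3)
Z = rng.normal(size=(6, 6)) + 1.0  # stand-in for an ADMM iterate

low_rank = svt(Z, tau=2.0)           # low-rank update
A = nonneg_sparse_prox(low_rank, tau=0.05)  # non-negative sparse update

print(A.min() >= 0.0)
```

A full linearized ADMM solver alternates steps like these with multiplier and penalty updates until the affinity matrix satisfies all three constraints at once.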