Rank Tensor

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 19,128 experts worldwide, ranked by the ideXlab platform

Xiaochun Cao - One of the best experts on this subject based on the ideXlab platform.

  • Low-Rank Tensor Constrained Multiview Subspace Clustering
    International Conference on Computer Vision, 2015
    Co-Authors: Changqing Zhang, Si Liu, Guangcan Liu, Xiaochun Cao
    Abstract:

    In this paper, we explore the problem of multiview subspace clustering. We introduce a low-rank tensor constraint to exploit the complementary information from multiple views and, accordingly, establish a novel method called Low-Rank Tensor constrained Multiview Subspace Clustering (LT-MSC). Our method regards the subspace representation matrices of the different views as a tensor, which neatly captures the high-order correlations underlying multiview data. The tensor is then equipped with a low-rank constraint, which models the cross-view information, reduces the redundancy of the learned subspace representations, and improves clustering accuracy. The inference of the affinity matrix for clustering is formulated as a tensor nuclear-norm minimization problem, constrained by an additional L2,1-norm regularizer and some linear equalities. The minimization problem is convex and can thus be solved efficiently by an Augmented Lagrangian Alternating Direction Minimization (AL-ADM) method. Extensive experimental results on four benchmark datasets show the effectiveness of the proposed LT-MSC method.
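The low-rank constraint on the stacked representation tensor described above is typically enforced through singular value thresholding (SVT) of tensor unfoldings. Below is a minimal NumPy sketch of that building block, assuming a sum-of-nuclear-norms surrogate over the mode unfoldings; it is illustrative only and not the authors' LT-MSC implementation:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Stack per-view subspace representation matrices into a 3-way tensor
# (views along the third mode), as LT-MSC does.
views = [rng.standard_normal((20, 20)) for _ in range(3)]
Z = np.stack(views, axis=2)

# One proximal step toward a low-rank tensor: threshold each mode
# unfolding and fold the result back.
Z_low = Z.copy()
for mode in range(3):
    shape = np.moveaxis(Z_low, mode, 0).shape
    M = svt(unfold(Z_low, mode), tau=5.0)
    Z_low = np.moveaxis(M.reshape(shape), 0, mode)
```

Shrinking the singular values of every unfolding suppresses redundancy shared across views while keeping the dominant structure.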

Ce Zhu - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian Low-Rank Tensor Ring Model for Image Completion
    arXiv: Machine Learning, 2020
    Co-Authors: Zhen Long, Ce Zhu, Jiani Liu, Yipeng Liu
    Abstract:

    The low-rank tensor ring model is powerful for image completion, which recovers entries that go missing during data acquisition and transformation. Recently proposed tensor ring (TR) based completion algorithms generally solve the low-rank optimization problem by alternating least squares with predefined ranks, which can easily lead to overfitting when the unknown ranks are set too large and only a few measurements are available. In this paper, we present a Bayesian low-rank tensor ring model for image completion that automatically learns the low-rank structure of the data. A multiplicative interaction model is developed for the low-rank tensor ring decomposition, where the core factors are enforced to be sparse by assuming their entries obey a Student-t distribution. Compared with most existing methods, the proposed one is free of parameter tuning, and the TR ranks can be obtained by Bayesian inference. Numerical experiments on synthetic data, color images of different sizes, and the YaleFace dataset B with respect to one pose show that the proposed approach outperforms state-of-the-art ones, especially in terms of recovery accuracy.
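For context, a tensor ring (TR) model represents each entry of a tensor as the trace of a product of core slices. A small NumPy sketch of TR reconstruction (the decomposition itself, not the Bayesian inference of the paper; the shapes and ranks here are arbitrary choices):

```python
import numpy as np

def tr_to_full(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    Each core has shape (r_k, I_k, r_{k+1}); the last rank index wraps
    around to the first, and the ring is closed by a trace."""
    full = cores[0]
    for G in cores[1:]:
        # Contract the running factor with the next core's left rank index.
        full = np.einsum('...a,aib->...ib', full, G)
    # Close the ring: trace over the first and last rank indices.
    return np.einsum('a...a->...', full)

rng = np.random.default_rng(1)
ranks, dims = [2, 3, 2], [4, 5, 6]
cores = [rng.standard_normal((ranks[k], dims[k], ranks[(k + 1) % 3]))
         for k in range(3)]
T = tr_to_full(cores)
```

Entry-wise, `T[i, j, k]` equals `trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])`, which is the standard TR definition.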


  • Robust Low-Rank Tensor Ring Completion
    IEEE Transactions on Computational Imaging, 2020
    Co-Authors: Huyan Huang, Yipeng Liu, Zhen Long, Ce Zhu
    Abstract:

    Low-rank tensor completion recovers missing entries based on different tensor decompositions. Due to its outstanding performance in exploiting higher-order data structure, the low-rank tensor ring has been applied to tensor completion. To deal with its sensitivity to a sparse component, as in tensor principal component analysis, we propose robust tensor ring completion (RTRC), which separates the latent low-rank tensor component from the sparse component given a limited number of measurements. The low-rank tensor component is constrained by the weighted sum of the nuclear norms of its balanced unfoldings, while the sparse component is regularized by its $\ell_1$ norm. We analyze the RTRC model and give an exact recovery guarantee. The alternating direction method of multipliers is used to divide the problem into several sub-problems with fast solutions. In numerical experiments, we verify the recovery condition of the proposed method on synthetic data, and show that it outperforms state-of-the-art methods in terms of both accuracy and computational complexity on a number of real-world tasks, i.e., light-field image recovery, shadow removal in face images, and background extraction in color video.
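The low-rank-plus-sparse separation at the heart of RTRC can be illustrated on a single unfolding with two proximal operators: SVT for the nuclear norm and soft-thresholding for the $\ell_1$ norm. This toy alternating scheme is a stand-in for the paper's ADMM over balanced TR unfoldings, not the actual algorithm:

```python
import numpy as np

def soft(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
# Synthetic observation: a rank-2 component plus a few large sparse outliers.
L_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
S_true = np.zeros((30, 30))
S_true.flat[rng.choice(900, size=20, replace=False)] = 10.0
X = L_true + S_true

# Alternate the two proximal maps to split X into low-rank L and sparse S.
L = np.zeros_like(X)
S = np.zeros_like(X)
for _ in range(50):
    L = svt(X - S, tau=1.0)
    S = soft(X - L, tau=1.0)
```

The thresholds here are arbitrary; in RTRC they are driven by the ADMM penalty and regularization weights.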

  • Low-Rank Tensor Completion for Multiway Visual Data
    Signal Processing, 2019
    Co-Authors: Zhen Long, Yipeng Liu, Longxi Chen, Ce Zhu
    Abstract:

    Tensor completion recovers missing entries of multiway data; entries are often lost during data acquisition and transformation. In this paper, we provide an overview of recent developments in low-rank tensor completion for estimating the missing components of visual data, e.g., color images and videos. First, we categorize these methods into two groups based on their optimization models: one optimizes the factors of a tensor decomposition with a predefined tensor rank, while the other iteratively updates the estimated tensor by minimizing the tensor rank. We also summarize the corresponding algorithms for solving these optimization problems in detail. Numerical experiments demonstrate the performance of the different methods when applied to color image and video processing.
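A minimal example of the second category (rank minimization with re-imposed observations), using a hard rank truncation of one unfolding; this is a generic sketch, not any specific surveyed algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
# Ground-truth CP-rank-2 tensor with ~30% of its entries missing.
A, B, C = (rng.standard_normal((8, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
mask = rng.random(T.shape) < 0.7

# Alternate a hard low-rank projection (truncated SVD of the mode-0
# unfolding) with re-imposing the observed entries.
X = np.where(mask, T, 0.0)
for _ in range(100):
    U, s, Vt = np.linalg.svd(X.reshape(8, -1), full_matrices=False)
    low = (U[:, :2] * s[:2]) @ Vt[:2]            # rank-2 truncation
    X = np.where(mask, T, low.reshape(T.shape))  # keep known entries
err = np.linalg.norm(X - T) / np.linalg.norm(T)
```

Methods in the first category would instead parameterize `X` by decomposition factors and update those directly.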

Changqing Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Low-Rank Tensor Constrained Multiview Subspace Clustering
    International Conference on Computer Vision, 2015
    Co-Authors: Changqing Zhang, Si Liu, Guangcan Liu, Xiaochun Cao
    Abstract:

    In this paper, we explore the problem of multiview subspace clustering. We introduce a low-rank tensor constraint to exploit the complementary information from multiple views and, accordingly, establish a novel method called Low-Rank Tensor constrained Multiview Subspace Clustering (LT-MSC). Our method regards the subspace representation matrices of the different views as a tensor, which neatly captures the high-order correlations underlying multiview data. The tensor is then equipped with a low-rank constraint, which models the cross-view information, reduces the redundancy of the learned subspace representations, and improves clustering accuracy. The inference of the affinity matrix for clustering is formulated as a tensor nuclear-norm minimization problem, constrained by an additional L2,1-norm regularizer and some linear equalities. The minimization problem is convex and can thus be solved efficiently by an Augmented Lagrangian Alternating Direction Minimization (AL-ADM) method. Extensive experimental results on four benchmark datasets show the effectiveness of the proposed LT-MSC method.

Zhouchen Lin - One of the best experts on this subject based on the ideXlab platform.

  • Unified Graph and Low-Rank Tensor Learning for Multi-View Clustering
    Proceedings of the AAAI Conference on Artificial Intelligence, 2020
    Co-Authors: Xingyu Xie, Liqiang Nie, Zhouchen Lin, Hongbin Zha
    Abstract:

    Multi-view clustering aims to exploit the information in multiple views to improve clustering performance. Many existing methods compute the affinity matrix by low-rank representation (LRR) and investigate the relationships between views pairwise. However, LRR suffers from a high computational cost in the self-representation optimization. Moreover, compared with pairwise views, the tensor formed from all views' representations is more suitable for capturing the high-order correlations among the views. To address these two issues, in this paper we propose unified graph and low-rank tensor learning (UGLTL) for multi-view clustering. Specifically, on the one hand, we learn the view-specific affinity matrix based on projected graph learning. On the other hand, we reorganize the affinity matrices into tensor form and learn its intrinsic tensor based on low-rank tensor approximation. Finally, we unify these two terms and jointly learn the optimal projection matrices, affinity matrices, and intrinsic low-rank tensor. We also propose an efficient algorithm to iteratively optimize the proposed model. To evaluate the performance of the proposed method, we conduct extensive experiments on multiple benchmarks across different scenarios and sizes. Compared with state-of-the-art approaches, our method achieves much better performance.
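The pipeline described above (per-view graphs stacked into a tensor, then a low-rank tensor approximation) can be mimicked with a Gaussian-kernel affinity and a truncated HOSVD; everything below (the kernel, the ranks, the toy data) is a hypothetical stand-in for UGLTL's learned quantities:

```python
import numpy as np

rng = np.random.default_rng(9)
# Two toy "views" of the same 30 points drawn from 3 clusters.
labels = np.repeat([0, 1, 2], 10)
views = [rng.standard_normal((30, 5))
         + 4 * np.eye(3)[labels] @ rng.standard_normal((3, 5))
         for _ in range(2)]

def affinity(X, sigma=4.0):
    """Gaussian-kernel affinity matrix of the rows of X."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Stack the per-view affinities into a 3-way tensor (views on mode 2).
G = np.stack([affinity(V) for V in views], axis=2)

def hosvd_truncate(T, r):
    """Low-rank tensor approximation: truncated HOSVD on modes 0 and 1."""
    U0 = np.linalg.svd(T.reshape(T.shape[0], -1))[0][:, :r]
    U1 = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(T.shape[1], -1))[0][:, :r]
    core = np.einsum('ia,jb,ijk->abk', U0, U1, T)
    return np.einsum('ia,jb,abk->ijk', U0, U1, core)

G_low = hosvd_truncate(G, r=3)
fused = G_low.mean(axis=2)   # consensus affinity across views
```

UGLTL learns the projections and affinities jointly rather than fixing them as here.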

  • Exact Low-Tubal-Rank Tensor Recovery from Gaussian Measurements
    arXiv: Machine Learning, 2018
    Co-Authors: Jiashi Feng, Zhouchen Lin, Shuicheng Yan
    Abstract:

    The recently proposed Tensor Nuclear Norm (TNN) [Lu et al., 2016; 2018a] is an interesting convex penalty induced by the tensor SVD [Kilmer and Martin, 2011]. It plays a role similar to the matrix nuclear norm, which is the convex surrogate of the matrix rank. Considering that TNN-based Tensor Robust PCA [Lu et al., 2018a] is an elegant extension of Robust PCA with a similarly tight recovery bound, it is natural to solve other low-rank tensor recovery problems extended from the matrix cases. However, the extensions and proofs are generally tedious. The general atomic norm provides a unified view of norms induced by low-complexity structures, e.g., the $\ell_1$-norm and the nuclear norm. Sharp estimates of the number of generic measurements required for exact recovery based on the atomic norm are known in the literature. In this work, with a careful choice of the atomic set, we prove that TNN is a special atomic norm. Then, by computing the Gaussian width of a certain cone, which is necessary for the sharp estimate, we obtain a simple bound for guaranteed low-tubal-rank tensor recovery from Gaussian measurements. Specifically, we show that by solving a TNN minimization problem, the underlying tensor of size $n_1\times n_2\times n_3$ with tubal rank $r$ can be exactly recovered when the given number of Gaussian measurements is $O(r(n_1+n_2-r)n_3)$. This is order optimal when compared with the degrees of freedom $r(n_1+n_2-r)n_3$. Beyond the Gaussian mapping, we also give a recovery guarantee for tensor completion based on the uniform random mapping by TNN minimization. Numerical experiments verify our theoretical results.
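The TNN used in this line of work can be computed directly from its definition: a DFT along the third mode followed by matrix SVDs of the frontal slices. A small sketch (following the Lu et al. normalization by $n_3$; the example tensor is a hypothetical tubal-rank-1 construction):

```python
import numpy as np

def tensor_nuclear_norm(T):
    """TNN induced by the t-SVD: DFT along the third mode, then the
    average of the nuclear norms of the frontal slices."""
    F = np.fft.fft(T, axis=2)
    return sum(np.linalg.svd(F[:, :, k], compute_uv=False).sum()
               for k in range(T.shape[2])) / T.shape[2]

rng = np.random.default_rng(4)
# A tubal-rank-1 tensor: t-product of two thin tensors, formed by
# slice-wise multiplication in the Fourier domain.
a = rng.standard_normal((10, 1, 5))
b = rng.standard_normal((1, 10, 5))
prod_f = np.einsum('ipk,pjk->ijk',
                   np.fft.fft(a, axis=2), np.fft.fft(b, axis=2))
low = np.fft.ifft(prod_f, axis=2).real
tnn = tensor_nuclear_norm(low)
```

Every frontal slice of the transformed tensor is rank 1, which is exactly what tubal rank 1 means.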

  • Tensor Factorization for Low-Rank Tensor Completion
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Pan Zhou, Zhouchen Lin, Chao Zhang
    Abstract:

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, achieving state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is costly and thus cannot efficiently handle tensor data, which are naturally large-scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update the two smaller tensors, which is more efficient than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art approaches, including TNN and matricization methods.
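A rough sketch of the factorized approach, in the spirit of the paper: maintain two small factor tensors, refit them slice-wise in the Fourier domain (where the t-product is slice-wise matrix multiplication), and re-impose the observed entries. The details (least-squares refits, the toy sizes) are illustrative assumptions, not the authors' exact updates:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, n3, r = 12, 12, 4, 2

def tprod(A, B):
    """t-product: slice-wise matrix product in the Fourier domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.fft.ifft(np.einsum('irk,rjk->ijk', Af, Bf), axis=2).real

# Ground truth with tubal rank r, plus a random observation mask.
T = tprod(rng.standard_normal((n1, r, n3)), rng.standard_normal((r, n2, n3)))
mask = rng.random(T.shape) < 0.7

X = np.where(mask, T, 0.0)
A = rng.standard_normal((n1, r, n3))   # left factor, updated below
for _ in range(50):
    Xf, Af = np.fft.fft(X, axis=2), np.fft.fft(A, axis=2)
    # Refit the right factor slice-wise: min ||Af B - Xf|| per frequency.
    Bf = np.stack([np.linalg.lstsq(Af[:, :, k], Xf[:, :, k], rcond=None)[0]
                   for k in range(n3)], axis=2)
    # Refit the left factor: min ||A Bf - Xf|| via the conjugate transpose.
    Af = np.stack([np.linalg.lstsq(Bf[:, :, k].conj().T, Xf[:, :, k].conj().T,
                                   rcond=None)[0].conj().T
                   for k in range(n3)], axis=2)
    A = np.fft.ifft(Af, axis=2).real
    B = np.fft.ifft(Bf, axis=2).real
    X = np.where(mask, T, tprod(A, B))   # re-impose observed entries
```

Only the two small factors are updated each iteration, which is the source of the efficiency gain over repeated t-SVDs.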

Yan Liu - One of the best experts on this subject based on the ideXlab platform.

  • CAMSAP - Low-Rank Tensor Regression: Scalability and Applications
    2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2017
    Co-Authors: Yan Liu
    Abstract:

    With the development of sensor and satellite technologies, massive amounts of multiway data have emerged in many applications. Low-rank tensor regression, as a powerful technique for analyzing tensor data, has attracted significant interest from the machine learning community. In this paper, we discuss a series of fast algorithms for solving low-rank tensor regression in different learning scenarios, including (a) a greedy algorithm for batch learning; (b) the Accelerated Low-Rank Tensor Online Learning (ALTO) algorithm for online learning; and (c) subsampled tensor projected gradient for memory-efficient learning.
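Item (c) can be illustrated with a projected-gradient loop: a gradient step on the squared loss followed by a hard low-rank projection of the coefficient array. This minimal sketch uses a matrix-shaped coefficient for brevity; it is not the paper's algorithm verbatim:

```python
import numpy as np

rng = np.random.default_rng(6)
p1, p2, n, r = 8, 8, 200, 2
W_true = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
X = rng.standard_normal((n, p1, p2))          # n covariate arrays
y = np.einsum('nij,ij->n', X, W_true)         # noiseless responses

# Projected gradient: gradient step on the squared loss, then a hard
# rank-r projection of the coefficient array.
W = np.zeros((p1, p2))
step = 0.5 / n
for _ in range(200):
    resid = np.einsum('nij,ij->n', X, W) - y
    W = W - step * np.einsum('n,nij->ij', resid, X)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W = (U[:, :r] * s[:r]) @ Vt[:r]
err = np.linalg.norm(W - W_true) / np.linalg.norm(W_true)
```

For a genuinely tensor-valued coefficient, the rank projection would act on a tensor decomposition (e.g., per-mode truncation) instead of a single SVD.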

  • Accelerated Online Low-Rank Tensor Learning for Multivariate Spatiotemporal Streams
    International Conference on Machine Learning, 2015
    Co-Authors: Dehua Cheng, Yan Liu
    Abstract:

    Low-rank tensor learning has many applications in machine learning, and a series of batch learning algorithms have achieved great success. However, in many emerging applications, such as climate data analysis, we are confronted with large-scale tensor streams, which pose significant challenges to existing solutions. In this paper, we propose an accelerated online low-rank tensor learning algorithm (ALTO) to solve this problem. At each iteration, we project the current tensor to a low-dimensional tensor using the information from the previous low-rank tensor, in order to perform efficient tensor decomposition, and then recover the low-rank approximation of the current tensor. By randomly selecting additional subspaces, we overcome the issue of local optima at an extremely low computational cost. We evaluate our method on two tasks in online multivariate spatio-temporal analysis: online forecasting and multi-model ensembling. Experimental results show that our method achieves comparable predictive accuracy with a significant speed-up.
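A drastically simplified streaming analogue of the projection idea: keep a running second-moment matrix of the incoming (vectorized) observations and track its top-r eigenvectors as the current low-rank subspace. This is a generic sketch, not ALTO itself:

```python
import numpy as np

rng = np.random.default_rng(7)
n, r = 20, 3
U_true = np.linalg.qr(rng.standard_normal((n, r)))[0]  # ground-truth subspace

# Stream of low-rank observations; accumulate the second-moment matrix
# online and keep its top-r eigenvectors as the subspace estimate.
C = np.zeros((n, n))
for t in range(200):
    x = U_true @ rng.standard_normal(r)   # new observation, low-rank by design
    C += np.outer(x, x)                   # O(n^2) streaming update
U_est = np.linalg.eigh(C)[1][:, -r:]      # top-r eigenvectors (eigh: ascending)
# Subspace alignment with the truth: 1.0 means identical subspaces.
align = np.linalg.norm(U_true.T @ U_est) / np.sqrt(r)
```

ALTO avoids even this O(n^2) state by projecting through the previous low-rank factors and randomizing extra subspace directions.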

  • Fast Multivariate Spatio-temporal Analysis via Low-Rank Tensor Learning
    NIPS, 2014
    Co-Authors: Mohammad Taha Bahadori, Rose Yu, Yan Liu
    Abstract:

    Accurate and efficient analysis of multivariate spatio-temporal data is critical in climatology, geology, and sociology applications. Existing models usually assume simple inter-dependence among variables, space, and time, and are computationally expensive. We propose a unified low-rank tensor learning framework for multivariate spatio-temporal analysis, which can conveniently incorporate different properties of spatio-temporal data, such as spatial clustering and shared structure among variables. We demonstrate how the general framework can be applied to cokriging and forecasting tasks, and develop an efficient greedy algorithm to solve the resulting optimization problem with a convergence guarantee. We conduct experiments on both synthetic datasets and real application datasets to demonstrate that our method is not only significantly faster than existing methods but also achieves lower estimation error.
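The greedy strategy can be sketched as repeatedly extracting the dominant rank-1 term of the residual via alternating power iterations (a generic greedy low-rank fit; the paper's greedy algorithm operates on the cokriging/forecasting objective rather than this plain approximation loss):

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy spatio-temporal data: (locations, time, variables), CP rank 2.
T = sum(np.einsum('i,j,k->ijk',
                  rng.standard_normal(15),
                  rng.standard_normal(40),
                  rng.standard_normal(3)) for _ in range(2))

def top_rank1(R, iters=30):
    """Dominant rank-1 term of R via alternating power iterations."""
    u, v, w = (rng.standard_normal(d) for d in R.shape)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', R, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', R, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', R, u, v); w /= np.linalg.norm(w)
    s = np.einsum('ijk,i,j,k->', R, u, v, w)
    return s * np.einsum('i,j,k->ijk', u, v, w)

# Greedy fitting: repeatedly peel off the residual's dominant rank-1 term.
approx = np.zeros_like(T)
for _ in range(4):
    approx += top_rank1(T - approx)
err = np.linalg.norm(T - approx) / np.linalg.norm(T)
```

Each greedy step can only decrease the residual norm, since the extracted term is the best fit along its rank-1 direction.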