Image Alignment

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 31,479 Experts worldwide ranked by ideXlab platform

Zhouchen Lin - One of the best experts on this subject based on the ideXlab platform.

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    arXiv: Learning, 2018
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.
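The key equivalence the abstract states — that the double nuclear norm penalty is in essence the Schatten-1/2 quasi-norm — can be checked numerically at the balanced factorization built from the SVD. The sketch below is an illustration of that identity only, not the paper's optimization algorithm; the factor choice U = AΣ^{1/2}, V = BΣ^{1/2} is the standard equality-attaining one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-r test matrix X (m x n), standing in for the low-rank component in robust PCA.
m, n, r = 40, 30, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Schatten-1/2 quasi-norm: ||X||_{S_1/2} = (sum_i sigma_i^{1/2})^2.
sigma = np.linalg.svd(X, compute_uv=False)
schatten_half = np.sum(np.sqrt(sigma[sigma > 1e-10])) ** 2

# Double nuclear norm value 1/4 (||U||_* + ||V||_*)^2 at the balanced
# factorization X = U V^T with U = A diag(s)^{1/2}, V = B diag(s)^{1/2},
# where X = A diag(s) B^T is the (thin) SVD of X.
A, s, Bt = np.linalg.svd(X, full_matrices=False)
U = A[:, :r] * np.sqrt(s[:r])
V = Bt[:r, :].T * np.sqrt(s[:r])
nuc = lambda M: np.sum(np.linalg.svd(M, compute_uv=False))  # nuclear norm
double_nuclear = 0.25 * (nuc(U) + nuc(V)) ** 2

print(abs(double_nuclear - schatten_half))  # agrees up to floating-point error
```

The factors U and V also reconstruct X exactly, so the bilinear penalty can replace the quasi-norm while only ever touching the (much smaller) factor matrices.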

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    IEEE transactions on pattern analysis and machine intelligence, 2017
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to $\ell_p$-norm minimization for two specific values of $p$, namely $p=1/2$ and $p=2/3$, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-$1/2$ and $2/3$ quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.

Fanhua Shang - One of the best experts on this subject based on the ideXlab platform.

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    arXiv: Learning, 2018
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    IEEE transactions on pattern analysis and machine intelligence, 2017
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to $\ell_p$-norm minimization for two specific values of $p$, namely $p=1/2$ and $p=2/3$, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-$1/2$ and $2/3$ quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.

Sebastiano Battiato - One of the best experts on this subject based on the ideXlab platform.

  • a forensic signature based on spatial distributed bag of features for Image Alignment and tampering detection
    Proceedings of the 3rd international ACM workshop on Multimedia in forensics and intelligence, 2011
    Co-Authors: Sebastiano Battiato, Giovanni Maria Farinella, Enrico Messina, Giovanni Puglisi
    Abstract:

    The distribution of digital Images through both classic and newer technologies available on the Internet has spurred growing interest in systems able to protect visual content against malicious manipulations performed during transmission. One of the main problems addressed in this context is the authentication of the Image received in a communication. This task is usually performed by localizing the regions of the Image that have been tampered with. To this end, the received Image must first be aligned with the one at the sender by exploiting the information provided by a specific component of the forensic hash associated with the Image. In this paper we propose a robust Alignment method which makes use of an Image hash component based on the Bag of Features paradigm. In order to deal with highly textured and contrasted tampering patterns, the spatial distribution of the Image features has been encoded in the Alignment signature. A block-wise tampering detection based on a histograms-of-oriented-gradients representation is also proposed. Specifically, a non-uniform quantization of the histogram-of-oriented-gradients space is used to build the signature of each Image block for tampering detection. Experiments show that the proposed approach outperforms state-of-the-art methods by a good margin.
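The block-wise signature idea above can be sketched with a simplified NumPy implementation. Note the assumptions: the paper uses a non-uniform quantization of the HOG space and a Bag-of-Features alignment component, whereas the sketch below uses uniform orientation bins and an illustrative L1-distance threshold; function names and parameters are hypothetical:

```python
import numpy as np

def block_hog_signature(img, block=16, bins=8):
    """Per-block histograms of oriented gradients: a simplified stand-in for
    the block-wise tampering signature (uniform bins, not the paper's
    non-uniform quantization)."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    h, w = img.shape
    sig = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            a = ang[by:by + block, bx:bx + block].ravel()
            m = mag[by:by + block, bx:bx + block].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            sig.append(hist / (hist.sum() + 1e-9))   # L1-normalize each block
    return np.array(sig)

def tampered_blocks(sig_ref, sig_recv, thresh=0.5):
    """Flag blocks whose orientation histograms differ beyond a threshold."""
    d = np.abs(sig_ref - sig_recv).sum(axis=1)       # L1 distance per block
    return np.nonzero(d > thresh)[0]
```

Overwriting one block of a test image with a synthetic ramp (whose gradient energy collapses into a single orientation bin) makes that block, and only its neighborhood, stand out in the per-block distances.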

  • a robust Image Alignment algorithm for video stabilization purposes
    IEEE Transactions on Circuits and Systems for Video Technology, 2011
    Co-Authors: Giovanni Puglisi, Sebastiano Battiato
    Abstract:

    Today, thanks to the widespread use of mobile devices (personal digital assistants, mobile phones, etc.), many people with little or no knowledge of video recording take videos. However, the unwanted movements of their hands typically blur the recorded sequences and introduce disturbing jerkiness. Many video stabilization techniques with differing performance have hence been developed, but only fast strategies can be implemented on embedded devices. A fundamental issue is the overall robustness with respect to different scene contents (indoor, outdoor, etc.) and conditions (illumination changes, moving objects, etc.). In this paper, we propose a fast and robust Image Alignment algorithm for video stabilization purposes. Our contribution is twofold: a fast and accurate block-based local motion estimator, together with a robust Alignment algorithm based on voting. Experimental results confirm the effectiveness of both the local and global motion estimators.
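The combination of a block-based local estimator with a voting stage can be sketched in a few lines: each block finds its best translational match by sum-of-absolute-differences search, then all blocks vote on the global shift, so blocks landing on moving objects are outvoted. This is a minimal translational sketch under assumed parameters, not the paper's estimator:

```python
import numpy as np

def block_motion_vote(prev, curr, block=16, search=4):
    """Estimate a global translational shift between two frames by
    block matching (SAD cost) followed by majority voting over blocks."""
    h, w = prev.shape
    votes = {}
    for by in range(search, h - block - search, block):
        for bx in range(search, w - block - search, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[by + dy:by + dy + block,
                                bx + dx:bx + dx + block]
                    err = np.abs(ref - cand).sum()   # SAD matching cost
                    if err < best_err:
                        best, best_err = (dy, dx), err
            votes[best] = votes.get(best, 0) + 1     # each block votes once
    return max(votes, key=votes.get)                 # majority vote wins
```

Shifting a test frame by a known amount and running the estimator recovers that shift, since every interior block votes for the true displacement.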

Bo Huang - One of the best experts on this subject based on the ideXlab platform.

  • deformed Alignment of super resolution Images for semi flexible structures in 3d
    bioRxiv, 2018
    Co-Authors: Xiaoyu Shi, Yina Wang, Galo Garcia, Jeremy F Reiter, Bo Huang
    Abstract:

    Due to low labeling efficiency and structural heterogeneity in fluorescence-based single-molecule localization microscopy (SMLM), Image Alignment and quantitative analysis are often required to draw accurate conclusions on the spatial relationships between proteins. Cryo-electron microscopy (cryo-EM) Image Alignment procedures have been applied to average structures taken with super-resolution microscopy. However, unlike in cryo-EM, the much larger cellular structures analyzed by super-resolution microscopy are often heterogeneous, resulting in misAlignment; moreover, the light-microscopy Image library is much smaller, which makes classification impractical. To overcome these two challenges, we developed a method to deform semi-flexible ring-shaped structures and then align the 3D structures without classification. These algorithms can register semi-flexible structures with an accuracy of several nanometers, in a short computation time and with greatly reduced memory requirements. We demonstrated our methods by aligning experimental Stochastic Optical Reconstruction Microscopy (STORM) Images of ciliary distal appendages and simulated structures. The Alignment and averaging reveal the symmetries, dimensions, and 3D locations of protein complexes for heterogeneous, tilted, and under-labeled structures.
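The deformation step for semi-flexible structures is the paper's contribution and is not reproduced here. As background for the rigid part of registering localization point clouds, a minimal sketch of the standard least-squares rigid alignment (the Kabsch algorithm) is shown below; treating it as the whole registration is an assumed simplification, not the authors' method:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q,
    i.e. minimizing ||R p_i + t - q_i||^2 (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given a point cloud and a rotated, translated copy of it, the recovered transform maps the first set exactly onto the second.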

  • correlation analysis framework for localization based superresolution microscopy
    Proceedings of the National Academy of Sciences of the United States of America, 2018
    Co-Authors: Joerg Schnitzbauer, Yina Wang, Matthew H Bakalar, Baohui Chen, Tulip Nuwal, Shijie Zhao, Bo Huang
    Abstract:

    Superresolution Images reconstructed from single-molecule localizations can reveal cellular structures close to the macromolecular scale and are now being used routinely in many biomedical research applications. However, because of their coordinate-based representation, a widely applicable and unified analysis platform that can extract a quantitative description and biophysical parameters from these Images is yet to be established. Here, we propose a conceptual framework for correlation analysis of coordinate-based superresolution Images using distance histograms. We demonstrate the application of this concept in multiple scenarios, including Image Alignment, tracking of diffusing molecules, and quantification of colocalization, showing its superior performance over existing approaches.
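The basic quantity behind such a framework — a histogram of all pairwise distances between two coordinate sets — is straightforward to compute. A minimal sketch (the function name and binning parameters are illustrative, not from the paper):

```python
import numpy as np

def distance_histogram(A, B, bins, rmax):
    """Histogram of all pairwise Euclidean distances between coordinate
    sets A (n x d) and B (m x d), binned uniformly on [0, rmax]."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).ravel()
    hist, edges = np.histogram(d, bins=bins, range=(0, rmax))
    return hist, edges
```

Two tight clusters separated by a known distance produce a single peak at that distance, which is the kind of feature the correlation analysis reads off for colocalization or alignment.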

  • a correlation analysis framework for localization based super resolution microscopy
    bioRxiv, 2017
    Co-Authors: Joerg Schnitzbauer, Yina Wang, Matthew H Bakalar, Baohui Chen, Tulip Nuwal, Shijie Zhao, Bo Huang
    Abstract:

    Super-resolution Images reconstructed from single-molecule localizations can reveal cellular structures close to the macromolecular scale and are now being used routinely in many biomedical research applications. However, because of their coordinate-based representation, a widely applicable and unified analysis platform that can extract a quantitative description and biophysical parameters from these Images is yet to be established. Here, we propose a conceptual framework for correlation analysis of coordinate-based super-resolution Images using distance histograms. We demonstrate the application of this concept in multiple scenarios, including Image Alignment, tracking of diffusing molecules, and quantification of colocalization.

Yuanyuan Liu - One of the best experts on this subject based on the ideXlab platform.

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    arXiv: Learning, 2018
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  • Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications
    IEEE transactions on pattern analysis and machine intelligence, 2017
    Co-Authors: Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-quan Luo, Zhouchen Lin
    Abstract:

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and Image Alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to $\ell_p$-norm minimization for two specific values of $p$, namely $p=1/2$ and $p=2/3$, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-$1/2$ and $2/3$ quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, Image Alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.