One-to-One Mapping

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 1,349,778 Experts worldwide ranked by the ideXlab platform

Xuqi Liu - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.
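    The swap augmentation described in the abstract can be sketched in a few lines. The toy version below (names are illustrative, not from the paper's code) simply doubles the training set with reversed pairs, so a single generator is asked to learn both mapping directions:

    ```python
    def augment_by_swapping(pairs):
        """Double the training set by adding each (input, output) pair reversed,
        so one generator G sees both mapping directions during training."""
        swapped = [(y, x) for (x, y) in pairs]
        return pairs + swapped

    # Toy stand-ins for two image domains: X = even numbers, Y = their successors.
    train_pairs = [(0, 1), (2, 3), (4, 5)]
    augmented = augment_by_swapping(train_pairs)
    # G now trains on both (0, 1) and (1, 0), pushing it toward G(G(x)) = x.
    ```

    In the actual method each direction also keeps its own cycle-consistency loss; only the sample-swapping idea is shown here.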

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

Zengming Shen - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • Towards Learning a Self-Inverse Network for Bidirectional Image-to-Image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, Yifan Chen, Bogdan Georgescu, Shaohua Kevin Zhou, Thomas S. Huang
    Abstract:

    A one-to-one mapping is necessary for many bidirectional image-to-image translation applications, such as MRI image synthesis, since MRI images are unique to the patient. State-of-the-art approaches for image synthesis from domain X to domain Y learn a convolutional neural network that maps between the domains; a separate network is typically trained to map in the opposite direction, from Y to X. In this paper, we explore the possibility of using only one network for bidirectional image synthesis. In other words, such a network learns a self-inverse function. A self-inverse network offers several distinct advantages: only one network instead of two, better generalization, and a more restricted parameter space. Most importantly, a self-inverse function guarantees a one-to-one mapping, a property that cannot be guaranteed by earlier approaches that are not self-inverse. Experiments on three datasets show that, compared with baseline approaches that use two separate models for synthesis along the two directions, our self-inverse network achieves better results in terms of standard metrics. Finally, a sensitivity analysis confirms the feasibility of learning a self-inverse function for bidirectional image translation.
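    The key property claimed above, that a self-inverse function is automatically one-to-one, follows directly: if f(f(x)) = x and f(a) = f(b), then a = f(f(a)) = f(f(b)) = b. A minimal check of the involution property (a hypothetical helper, not the paper's code):

    ```python
    def is_self_inverse(f, samples):
        """Check the involution property f(f(x)) == x on a set of sample inputs."""
        return all(f(f(x)) == x for x in samples)

    negation = lambda x: -x  # f(f(x)) = x, so self-inverse (hence one-to-one)
    shift = lambda x: x + 1  # f(f(x)) = x + 2, so not self-inverse
    ```

    In the paper the role of f is played by a single generator network trained on both mapping directions.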

Bogdan Georgescu - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • Towards Learning a Self-Inverse Network for Bidirectional Image-to-Image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, Yifan Chen, Bogdan Georgescu, Shaohua Kevin Zhou, Thomas S. Huang
    Abstract:

    A one-to-one mapping is necessary for many bidirectional image-to-image translation applications, such as MRI image synthesis, since MRI images are unique to the patient. State-of-the-art approaches for image synthesis from domain X to domain Y learn a convolutional neural network that maps between the domains; a separate network is typically trained to map in the opposite direction, from Y to X. In this paper, we explore the possibility of using only one network for bidirectional image synthesis. In other words, such a network learns a self-inverse function. A self-inverse network offers several distinct advantages: only one network instead of two, better generalization, and a more restricted parameter space. Most importantly, a self-inverse function guarantees a one-to-one mapping, a property that cannot be guaranteed by earlier approaches that are not self-inverse. Experiments on three datasets show that, compared with baseline approaches that use two separate models for synthesis along the two directions, our self-inverse network achieves better results in terms of standard metrics. Finally, a sensitivity analysis confirms the feasibility of learning a self-inverse function for bidirectional image translation.

Yifan Chen - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • Towards Learning a Self-Inverse Network for Bidirectional Image-to-Image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, Yifan Chen, Bogdan Georgescu, Shaohua Kevin Zhou, Thomas S. Huang
    Abstract:

    A one-to-one mapping is necessary for many bidirectional image-to-image translation applications, such as MRI image synthesis, since MRI images are unique to the patient. State-of-the-art approaches for image synthesis from domain X to domain Y learn a convolutional neural network that maps between the domains; a separate network is typically trained to map in the opposite direction, from Y to X. In this paper, we explore the possibility of using only one network for bidirectional image synthesis. In other words, such a network learns a self-inverse function. A self-inverse network offers several distinct advantages: only one network instead of two, better generalization, and a more restricted parameter space. Most importantly, a self-inverse function guarantees a one-to-one mapping, a property that cannot be guaranteed by earlier approaches that are not self-inverse. Experiments on three datasets show that, compared with baseline approaches that use two separate models for synthesis along the two directions, our self-inverse network achieves better results in terms of standard metrics. Finally, a sensitivity analysis confirms the feasibility of learning a self-inverse function for bidirectional image translation.

Thomas S. Huang - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Image-to-image translation has recently attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), through the introduction of cyclic constraints, to extensions to multiple domains. However, existing approaches offer no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function simply by augmenting the training samples, swapping inputs and outputs during training, with a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvements over CycleGAN, both qualitatively and quantitatively. In particular, our method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

  • Towards Learning a Self-Inverse Network for Bidirectional Image-to-Image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, Yifan Chen, Bogdan Georgescu, Shaohua Kevin Zhou, Thomas S. Huang
    Abstract:

    A one-to-one mapping is necessary for many bidirectional image-to-image translation applications, such as MRI image synthesis, since MRI images are unique to the patient. State-of-the-art approaches for image synthesis from domain X to domain Y learn a convolutional neural network that maps between the domains; a separate network is typically trained to map in the opposite direction, from Y to X. In this paper, we explore the possibility of using only one network for bidirectional image synthesis. In other words, such a network learns a self-inverse function. A self-inverse network offers several distinct advantages: only one network instead of two, better generalization, and a more restricted parameter space. Most importantly, a self-inverse function guarantees a one-to-one mapping, a property that cannot be guaranteed by earlier approaches that are not self-inverse. Experiments on three datasets show that, compared with baseline approaches that use two separate models for synthesis along the two directions, our self-inverse network achieves better results in terms of standard metrics. Finally, a sensitivity analysis confirms the feasibility of learning a self-inverse function for bidirectional image translation.