One-to-One

The experts below are selected from a list of 138,027,132 experts worldwide, ranked by the ideXlab platform.

Xuqi Liu - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Recently, image-to-image translation has attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), to the introduction of the cyclic constraint, to extensions to multiple domains. However, in existing approaches there is no guarantee that the mapping between two image domains is unique or One-to-One. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function by augmenting the training samples (swapping inputs and outputs during training) and by using a separate cycle-consistency loss for each mapping direction. The outcome of such learning is a provably One-to-One mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvement over the CycleGAN method, both qualitatively and quantitatively. In particular, the proposed method achieves the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo image translation. (A schematic sketch of the self-inverse training step follows these listings.)

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Identical to the abstract of the WACV 2020 entry above.
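
A minimal PyTorch-style sketch of the self-inverse training step described in the abstract above, assuming a single generator G shared by both translation directions, hypothetical discriminators D_A and D_B, a least-squares adversarial loss, and an illustrative cycle weight lambda_cyc; this is a sketch of the idea, not the authors' released code:

    import torch
    import torch.nn.functional as F

    def self_inverse_step(G, D_A, D_B, x_a, x_b, lambda_cyc=10.0):
        # Direction A -> B: translate x_a, then apply the SAME network G again.
        # Because G is trained to be self-inverse, G(G(x_a)) should reconstruct x_a.
        fake_b = G(x_a)
        rec_a = G(fake_b)
        loss_gan_ab = F.mse_loss(D_B(fake_b), torch.ones_like(D_B(fake_b)))
        loss_cyc_ab = F.l1_loss(rec_a, x_a)

        # Direction B -> A: the "swapped" training sample, handled by the same G,
        # with its own (separate) cycle-consistency term.
        fake_a = G(x_b)
        rec_b = G(fake_a)
        loss_gan_ba = F.mse_loss(D_A(fake_a), torch.ones_like(D_A(fake_a)))
        loss_cyc_ba = F.l1_loss(rec_b, x_b)

        # One adversarial term per direction plus a separate cycle-consistency loss
        # for each mapping direction, as described in the abstract.
        return loss_gan_ab + loss_gan_ba + lambda_cyc * (loss_cyc_ab + loss_cyc_ba)

Because one function serves both directions, any mapping satisfying G(G(x)) = x is an involution and hence a bijection, which is the One-to-One property the papers aim for.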

Zengming Shen - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Identical to the abstract of the same WACV 2020 paper listed under Xuqi Liu above.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Identical to the abstract of the same arXiv 2019 paper listed under Xuqi Liu above.

Kan Ren - One of the best experts on this subject based on the ideXlab platform.

  • AAAI - Guiding the One-to-One Mapping in CycleGAN via Optimal Transport
    Proceedings of the AAAI Conference on Artificial Intelligence, 2019
    Co-Authors: Zhiming Zhou, Yuxuan Song, Kan Ren
    Abstract:

    CycleGAN is capable of learning a One-to-One mapping between two data distributions without paired examples, achieving the task of unsupervised data translation. However, there is no theoretical guarantee on the properties of the One-to-One mapping learned by CycleGAN. In this paper, we find experimentally that, under some circumstances, the One-to-One mapping learned by CycleGAN is just a random one within the large feasible solution space. Based on this observation, we explore adding extra constraints such that the One-to-One mapping is controllable and satisfies more properties relevant to specific tasks. We propose to solve an optimal transport mapping constrained by a task-specific cost function that reflects the desired properties, and to use the barycenters of the optimal transport mapping as references for CycleGAN. Our experiments indicate that the proposed algorithm is capable of learning a One-to-One mapping with the desired properties. (A schematic sketch of such an optimal-transport reference term follows these listings.)

  • Guiding the One-to-One Mapping in CycleGAN via Optimal Transport
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Zhiming Zhou, Yuxuan Song, Kan Ren
    Abstract:

    Identical to the abstract of the AAAI 2019 entry above.
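
A hedged sketch of the guiding idea above: compute a discrete optimal-transport (assignment) plan between a batch of generated samples and a batch of real targets under a task-specific cost, then pull each generated sample toward its matched reference. The names task_cost and ot_reference_loss, the pixel-wise L1 cost, and the use of a hard assignment (under which the barycentric projection of the transport plan reduces to the matched sample) are illustrative assumptions, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F
    from scipy.optimize import linear_sum_assignment

    def task_cost(fake_b, real_b):
        # Task-specific cost matrix between every generated and every real sample;
        # here simply the mean absolute pixel difference (an illustrative choice).
        diff = fake_b.unsqueeze(1) - real_b.unsqueeze(0)   # shape (N, M, C, H, W)
        return diff.abs().mean(dim=(2, 3, 4))              # shape (N, M)

    def ot_reference_loss(G, x_a, real_b):
        fake_b = G(x_a)
        with torch.no_grad():
            cost = task_cost(fake_b, real_b).cpu().numpy()
            rows, cols = linear_sum_assignment(cost)       # discrete OT plan: a One-to-One matching
        rows = torch.as_tensor(rows, device=fake_b.device)
        cols = torch.as_tensor(cols, device=fake_b.device)
        # Pull each generated sample toward its OT-matched real reference; this term
        # would be added on top of the usual CycleGAN adversarial and cycle losses.
        return F.l1_loss(fake_b[rows], real_b[cols])

With a soft transport plan (for example, entropic OT), the reference would instead be the barycentric projection, a plan-weighted average of the real samples, which is closer to the barycenter idea mentioned in the abstract.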

Bogdan Georgescu - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Identical to the abstract of the same WACV 2020 paper listed under Xuqi Liu above.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Identical to the abstract of the same arXiv 2019 paper listed under Xuqi Liu above.

Yifan Chen - One of the best experts on this subject based on the ideXlab platform.

  • WACV - One-to-One Mapping for Unpaired Image-to-image Translation
    2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Thomas S. Huang, Xuqi Liu
    Abstract:

    Identical to the abstract of the same WACV 2020 paper listed under Xuqi Liu above.

  • One-to-One Mapping for Unpaired Image-to-image Translation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zengming Shen, S. Kevin Zhou, Yifan Chen, Bogdan Georgescu, Xuqi Liu, Thomas Huang
    Abstract:

    Identical to the abstract of the same arXiv 2019 paper listed under Xuqi Liu above.