Translation Problem

The experts below are selected from a list of 72,807 experts worldwide, ranked by the ideXlab platform.

Maria Drangova - One of the best experts on this subject based on the ideXlab platform.

  • Conditional generative adversarial network for 3D rigid-body motion correction in MRI
    Magnetic Resonance in Medicine, 2019
    Co-Authors: Patricia M Johnson, Maria Drangova
    Abstract:

    PURPOSE: Subject motion in MRI remains an unsolved problem; motion during image acquisition may cause blurring and artifacts that severely degrade image quality. In this work, we approach motion correction as an image-to-image translation problem, which refers to the approach of training a deep neural network to predict an image in one domain from an image in another domain. Specifically, the purpose of this work was to develop and train a conditional generative adversarial network to predict artifact-free brain images from motion-corrupted data. METHODS: An open-source MRI data set comprising T2*-weighted FLASH magnitude and phase brain images for 53 patients was used to generate complex image data for motion simulation. To simulate rigid motion, rotations and translations were applied to the image data based on randomly generated motion profiles. A conditional generative adversarial network, comprising generator and discriminator networks, was trained using the motion-corrupted and corresponding ground-truth (original) images as training pairs. RESULTS: The images predicted by the conditional generative adversarial network have improved image quality compared with the motion-corrupted images. The mean absolute error between the motion-corrupted and ground-truth images of the test set was 16.4% of the image mean value, whereas the mean absolute error between the network-predicted and ground-truth images was 10.8%. The network output also demonstrated improved peak SNR and structural similarity index for all test-set images. CONCLUSION: The images predicted by the conditional generative adversarial network have quantitatively and qualitatively improved image quality compared with the motion-corrupted images.
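
    The motion-simulation step described above lends itself to a short sketch. The following Python snippet is a minimal illustration, not the authors' code: the function name, the number of acquisition segments, and the motion ranges are all assumptions. It corrupts a 2D image by giving each segment of phase-encode lines its own randomly drawn rotation and translation before reassembling k-space.

    import numpy as np
    from scipy.ndimage import rotate, shift

    def simulate_rigid_motion(image, n_segments=8, max_rot_deg=3.0,
                              max_shift_px=2.0, seed=0):
        """Corrupt a 2D image with segment-wise rigid motion (hypothetical)."""
        rng = np.random.default_rng(seed)
        ny, nx = image.shape
        k_corrupt = np.zeros((ny, nx), dtype=complex)
        bounds = np.linspace(0, ny, n_segments + 1).astype(int)
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            # One randomly drawn motion state per segment of the acquisition.
            angle = rng.uniform(-max_rot_deg, max_rot_deg)         # degrees
            dy, dx = rng.uniform(-max_shift_px, max_shift_px, 2)   # pixels
            moved = shift(rotate(image, angle, reshape=False, order=1),
                          (dy, dx), order=1)
            # "Acquire" this segment's phase-encode lines from the moved image.
            k_moved = np.fft.fftshift(np.fft.fft2(moved))
            k_corrupt[lo:hi, :] = k_moved[lo:hi, :]
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))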

Patricia M Johnson - One of the best experts on this subject based on the ideXlab platform.

  • Conditional generative adversarial network for 3D rigid-body motion correction in MRI
    Magnetic Resonance in Medicine, 2019
    Co-Authors: Patricia M Johnson, Maria Drangova
    Abstract:

    PURPOSE: Subject motion in MRI remains an unsolved problem; motion during image acquisition may cause blurring and artifacts that severely degrade image quality. In this work, we approach motion correction as an image-to-image translation problem, which refers to the approach of training a deep neural network to predict an image in one domain from an image in another domain. Specifically, the purpose of this work was to develop and train a conditional generative adversarial network to predict artifact-free brain images from motion-corrupted data. METHODS: An open-source MRI data set comprising T2*-weighted FLASH magnitude and phase brain images for 53 patients was used to generate complex image data for motion simulation. To simulate rigid motion, rotations and translations were applied to the image data based on randomly generated motion profiles. A conditional generative adversarial network, comprising generator and discriminator networks, was trained using the motion-corrupted and corresponding ground-truth (original) images as training pairs. RESULTS: The images predicted by the conditional generative adversarial network have improved image quality compared with the motion-corrupted images. The mean absolute error between the motion-corrupted and ground-truth images of the test set was 16.4% of the image mean value, whereas the mean absolute error between the network-predicted and ground-truth images was 10.8%. The network output also demonstrated improved peak SNR and structural similarity index for all test-set images. CONCLUSION: The images predicted by the conditional generative adversarial network have quantitatively and qualitatively improved image quality compared with the motion-corrupted images.
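
    The reported figures of merit can be computed for any image pair with a few lines; the sketch below (illustrative Python assuming magnitude images as NumPy arrays, not the authors' evaluation code) returns the mean absolute error as a percentage of the ground-truth mean, along with peak SNR and the structural similarity index.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_correction(ground_truth, corrected):
        """Return (MAE as % of image mean, PSNR in dB, SSIM)."""
        mae_pct = (100.0 * np.mean(np.abs(corrected - ground_truth))
                   / np.mean(ground_truth))
        data_range = float(ground_truth.max() - ground_truth.min())
        psnr = peak_signal_noise_ratio(ground_truth, corrected,
                                       data_range=data_range)
        ssim = structural_similarity(ground_truth, corrected,
                                     data_range=data_range)
        return mae_pct, psnr, ssim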

Yu Han - One of the best experts on this subject based on the ideXlab platform.

  • From Design Draft to Real Attire: Unaligned Fashion Image Translation
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Yu Han, Shuai Yang, Wenjing Wang, Jiaying Liu
    Abstract:

    Fashion manipulation has attracted growing interest due to its great application value, which has inspired much research on fashion images. However, little attention has been paid to fashion design drafts. In this paper, we study a new unaligned translation problem between design drafts and real fashion items, whose main challenge lies in the huge misalignment between the two modalities. We first collect paired design drafts and real fashion item images without pixel-wise alignment. To solve the misalignment problem, our main idea is to train a sampling network to adaptively adjust the input to an intermediate state with structure alignment to the output. Moreover, built upon the sampling network, we present a design-draft-to-real-fashion-item translation network (D2RNet), in which two separate translation streams, focusing on texture and shape respectively, are carefully combined to obtain the benefits of both. D2RNet is able to generate realistic garments with both texture and shape consistency to their design drafts. We show that this idea can be effectively applied to the reverse translation problem and present R2DNet accordingly. Extensive experiments on unaligned fashion design translation demonstrate the superiority of our method over state-of-the-art methods. Our project website is available at: https://victoriahy.github.io/MM2020/ .
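
    The sampling-network idea, adjusting the draft to an intermediate structure-aligned state, can be sketched in a few lines of PyTorch. The snippet below is an illustration, not the published D2RNet architecture: the class name, layer sizes, and offset scaling are all invented for clarity. A small CNN predicts per-pixel sampling offsets, and grid_sample warps the design draft accordingly.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SamplingNet(nn.Module):
        """Hypothetical sampling network: warp a draft toward the output's structure."""
        def __init__(self, in_ch=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1),  # (dx, dy) offsets per pixel
            )

        def forward(self, draft):
            n, _, h, w = draft.shape
            # Identity sampling grid in [-1, 1] normalized coordinates.
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing="ij")
            base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
            offset = self.net(draft).permute(0, 2, 3, 1)  # N x H x W x 2
            # Bound the learned offsets to +/- 0.1 of the normalized frame.
            grid = base.to(draft.device) + 0.1 * torch.tanh(offset)
            return F.grid_sample(draft, grid, align_corners=True)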

  • From Design Draft to Real Attire: Unaligned Fashion Image Translation
    Association for Computing Machinery (ACM), 2020
    Co-Authors: Yu Han, Shuai Yang, Wenjing Wang, Jiaying Liu
    Abstract:

    Fashion manipulation has attracted growing interest due to its great application value, which has inspired much research on fashion images. However, little attention has been paid to fashion design drafts. In this paper, we study a new unaligned translation problem between design drafts and real fashion items, whose main challenge lies in the huge misalignment between the two modalities. We first collect paired design drafts and real fashion item images without pixel-wise alignment. To solve the misalignment problem, our main idea is to train a sampling network to adaptively adjust the input to an intermediate state with structure alignment to the output. Moreover, built upon the sampling network, we present a design-draft-to-real-fashion-item translation network (D2RNet), in which two separate translation streams, focusing on texture and shape respectively, are carefully combined to obtain the benefits of both. D2RNet is able to generate realistic garments with both texture and shape consistency to their design drafts. We show that this idea can be effectively applied to the reverse translation problem and present R2DNet accordingly. Extensive experiments on unaligned fashion design translation demonstrate the superiority of our method over state-of-the-art methods. Our project website is available at: https://victoriahy.github.io/MM2020/ (accepted by ACM MM 2020).
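
    How the texture and shape streams might be combined can also be sketched. The following PyTorch fragment is only a guess at the flavor of the combination (a learned per-pixel blend of two stream outputs), not the published network; every layer size and name here is hypothetical.

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

    class TwoStreamTranslator(nn.Module):
        """Hypothetical fusion of a texture stream and a shape stream."""
        def __init__(self):
            super().__init__()
            self.texture = nn.Sequential(conv_block(3, 32),
                                         nn.Conv2d(32, 3, 3, padding=1))
            self.shape = nn.Sequential(conv_block(3, 32),
                                       nn.Conv2d(32, 3, 3, padding=1))
            self.mask = nn.Sequential(conv_block(6, 16),
                                      nn.Conv2d(16, 1, 3, padding=1),
                                      nn.Sigmoid())

        def forward(self, aligned_draft):
            tex = self.texture(aligned_draft)
            shp = self.shape(aligned_draft)
            # Per-pixel blend weight chooses between the two streams.
            m = self.mask(torch.cat((tex, shp), dim=1))
            return m * tex + (1 - m) * shp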

Sudheer Kolachina - One of the best experts on this subject based on the ideXlab platform.

  • Modeling Letter-to-Phoneme Conversion as a Phrase-Based Statistical Machine Translation Problem with Minimum Error Rate Training
    North American Chapter of the Association for Computational Linguistics, 2009
    Co-Authors: Taraka Rama, Anil Kumar Singh, Sudheer Kolachina
    Abstract:

    Letter-to-phoneme (L2P) conversion plays an important role in several applications. It can be a difficult task because the mapping from letters to phonemes can be many-to-many. We present a language-independent letter-to-phoneme conversion approach based on popular phrase-based statistical machine translation techniques. The results of our experiments clearly demonstrate that such techniques can be used effectively for letter-to-phoneme conversion. Our results show an overall improvement of 5.8% over the baseline and are comparable to the state of the art. We also propose a measure to estimate the difficulty level of the L2P task for a language.
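
    The reduction of L2P to phrase-based SMT amounts to treating the letters of a word as source-language tokens and its phonemes as target-language tokens, so that a standard phrase-based toolkit can learn letter-to-phoneme "phrase" mappings. The short Python sketch below shows that data-preparation step; the file names and lexicon format are assumptions, and this is an illustration rather than the authors' pipeline.

    def write_parallel_corpus(lexicon, src_path="train.letters",
                              tgt_path="train.phonemes"):
        """Write a pronunciation lexicon as a space-tokenized parallel corpus.

        lexicon: iterable of (word, phoneme_list) pairs, e.g.
        [("phone", ["F", "OW", "N"]), ("cat", ["K", "AE", "T"])].
        """
        with open(src_path, "w") as src, open(tgt_path, "w") as tgt:
            for word, phonemes in lexicon:
                src.write(" ".join(word.lower()) + "\n")   # "p h o n e"
                tgt.write(" ".join(phonemes) + "\n")       # "F OW N"

    write_parallel_corpus([("phone", ["F", "OW", "N"]),
                           ("cat", ["K", "AE", "T"])])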

Taraka Rama - One of the best experts on this subject based on the ideXlab platform.