Image Synthesis

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 327 experts worldwide, ranked by the ideXlab platform

Vladlen Koltun - One of the best experts on this subject based on the ideXlab platform.

  • Semi-Parametric Image Synthesis
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
    Co-Authors: Xiaojuan Qi, Qifeng Chen, Vladlen Koltun
    Abstract:

    We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network, which performs the synthesis by drawing on this material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques.
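
The abstract does not specify how references are retrieved from the memory bank; a minimal sketch of one plausible nonparametric retrieval step, matching bank entries to a query layout by mask overlap (IoU), could look as follows. The function names and the IoU criterion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def retrieve_segments(query_mask, memory_bank, k=3):
    """Return the k bank entries whose stored masks best overlap the query.

    memory_bank: list of (mask, patch) pairs harvested from training images;
    the retrieved patches would then be handed to the synthesis network.
    """
    scored = sorted(memory_bank, key=lambda e: iou(query_mask, e[0]), reverse=True)
    return scored[:k]
```

In the paper's pipeline the retrieved patches serve only as raw material; the deep network is still free to deviate from them.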

  • Semi-Parametric Image Synthesis
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Qifeng Chen, Jiaya Jia, Vladlen Koltun
    Abstract:

    We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network, which performs the synthesis by drawing on this material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques. The results are shown in the supplementary video at this https URL.

James Hays - One of the best experts on this subject based on the ideXlab platform.

  • Scribbler: Controlling Deep Image Synthesis with Sketch and Color
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
    Co-Authors: Patsorn Sangkloy, Chen Fang, James Hays
    Abstract:

    Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate preferred colors for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.
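
One concrete detail implied by "conditioned on sketched boundaries and sparse color strokes" is that both controls must be packed into the generator's input. A hedged sketch of that packing step is below; the channel layout and the zero-means-unconstrained convention are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def make_conditioning(sketch, color_strokes):
    """Stack a binary sketch (H, W) and sparse RGB strokes (H, W, 3)
    into one (H, W, 4) conditioning tensor for the generator.

    Pixels with no stroke stay zero, so the network can distinguish
    'unconstrained' from 'constrained to black' only if black strokes
    are encoded with a small nonzero value upstream (an assumed convention).
    """
    assert sketch.shape == color_strokes.shape[:2]
    return np.concatenate([sketch[..., None].astype(np.float32),
                           color_strokes.astype(np.float32)], axis=-1)
```

Because the generator is feed-forward, rebuilding this tensor after every scribble and re-running one forward pass is what makes the real-time editing loop possible.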

  • TextureGAN: Controlling Deep Image Synthesis with Texture Patches
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Wenqi Xian, Patsorn Sangkloy, Chen Fang, Varun Agrawal, Amit Raj, James Hays
    Abstract:

    In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss, in addition to adversarial and content losses, to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database; the results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.
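
The abstract names a "local texture loss" without defining it. A common way to compare texture statistics is via Gram matrices of feature patches, so a minimal sketch under that assumption (the paper's actual loss may differ) is:

```python
import numpy as np

def gram(features):
    """Gram matrix of an (H, W, C) feature map, normalized by patch size."""
    c = features.reshape(-1, features.shape[-1])  # (H*W, C)
    return c.T @ c / c.shape[0]

def local_texture_loss(gen_feats, ref_feats, y, x, size):
    """Mean squared distance between Gram matrices of two feature patches
    cropped at a user-chosen location (y, x) — 'local' because only the
    region covered by the texture suggestion is penalized."""
    g = gram(gen_feats[y:y + size, x:x + size])
    r = gram(ref_feats[y:y + size, x:x + size])
    return float(np.mean((g - r) ** 2))
```

Restricting the loss to the cropped patch is what lets a single texture suggestion influence one object without forcing the same texture everywhere in the output.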

  • Scribbler: Controlling Deep Image Synthesis with Sketch and Color
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Patsorn Sangkloy, Chen Fang, James Hays
    Abstract:

    Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred colors for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.

László Neumann - One of the best experts on this subject based on the ideXlab platform.

  • Systematic Sampling in Image-Synthesis
    Lecture Notes in Computer Science, 2006
    Co-Authors: Mateu Sbert, Jaume Rigau, Miquel Feixas, László Neumann
    Abstract:

    In this paper we investigate systematic sampling in the image-synthesis context. Systematic sampling has been widely used in stereology to improve the efficiency of different probes in experimental design. These designs are theoretically based on estimators of 1-dimensional and 2-dimensional integrals. For the particular case of the characteristic function, the variance of these estimators has been shown to be asymptotically O(N^{-3/2}), which improves on the O(N^{-1}) behaviour of independent estimators using uniform sampling. Thus, when no a priori knowledge of the integrand function is available, as in several image synthesis techniques, systematic sampling efficiently reduces the computational cost.
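
The variance reduction the abstract describes is easy to observe numerically. A minimal 1D illustration (a simplification of the paper's stereological setting): estimate the integral of a characteristic function on [0, 1] with independent uniform samples versus systematic samples that share one random offset across equal strata.

```python
import random

def uniform_estimate(f, n, rng):
    """Monte Carlo estimate of the integral of f over [0, 1]
    using n independent uniform samples."""
    return sum(f(rng.random()) for _ in range(n)) / n

def systematic_estimate(f, n, rng):
    """Systematic estimate: one sample per stratum [i/n, (i+1)/n),
    with a single random offset u shared by all strata."""
    u = rng.random()
    return sum(f((i + u) / n) for i in range(n)) / n

def variance(estimator, f, n, trials, seed=0):
    """Empirical variance of an estimator over repeated trials."""
    rng = random.Random(seed)
    vals = [estimator(f, n, rng) for _ in range(trials)]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / trials
```

For a characteristic function such as f(x) = 1 if x < 0.37 else 0, the systematic estimator's variance is orders of magnitude below the independent one at the same sample count, consistent with the asymptotic rates quoted above.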

  • ICCSA (1) - Systematic sampling in Image-Synthesis
    Computational Science and Its Applications - ICCSA 2006, 2006
    Co-Authors: Mateu Sbert, Jaume Rigau, Miquel Feixas, László Neumann
    Abstract:

    In this paper we investigate systematic sampling in the image-synthesis context. Systematic sampling has been widely used in stereology to improve the efficiency of different probes in experimental design. These designs are theoretically based on estimators of 1-dimensional and 2-dimensional integrals. For the particular case of the characteristic function, the variance of these estimators has been shown to be asymptotically O(N^{-3/2}), which improves on the O(N^{-1}) behaviour of independent estimators using uniform sampling. Thus, when no a priori knowledge of the integrand function is available, as in several image synthesis techniques, systematic sampling efficiently reduces the computational cost.

Xiaojuan Qi - One of the best experts on this subject based on the ideXlab platform.

  • Semi-Parametric Image Synthesis
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
    Co-Authors: Xiaojuan Qi, Qifeng Chen, Vladlen Koltun
    Abstract:

    We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network, which performs the synthesis by drawing on this material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques.

Qifeng Chen - One of the best experts on this subject based on the ideXlab platform.

  • Semi-Parametric Image Synthesis
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
    Co-Authors: Xiaojuan Qi, Qifeng Chen, Vladlen Koltun
    Abstract:

    We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network, which performs the synthesis by drawing on this material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques.

  • Semi-Parametric Image Synthesis
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Qifeng Chen, Jiaya Jia, Vladlen Koltun
    Abstract:

    We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network, which performs the synthesis by drawing on this material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques. The results are shown in the supplementary video at this https URL.