Shape from Shading

The Experts below are selected from a list of 6,915 Experts worldwide, ranked by the ideXlab platform.

Ligang Liu - One of the best experts on this subject based on the ideXlab platform.

  • 3d face reconstruction with geometry details from a single image
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Luo Jiang, Juyong Zhang, Bailin Deng, Ligang Liu
    Abstract:

    3D face reconstruction from a single image is a classical and challenging problem with wide applications in many areas. Inspired by recent works in face animation from RGB-D or monocular video inputs, we develop a novel method for reconstructing 3D faces from unconstrained 2D images using a coarse-to-fine optimization strategy. First, a smooth coarse 3D face is generated from an example-based bilinear face model by aligning the projection of 3D face landmarks with 2D landmarks detected from the input image. Afterward, using local corrective deformation fields, the coarse 3D face is refined using photometric consistency constraints, resulting in a medium face Shape. Finally, a Shape-from-Shading method is applied on the medium face to recover fine geometric details. Our method outperforms the state-of-the-art approaches in terms of accuracy and detail recovery, which is demonstrated in extensive experiments using real-world models and publicly available data sets.
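
The coarse stage described above (fitting a parametric face model so that projected 3D landmarks match detected 2D landmarks) can be phrased as a small nonlinear least-squares problem. The sketch below is an illustrative assumption, not the authors' code: the model matrices, landmark count and weak-perspective camera parameters are placeholders.

```python
# Illustrative sketch only (not the authors' implementation): fit a linear 3D
# face model to detected 2D landmarks with a weak-perspective camera, as in the
# coarse alignment stage described above. mean_shape, basis and landmarks_2d
# are placeholders for the face model and the landmark detector output.
import numpy as np
from scipy.optimize import least_squares

n_landmarks, n_coeffs = 68, 50
mean_shape = np.zeros((n_landmarks, 3))                    # placeholder mean 3D landmarks
basis = np.random.randn(n_landmarks, 3, n_coeffs) * 1e-2   # placeholder shape basis
landmarks_2d = np.zeros((n_landmarks, 2))                  # placeholder detected 2D landmarks

def residuals(params):
    s, r, t, alpha = params[0], params[1:4], params[4:6], params[6:]
    # Rotation from an angle-axis vector via Rodrigues' formula
    theta = np.linalg.norm(r) + 1e-12
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    shape_3d = mean_shape + basis @ alpha                  # (n_landmarks, 3)
    projected = s * (shape_3d @ R.T)[:, :2] + t            # weak-perspective projection
    return (projected - landmarks_2d).ravel()              # 2D alignment error

x0 = np.zeros(6 + n_coeffs)
x0[0] = 1.0                                                # initial scale
fit = least_squares(residuals, x0)                         # coarse pose + shape coefficients
```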

  • 3d face reconstruction with geometry details from a single image
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Luo Jiang, Juyong Zhang, Bailin Deng, Ligang Liu
    Abstract:

    3D face reconstruction from a single image is a classical and challenging problem, with wide applications in many areas. Inspired by recent works in face animation from RGB-D or monocular video inputs, we develop a novel method for reconstructing 3D faces from unconstrained 2D images, using a coarse-to-fine optimization strategy. First, a smooth coarse 3D face is generated from an example-based bilinear face model, by aligning the projection of 3D face landmarks with 2D landmarks detected from the input image. Afterwards, using local corrective deformation fields, the coarse 3D face is refined using photometric consistency constraints, resulting in a medium face Shape. Finally, a Shape-from-Shading method is applied on the medium face to recover fine geometric details. Our method outperforms state-of-the-art approaches in terms of accuracy and detail recovery, which is demonstrated in extensive experiments using real world models and publicly available datasets.

Rama Chellappa - One of the best experts on this subject based on the ideXlab platform.

  • illumination insensitive face recognition using symmetric Shape from Shading
    Computer Vision and Pattern Recognition, 2000
    Co-Authors: Wenyi Zhao, Rama Chellappa
    Abstract:

    Sensitivity to variations in illumination is a fundamental and challenging problem in face recognition. In this paper, we describe a new method based on symmetric Shape-from-Shading (SSFS) to develop a face recognition system that is robust to changes in illumination. The basic idea of this approach is to use the SSFS algorithm as a tool to obtain an illumination-normalized prototype image. It has been shown that the SSFS algorithm has a unique point-wise solution, but it is still difficult to recover accurate Shape information from a single real face image with complex Shape and varying albedo. Instead, we exploit the fact that all faces share a similar Shape, which makes the direct computation of the prototype image from a given face image feasible. Finally, to demonstrate the efficacy of our method, we have applied it to several publicly available face databases.
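
As a rough illustration of the symmetry idea exploited above, one commonly writes the Lambertian image model together with bilateral-symmetry assumptions on the albedo and normal field. The equations below are a generic sketch of that setting, not the exact SSFS formulation.

```latex
% Generic sketch of the symmetric shape-from-shading setting (not the exact
% SSFS derivation): Lambertian image formation under a distant light s, with
% the albedo and normal field assumed bilaterally symmetric about x = 0.
\begin{align}
  I(x, y) &= \rho(x, y)\,\max\!\bigl(0,\ \mathbf{n}(x, y)\cdot\mathbf{s}\bigr),\\
  \rho(-x, y) &= \rho(x, y), \qquad
  \mathbf{n}(-x, y) = \bigl(-n_1(x, y),\ n_2(x, y),\ n_3(x, y)\bigr).
\end{align}
```

Outside shadowed regions, the pair of observations I(x, y) and I(-x, y) then jointly constrains shape and albedo, which is what makes an illumination-normalized prototype image computable from a single input.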

  • estimation of illuminant direction, albedo and Shape from Shading
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991
    Co-Authors: Q Zheng, Rama Chellappa
    Abstract:

    A robust approach to the recovery of Shape from Shading information is presented. Assuming uniform albedo and a Lambertian surface for the imaging model, two methods for estimating the azimuth of the illuminant are presented: one is based on local estimates on smooth patches, and the other uses Shading information along image contours. The elevation of the illuminant and the surface albedo are estimated from image statistics, taking into consideration the effect of self-shadowing. With the estimated reflectance map parameters, the authors then compute the surface Shape using a procedure that implements the smoothness constraint by requiring the gradients of the reconstructed intensity to be close to the gradients of the input image. The algorithm is data driven, stable, updates the surface slope and height maps simultaneously, and significantly reduces the residual errors in the irradiance and integrability terms. A hierarchical implementation of the algorithm is presented. Typical results on synthetic and real images are given to illustrate the usefulness of the approach.
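
For intuition about the azimuth estimate mentioned above, a common lightweight approach averages local shading-gradient directions over the image. The snippet below is a hedged sketch in that spirit, not the paper's exact estimator; `image` is simply assumed to be a 2-D grayscale array.

```python
# Hedged sketch (not the paper's exact estimator): estimate the illuminant
# azimuth by averaging local image-gradient directions, in the spirit of the
# "local estimates on smooth patches" approach described above.
import numpy as np

def estimate_illuminant_azimuth(image):
    """image: 2-D grayscale array; returns the azimuth in radians from the x-axis."""
    gy, gx = np.gradient(image.astype(float))      # row (y) and column (x) derivatives
    mag = np.hypot(gx, gy) + 1e-8                  # avoid division by zero
    # Average the unit gradient directions; for a Lambertian surface the mean
    # shading gradient tends to point toward the in-plane light direction.
    return np.arctan2((gy / mag).mean(), (gx / mag).mean())
```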

  • estimation of illuminant direction, albedo and Shape from Shading
    Computer Vision and Pattern Recognition, 1991
    Co-Authors: Q Zheng, Rama Chellappa
    Abstract:

    A robust approach to the recovery of Shape from Shading information is presented. Assuming uniform albedo and a Lambertian surface for the imaging model, methods are presented for the estimation of the illuminant direction and the surface albedo. The illuminant azimuth is estimated by averaging local estimates. The illuminant elevation and surface albedo are estimated from image statistics. Using the estimated reflectance map parameters, the surface Shape is computed using a procedure that implements the smoothness constraint by enforcing the gradients of the reconstructed intensity to be close to the gradients of the input image. Typical results on real images are given to illustrate the usefulness of this approach.
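
A schematic way to write the Shape-recovery step shared by both versions of this work is the variational problem below. The specific terms and weights are assumptions for illustration; R(p, q) is the standard Lambertian reflectance map in gradient space, with (p_s, q_s) the light direction.

```latex
% Schematic (assumed) form of the shape-recovery step described above: find the
% gradient field (p, q) and height z whose rendered intensity R(p, q) matches
% the image I, whose rendered gradients match the image gradients (smoothness),
% and which remain mutually consistent (integrability).
\begin{equation}
  \min_{p,\,q,\,z} \iint
      \bigl(I - R(p, q)\bigr)^{2}
    + \lambda \bigl\lVert \nabla R(p, q) - \nabla I \bigr\rVert^{2}
    + \mu \bigl[(z_{x} - p)^{2} + (z_{y} - q)^{2}\bigr]
    \,\mathrm{d}x\,\mathrm{d}y,
  \quad
  R(p, q) = \rho\,\frac{1 + p\,p_{s} + q\,q_{s}}
                       {\sqrt{1 + p^{2} + q^{2}}\,\sqrt{1 + p_{s}^{2} + q_{s}^{2}}}.
\end{equation}
```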

Ronen Basri - One of the best experts on this subject based on the ideXlab platform.

  • statistical symmetric Shape from Shading for 3d structure recovery of faces
    European Conference on Computer Vision, 2004
    Co-Authors: Roman Dovgard, Ronen Basri
    Abstract:

    In this paper, we aim to recover the 3D Shape of a human face from a single image. We combine the symmetric Shape-from-Shading approach of Zhao and Chellappa with the statistical approach to facial Shape reconstruction of Atick, Griffin and Redlich. Given a single frontal image of a human face under known directional illumination from the side, we represent the solution as a linear combination of basis Shapes and recover the coefficients using a symmetry constraint on the facial Shape and albedo. By solving a single least-squares system of equations, our algorithm provides a closed-form solution that satisfies both the symmetry and the statistical constraints as well as possible. Our procedure takes only a few seconds, accounts for varying facial albedo, and is simpler than previous methods. In the special case of a horizontal illuminant direction, our algorithm runs as fast as a matrix-vector multiplication.
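
The closed-form step described above boils down to one linear least-squares solve for the basis-shape coefficients. The snippet below is a hedged sketch of that structure, not the authors' code; the matrices A and b are placeholders standing in for the system assembled from the image, the symmetry constraint and the statistical prior.

```python
# Hedged sketch (not the authors' code) of the closed-form step described above:
# the facial surface is a linear combination of basis shapes, and the
# coefficients come from a single least-squares system. A and b are placeholders
# for the system assembled from image, symmetry and statistical constraints.
import numpy as np

n_equations, n_basis = 5000, 100
A = np.random.randn(n_equations, n_basis)   # placeholder: stacked constraint rows
b = np.random.randn(n_equations)            # placeholder: stacked right-hand sides

coeffs, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)   # closed-form solution
# The face would then be reconstructed as mean_shape + shape_basis @ coeffs for
# the actual statistical face model (omitted here).
```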

Shiu Yin Yuen - One of the best experts on this subject based on the ideXlab platform.

  • Recovering Shape by Shading and Stereo Under Lambertian Shading Model
    International Journal of Computer Vision, 2009
    Co-Authors: Chi Kin Chow, Shiu Yin Yuen
    Abstract:

    A method that integrates Shape from Shading and stereo is reported for Lambertian objects. A rectification is proposed to convert any lighting direction from oblique to orthographic. A sparse stereo method is reported that directly uses depth information and has no foreshortening problem. The method completely solves three difficult problems in stereo: recovering depth at occlusions, matching at places with similar Shading, and matching at smooth silhouettes. The method has been tested on both synthetic and real images and shows superior performance compared with two recent stereo algorithms. It is also a method based on the physics of image formation.
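
The shading cue exploited above follows the Lambertian model I = albedo * max(0, <n, l>), evaluated per pixel. The helper below is a minimal generic rendering sketch for that model, included for illustration; it is an assumption, not part of the paper's pipeline.

```python
# Minimal Lambertian rendering sketch (illustrative assumption, not the paper's
# pipeline): shading is the clamped dot product of unit normals with the light.
import numpy as np

def render_lambertian(normals, albedo, light):
    """normals: (H, W, 3) unit surface normals; albedo: (H, W); light: (3,) unit vector."""
    shading = np.clip(normals @ light, 0.0, None)   # clamp self-shadowed pixels to zero
    return albedo * shading
```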

Quéau Yvain - One of the best experts on this subject based on the ideXlab platform.

  • Photometric 3D-reconstruction
    HAL CCSD, 2021
    Co-Authors: Quéau Yvain
    Abstract:

    Photometric 3D-reconstruction techniques aim at inferring the geometry of a scene from one or several images by inverting a physical model describing the image formation. This talk will present an introductory overview of the main photometric 3D-reconstruction techniques, which are Shape-from-Shading (single image) and photometric stereo (multiple images acquired under varying illumination). These techniques are among the top-performing computer vision approaches for estimating fine-scale geometric details, as well as photometric surface properties (e.g., reflectance). The talk will cover theoretical aspects of the problem (well-posedness), numerical issues (solving with robust variational methods), and applications to cultural heritage and quality control.
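
As a concrete illustration of the multi-image technique surveyed above, the snippet below sketches classical calibrated photometric stereo for a Lambertian scene: with known distant light directions, albedo-scaled normals follow from a per-pixel linear least-squares solve. This is a generic textbook sketch, not material from the talk.

```python
# Generic sketch of calibrated Lambertian photometric stereo (textbook method,
# not material from the talk above): per pixel, I = L @ (albedo * n), so the
# albedo-scaled normals are recovered by least squares given the lights L.
import numpy as np

def photometric_stereo(images, lights):
    """images: (m, H, W) grayscale stack; lights: (m, 3) unit light directions."""
    m, H, W = images.shape
    I = images.reshape(m, -1)                            # (m, H*W) pixel intensities
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)       # (3, H*W) albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)                   # per-pixel albedo
    normals = (G / (albedo + 1e-8)).T.reshape(H, W, 3)   # unit normals
    return normals, albedo.reshape(H, W)
```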

  • A Comprehensive Introduction to Photometric 3D-reconstruction
    Springer Science and Business Media LLC, 2020
    Co-Authors: Durou Jean-denis, Quéau Yvain, Falcone Maurizio, Tozza Silvia
    Abstract:

    Photometric 3D-reconstruction techniques aim at inferring the geometry of a scene from one or several images by inverting a physical model describing the image formation. This chapter presents an introductory overview of the main photometric 3D-reconstruction techniques, which are Shape-from-Shading, photometric stereo and Shape-from-polarisation.
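
For a sense of why the single-image case is delicate, the standard textbook reduction below (an illustration, not taken from the chapter) shows Shape-from-Shading collapsing to an eikonal equation under orthographic projection, frontal lighting and unit albedo.

```latex
% Textbook illustration (not from the chapter above): under orthographic
% projection, frontal lighting and unit albedo, Lambertian shape-from-shading
% reduces to an eikonal equation in the depth z, which constrains only the
% magnitude of the depth gradient, hence the concave/convex ambiguity.
\begin{equation}
  I(x, y) = \frac{1}{\sqrt{1 + \lVert \nabla z(x, y) \rVert^{2}}}
  \quad\Longleftrightarrow\quad
  \lVert \nabla z(x, y) \rVert = \sqrt{\frac{1}{I(x, y)^{2}} - 1}.
\end{equation}
```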

  • Photometric Depth Super-Resolution
    Institute of Electrical and Electronics Engineers (IEEE), 2019
    Co-Authors: Haefner Bjoern, Quéau Yvain, Peng Songyou, Verma Alok, Cremers Daniel
    Abstract:

    This study explores the use of photometric techniques (Shape-from-Shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed; it requires no training or prior on the reflectance, yet this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios.
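
Evaluating a shading term on a candidate high-resolution depth map requires surface normals computed from depth. The helper below is a hedged, generic sketch of that step under an orthographic camera, not code from the paper.

```python
# Hedged, generic helper (not the paper's code): surface normals from an
# orthographic depth map, as needed to evaluate a Lambertian shading term on a
# candidate super-resolved depth map.
import numpy as np

def normals_from_depth(depth):
    """depth: (H, W) array; returns (H, W, 3) unit normals (orthographic camera)."""
    zy, zx = np.gradient(depth.astype(float))              # depth derivatives
    n = np.dstack([-zx, -zy, np.ones_like(zx)])            # unnormalised normals
    return n / np.linalg.norm(n, axis=2, keepdims=True)    # unit length
```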

  • Photometric Depth Super-Resolution
    2019
    Co-Authors: Haefner Bjoern, Quéau Yvain, Peng Songyou, Verma Alok, Cremers Daniel
    Abstract:

    This study explores the use of photometric techniques (Shape-from-Shading and uncalibrated photometric stereo) for upsampling the low-resolution depth map from an RGB-D sensor to the higher resolution of the companion RGB image. A single-shot variational approach is first put forward, which is effective as long as the target's reflectance is piecewise-constant. It is then shown that this dependency upon a specific reflectance model can be relaxed by focusing on a specific class of objects (e.g., faces) and delegating reflectance estimation to a deep neural network. A multi-shot strategy based on randomly varying lighting conditions is eventually discussed; it requires no training or prior on the reflectance, yet this comes at the price of a dedicated acquisition setup. Both quantitative and qualitative evaluations illustrate the effectiveness of the proposed methods on synthetic and real-world scenarios. (Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2019. First three authors contribute equally.)

  • Fight ill-posedness with ill-posedness: Single-shot variational depth super-resolution from Shading
    Institute of Electrical and Electronics Engineers (IEEE), 2018
    Co-Authors: Haefner Bjoern, Quéau Yvain, Moellenhoff Thomas, Cremers Daniel
    Abstract:

    We put forward a principled variational approach for up-sampling a single depth map to the resolution of the companion color image provided by an RGB-D sensor. We combine heterogeneous depth and color data in order to jointly solve the ill-posed depth super-resolution and Shape-from-Shading problems. The low-frequency geometric information necessary to disambiguate Shape-from-Shading is extracted from the low-resolution depth measurements and, symmetrically, the high-resolution photometric clues in the RGB image provide the high-frequency information required to disambiguate depth super-resolution.
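
One schematic way to write the coupling described above is the joint energy below, in which the super-resolved depth and the reflectance are estimated together. The specific terms, weights and regulariser are assumptions for illustration rather than the paper's exact functional.

```latex
% Schematic (assumed) joint energy for single-shot depth super-resolution from
% shading: K is the downsampling operator, z0 the low-resolution depth, I the
% high-resolution image, l the lighting, n(z) the normal field induced by z,
% and R a regulariser favouring piecewise-constant reflectance rho.
\begin{equation}
  \min_{z,\;\rho}\;
    \bigl\lVert K z - z_{0} \bigr\rVert^{2}
    + \lambda\,\bigl\lVert \rho\,\langle \mathbf{l},\, \mathbf{n}(z) \rangle - I \bigr\rVert^{2}
    + \mu\,\mathcal{R}(\rho).
\end{equation}
```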