Pixel Intensity

The Experts below are selected from a list of 27,147 Experts worldwide, ranked by the ideXlab platform.

Thomas Vetter - One of the best experts on this subject based on the ideXlab platform.

  • Estimating 3D shape and texture using Pixel Intensity, edges, specular highlights, texture constraints and a prior
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005
    Co-Authors: Sami Romdhani, Thomas Vetter
    Abstract:

    We present a novel algorithm that estimates the 3D shape and texture of a human face, along with the 3D pose and the light direction, from a single photograph by recovering the parameters of a 3D morphable model. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the Pixel Intensity as input to drive the estimation process. This was previously achieved either by using a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm, or by using a more precise model and minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm; however, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, in addition to the Pixel Intensity, we use various image features such as the edges and the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the multi-features fitting algorithm, which has a wider radius of convergence and a higher level of precision. This is demonstrated on example photographs and on a recognition experiment performed on the CMU-PIE image database.

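A minimal sketch of the multi-feature fitting idea described in the abstract above, assuming the 3D shape, texture and imaging parameters are stacked into a single vector theta: the negative log-posterior is written as a weighted sum of per-feature error terms plus a Gaussian prior term, so that any smooth optimizer (for example scipy.optimize.minimize with L-BFGS-B) can be used instead of a stochastic one. The render_* and extract_features callables are hypothetical placeholders, not the authors' published code.

```python
import numpy as np

def multi_feature_cost(theta, image, weights,
                       render_intensity, render_edges, render_speculars,
                       extract_features):
    """Weighted sum of per-feature error terms plus a prior on the model coefficients."""
    obs_intensity, obs_edges, obs_speculars = extract_features(image)
    cost = weights["intensity"] * np.sum((render_intensity(theta) - obs_intensity) ** 2)
    cost += weights["edges"] * np.sum((render_edges(theta) - obs_edges) ** 2)
    cost += weights["speculars"] * np.sum((render_speculars(theta) - obs_speculars) ** 2)
    # Gaussian prior on the morphable-model coefficients (unit variance assumed)
    cost += weights["prior"] * np.sum(theta ** 2)
    return cost
```

Because every term is a smooth function of theta, the combined cost is easier to descend than an intensity-only cost with many local minima, which is the property the abstract attributes to the multi-features fitting algorithm.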

Sami Romdhani - One of the best experts on this subject based on the ideXlab platform.

  • Estimating 3D shape and texture using Pixel Intensity, edges, specular highlights, texture constraints and a prior
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005
    Co-Authors: Sami Romdhani, Thomas Vetter
    Abstract:

    We present a novel algorithm that estimates the 3D shape and texture of a human face, along with the 3D pose and the light direction, from a single photograph by recovering the parameters of a 3D morphable model. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the Pixel Intensity as input to drive the estimation process. This was previously achieved either by using a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm, or by using a more precise model and minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm; however, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, in addition to the Pixel Intensity, we use various image features such as the edges and the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the multi-features fitting algorithm, which has a wider radius of convergence and a higher level of precision. This is demonstrated on example photographs and on a recognition experiment performed on the CMU-PIE image database.

Xiaodong Cai - One of the best experts on this subject based on the ideXlab platform.

  • Robust online video background reconstruction using optical flow and Pixel Intensity distribution
    IEEE International Conference on Communications (ICC), 2008
    Co-Authors: Xiaodong Cai
    Abstract:

    Obtaining a dynamically updated background reference image is an important and challenging task for video applications using background subtraction. This paper proposes a novel algorithm for online video background reconstruction. First, multiple candidate background values at each Pixel are obtained by locating subintervals of stable Intensity within a processing period. Then, criteria based on the Pixel Intensity distribution and local optical flow are employed to select the candidate most likely to represent the background. When using the distribution of Intensity values, the background value at a Pixel position is chosen based on the observation that the appearance time and sub-period frequency of the background are higher than those of the non-background. An enhanced method using neighborhood optical flow information is adopted for a more precise decision, at slight additional computational cost, by identifying the events of covering and revealing at a Pixel position. The experimental results show that the proposed algorithm outperforms the existing adaptive Gaussian mixture background model and provides robust, efficient background image reconstruction in complex and busy environments.

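A minimal, unoptimized sketch of the candidate-selection step described above, assuming a greyscale frame stack: each pixel's time series is split into runs of near-constant intensity, and the run observed for the longest time supplies the background value. The optical-flow refinement and the paper's exact stability criterion are omitted; the diff_thresh value is an illustrative assumption.

```python
import numpy as np

def background_from_stability(frames, diff_thresh=10):
    """frames: (T, H, W) greyscale stack -> (H, W) background estimate."""
    T, H, W = frames.shape
    background = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            series = frames[:, y, x].astype(np.int16)
            # Split the time series into runs of near-constant Intensity
            # (each run is one candidate background value).
            candidates = []  # (run length, mean Intensity)
            start = 0
            for t in range(1, T + 1):
                if t == T or abs(int(series[t]) - int(series[t - 1])) > diff_thresh:
                    run = series[start:t]
                    candidates.append((len(run), float(run.mean())))
                    start = t
            # Choose the candidate observed for the longest time, following the
            # appearance-time observation in the abstract.
            _, value = max(candidates, key=lambda c: c[0])
            background[y, x] = np.uint8(round(value))
    return background
```

In the paper the decision is further refined with neighborhood optical flow to detect covering and revealing events; only the intensity-stability part is sketched here.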

Isabelle Truyers - One of the best experts on this subject based on the ideXlab platform.

  • The value of trans-scrotal ultrasonography at bull breeding soundness evaluation (BBSE): the relationship between testicular parenchymal Pixel Intensity and semen quality
    Theriogenology, 2017
    Co-Authors: Martin Tomlinson, Amy Jennings, Alastair Macrae, Isabelle Truyers
    Abstract:

    Bull breeding soundness evaluation (BBSE) is commonly undertaken to identify bulls that are potentially unfit for use as breeding sires. Various studies worldwide have found that approximately 20% of bulls fail their routine prebreeding BBSE and are therefore considered subfertile. Multiple articles describe the use of testicular ultrasound as a noninvasive aid in the identification of specific testicular and epididymal lesions. Two previous studies have hypothesized a correlation between ultrasonographic testicular parenchymal Pixel Intensity (PI) and semen quality; however, to date, no published studies have specifically examined this link. The aim of this study, therefore, was to assess the relationship between testicular parenchymal PI (measured using trans-scrotal ultrasonography) and semen quality (measured at BBSE), and the usefulness of testicular ultrasonography as an aid in predicting future fertility in bulls, in particular those deemed subfertile at the first examination. A total of 162 bulls from 35 farms in the South East of Scotland were submitted to routine BBSE and testicular ultrasonography between March and May 2014, and March and May 2015. Thirty-three animals failed their initial examination (BBSE1) due to poor semen quality and were re-examined (BBSE2) 6 to 8 weeks later. Computer-aided image analysis and gross visual lesion scoring were performed on all ultrasonograms, and the results were compared to semen quality at BBSE1 and BBSE2. The PI measurements were practical and repeatable in a field setting, and although the results of this study did not highlight any biological correlation between semen quality at BBSE1 or BBSE2 and testicular PI, they did show that gross visual lesion scoring of testicular images is comparable to computer analysis of PI (P < 0.001) in identifying animals suffering from gross testicular fibrosis.

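A minimal sketch of the kind of computer-aided pixel-intensity (PI) analysis the study refers to: the mean grey-level of a region of interest placed over the testicular parenchyma in the ultrasonogram. The ROI format and the idea of also reporting the standard deviation as a heterogeneity measure are illustrative assumptions, not the study's published protocol.

```python
import numpy as np

def parenchymal_pixel_intensity(ultrasonogram, roi):
    """ultrasonogram: 2D grey-level array; roi: (row0, row1, col0, col1) slice bounds."""
    r0, r1, c0, c1 = roi
    patch = np.asarray(ultrasonogram, dtype=np.float64)[r0:r1, c0:c1]
    return patch.mean(), patch.std()  # mean PI and parenchymal heterogeneity
```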

Isabelle Bloch - One of the best experts on this subject based on the ideXlab platform.

  • Segmentation of fetal envelope from 3D ultrasound images based on Pixel Intensity statistical distribution and shape priors
    IEEE International Symposium on Biomedical Imaging (ISBI), 2013
    Co-Authors: Sonia Dahdouh, Antoine Serrurier, Gilles Grange, Elsa D Angelini, Isabelle Bloch
    Abstract:

    This paper presents a novel shape-guided variational segmentation method for extracting the fetal envelope from 3D obstetric ultrasound images. Due to the inherently low quality of these images, classical segmentation methods tend to fail on such data. To compensate for the lack of contrast and of explicit boundaries, we introduce a segmentation framework that combines three different types of information: the Pixel Intensity distribution, a shape prior on the fetal envelope and a back model varying with fetal age. The Intensity distributions, different for each tissue, and the shape prior, encoded with Legendre moments, are added as energy terms in the functional to be optimized. The back model is used in a post-processing step. Results on 3D ultrasound data are presented and compared to a set of manual segmentations. Both visual and quantitative comparisons show that the method obtains satisfactory results on the tested data.

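A minimal sketch of the two energy terms named in the abstract above: a data term built from per-tissue Intensity log-likelihoods and a shape term comparing Legendre moments of a candidate mask with those of a prior shape. The log_p_fg / log_p_bg callables, the moment order and the weight lam are illustrative assumptions; the actual functional and its optimization scheme are in the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(mask, order=4):
    """Low-order Legendre moments of a binary mask sampled on [-1, 1]^2."""
    h, w = mask.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    moments = np.empty((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            Lp = legendre.legval(y, np.eye(order + 1)[p])  # P_p sampled along rows
            Lq = legendre.legval(x, np.eye(order + 1)[q])  # P_q sampled along columns
            moments[p, q] = np.sum(mask * np.outer(Lp, Lq))
    return moments / max(mask.sum(), 1)

def segmentation_energy(mask, image, log_p_fg, log_p_bg, prior_moments, lam=1.0):
    """Intensity data term (negative log-likelihood) plus Legendre shape-prior term."""
    data = -np.sum(mask * log_p_fg(image) + (1 - mask) * log_p_bg(image))
    shape = np.sum((legendre_moments(mask) - prior_moments) ** 2)
    return data + lam * shape
```

The back model mentioned in the abstract is applied as a separate post-processing step and is not represented in this sketch.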