Shape Modeling

The experts below are selected from a list of 91,206 experts worldwide, ranked by the ideXlab platform.

Lalande Alain - One of the best experts on this subject based on the ideXlab platform.

  • Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy
    'Springer Science and Business Media LLC', 2019
    Co-Authors: Girum, Kibrom Berihu, Créhange Gilles, Hussain Raabid, Walker, Paul Michael, Lalande Alain
    Abstract:

    Deep learning has shown unprecedented success in a variety of applications, such as computer vision and medical image analysis. However, there is still potential to improve segmentation in multimodal images by embedding prior knowledge via learning-based shape modeling and registration to learn the modality-invariant anatomical structure of organs. In radiotherapy, for example, automatic prostate segmentation is essential for prostate cancer diagnosis, therapy, and post-therapy assessment from T2-weighted MR or CT images. In this paper, we present a fully automatic deep generative model-driven multimodal prostate segmentation method using a convolutional neural network (DGMNet). The novelty of our method lies in its embedded generative neural network for learning-based shape modeling and its ability to adapt to different imaging modalities via learning-based registration. The proposed method includes a multi-task learning framework that combines convolutional feature extraction with embedded regression- and classification-based shape modeling. This enables the network to predict the deformable shape of an organ. We show that generative neural network-based shape modeling trained on a reliable-contrast imaging modality (such as MRI) can be directly applied to a low-contrast imaging modality (such as CT) to achieve accurate prostate segmentation. The method was evaluated on MRI and CT datasets acquired from different clinical centers with large variations in contrast and scanning protocols. Experimental results reveal that our method can be used to automatically and accurately segment the prostate gland in different imaging modalities.
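The multi-task idea the abstract describes (shared features feeding both a classification output and a shape-regression output) can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's DGMNet: all names, layer sizes, and the random-projection "encoder" are assumptions standing in for the real convolutional network.

```python
import numpy as np

# Hypothetical sketch of a multi-task head: a shared feature extractor
# feeds both a classification branch (organ present?) and a regression
# branch (continuous shape parameters). Shapes/names are illustrative.
rng = np.random.default_rng(0)

def shared_features(image):
    # Stand-in for the convolutional encoder: one random projection + ReLU.
    w = rng.standard_normal((image.size, 64))
    return np.maximum(image.reshape(-1) @ w, 0.0)

def classification_head(feat):
    # Sigmoid probability that the patch contains the organ.
    w = rng.standard_normal(64)
    return 1.0 / (1.0 + np.exp(-(feat @ w) / np.sqrt(64)))

def regression_head(feat, n_params=8):
    # Continuous shape parameters (e.g. coefficients of a boundary model).
    w = rng.standard_normal((64, n_params))
    return feat @ w / np.sqrt(64)

patch = rng.standard_normal((32, 32))
feat = shared_features(patch)
prob = classification_head(feat)      # scalar in (0, 1)
shape_params = regression_head(feat)  # vector of 8 shape parameters
```

In the paper's framework both branches are trained jointly, so the shared encoder is pushed to learn features useful for both tasks; the sketch only shows the forward structure.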


Yongmin Kim - One of the best experts on this subject based on the ideXlab platform.

  • Parametric Shape Modeling using deformable superellipses for prostate segmentation
    IEEE Transactions on Medical Imaging, 2004
    Co-Authors: Lixin Gong, Sayan Dev Pathak, David R Haynor, Paul S Cho, Yongmin Kim
    Abstract:

    Automatic prostate segmentation in ultrasound images is a challenging task due to speckle noise, missing boundary segments, and complex prostate anatomy. One popular approach has been the use of deformable models. For such techniques, prior knowledge of the prostate shape plays an important role in automating model initialization and constraining model evolution. In this paper, we have modeled the prostate shape using deformable superellipses. This model was fitted to 594 manual prostate contours outlined by five experts. We found that a superellipse with simple parametric deformations can efficiently model the prostate shape, with a Hausdorff distance error (model versus manual outline) of 1.32 ± 0.62 mm and a mean absolute distance error of 0.54 ± 0.20 mm. The variability between the manual outlines and their corresponding fitted deformable superellipses was significantly less than the variability between human experts (p < 0.0001). Based on this deformable superellipse model, we have developed an efficient and robust Bayesian segmentation algorithm. The algorithm was applied to 125 prostate ultrasound images collected from 16 patients. The mean error between the computer-generated boundaries and the manual outlines was 1.36 ± 0.58 mm, significantly less than the manual interobserver distances. The algorithm was also shown to be fairly insensitive to the choice of the initial curve.
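The base shape model here is the superellipse, the curve satisfying |x/a|^n + |y/b|^n = 1. A minimal sketch of its parameterization, with one simple deformation added, is below; the parameter values and the linear tapering are illustrative and are not the specific deformations used in the paper.

```python
import numpy as np

# Parametric superellipse |x/a|^n + |y/b|^n = 1, sampled via the
# signed-power parameterization; `taper` adds one example deformation.
def superellipse(a=2.0, b=1.5, n=2.5, taper=0.0, num=200):
    """Return (num, 2) boundary points of an optionally tapered superellipse."""
    t = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    # With x = a*sign(cos t)*|cos t|^(2/n), |x/a|^n = |cos t|^2, so the
    # implicit equation holds exactly for every t.
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    # Simple linear tapering along y: one illustrative parametric deformation.
    x = x * (1.0 + taper * y / b)
    return np.column_stack([x, y])

pts = superellipse()  # undeformed (taper=0), so the implicit equation holds
residual = np.abs(
    np.abs(pts[:, 0] / 2.0) ** 2.5 + np.abs(pts[:, 1] / 1.5) ** 2.5 - 1.0
)
```

The squareness exponent n interpolates between an ellipse (n = 2) and increasingly rectangular shapes as n grows, which is what lets a handful of parameters capture the prostate's rounded-but-flattened cross-section.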

Girum, Kibrom Berihu - One of the best experts on this subject based on the ideXlab platform.

  • Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy
    'Springer Science and Business Media LLC', 2019
    Co-Authors: Girum, Kibrom Berihu, Créhange Gilles, Hussain Raabid, Walker, Paul Michael, Lalande Alain
    Abstract:

    Deep learning has shown unprecedented success in a variety of applications, such as computer vision and medical image analysis. However, there is still potential to improve segmentation in multimodal images by embedding prior knowledge via learning-based shape modeling and registration to learn the modality-invariant anatomical structure of organs. In radiotherapy, for example, automatic prostate segmentation is essential for prostate cancer diagnosis, therapy, and post-therapy assessment from T2-weighted MR or CT images. In this paper, we present a fully automatic deep generative model-driven multimodal prostate segmentation method using a convolutional neural network (DGMNet). The novelty of our method lies in its embedded generative neural network for learning-based shape modeling and its ability to adapt to different imaging modalities via learning-based registration. The proposed method includes a multi-task learning framework that combines convolutional feature extraction with embedded regression- and classification-based shape modeling. This enables the network to predict the deformable shape of an organ. We show that generative neural network-based shape modeling trained on a reliable-contrast imaging modality (such as MRI) can be directly applied to a low-contrast imaging modality (such as CT) to achieve accurate prostate segmentation. The method was evaluated on MRI and CT datasets acquired from different clinical centers with large variations in contrast and scanning protocols. Experimental results reveal that our method can be used to automatically and accurately segment the prostate gland in different imaging modalities.


Lixin Gong - One of the best experts on this subject based on the ideXlab platform.

  • Parametric Shape Modeling using deformable superellipses for prostate segmentation
    IEEE Transactions on Medical Imaging, 2004
    Co-Authors: Lixin Gong, Sayan Dev Pathak, David R Haynor, Paul S Cho, Yongmin Kim
    Abstract:

    Automatic prostate segmentation in ultrasound images is a challenging task due to speckle noise, missing boundary segments, and complex prostate anatomy. One popular approach has been the use of deformable models. For such techniques, prior knowledge of the prostate shape plays an important role in automating model initialization and constraining model evolution. In this paper, we have modeled the prostate shape using deformable superellipses. This model was fitted to 594 manual prostate contours outlined by five experts. We found that a superellipse with simple parametric deformations can efficiently model the prostate shape, with a Hausdorff distance error (model versus manual outline) of 1.32 ± 0.62 mm and a mean absolute distance error of 0.54 ± 0.20 mm. The variability between the manual outlines and their corresponding fitted deformable superellipses was significantly less than the variability between human experts (p < 0.0001). Based on this deformable superellipse model, we have developed an efficient and robust Bayesian segmentation algorithm. The algorithm was applied to 125 prostate ultrasound images collected from 16 patients. The mean error between the computer-generated boundaries and the manual outlines was 1.36 ± 0.58 mm, significantly less than the manual interobserver distances. The algorithm was also shown to be fairly insensitive to the choice of the initial curve.

D'hooge Jan - One of the best experts on this subject based on the ideXlab platform.

  • Statistical Shape Modeling of the left ventricle: myocardial infarct classification challenge
    'Institute of Electrical and Electronics Engineers (IEEE)', 2018
    Co-Authors: Suinesiaputra Avan, Ablin Pierre, Alba Xenia, Alessandrini Martino, Allen Jack, Bai Wenjia, Cimen Serkan, Claes Peter, Cowan Brett, D'hooge Jan
    Abstract:

    Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI for a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results, with accuracies ranging from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website.
    Citation: Suinesiaputra A., Ablin P., Alba X., Alessandrini M., Allen J., Bai W., Cimen S., Claes P., Cowan B.R., D'hooge J., Duchateau N., Ehrhardt J., Frangi A.F., Gooya A., Grau V., Lekadir K., Lu A., Mukhopadhyay A., Oksuz I., Parajuli N., Pennec X., Pereanez M., Pinto C., Piras P., Rohé M.-M., Rueckert D., Säring D., Sermesant M., Siddiqi K., Tabassian M., Teresi L., Tsaftaris S.A., Wilms M., Young A.A., Zhang X., Medrano-Gracia P., "Statistical Shape Modeling of the left ventricle: myocardial infarct classification challenge", IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 2, pp. 503-515, March 2018.
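The challenge's outcome measures are standard binary-classification metrics, and they can be computed directly from submitted MI likelihoods. The sketch below uses a tiny made-up example (eight cases, illustrative scores); the AUC is computed via the Mann-Whitney formulation, i.e. the probability that a random MI case scores higher than a random AV case.

```python
import numpy as np

# Sensitivity, specificity, accuracy, and AUC for binary MI predictions.
def challenge_metrics(labels, scores, threshold=0.5):
    labels = np.asarray(labels, dtype=bool)   # True = MI, False = AV
    scores = np.asarray(scores, dtype=float)  # submitted likelihood of MI
    pred = scores >= threshold
    tp = np.sum(pred & labels)
    tn = np.sum(~pred & ~labels)
    sensitivity = tp / labels.sum()
    specificity = tn / (~labels).sum()
    accuracy = (tp + tn) / labels.size
    # AUC via the Mann-Whitney U statistic: fraction of (MI, AV) pairs
    # where the MI case scores higher (ties count one half).
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(pos) * len(neg))
    return sensitivity, specificity, accuracy, auc

labels = [1, 1, 1, 1, 0, 0, 0, 0]                  # 4 MI, 4 AV (toy data)
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]  # illustrative likelihoods
sens, spec, acc, auc = challenge_metrics(labels, scores)
```

Unlike sensitivity, specificity, and accuracy, the AUC is threshold-free, which is why the challenge reports it alongside the thresholded measures.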
