Segmentation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 476,295 Experts worldwide ranked by the ideXlab platform

Michal Byra - One of the best experts on this subject based on the ideXlab platform.

  • Knee menisci Segmentation and relaxometry of 3D ultrashort echo time cones MR imaging using attention U-Net with transfer learning
    Magnetic Resonance in Medicine, 2020
    Co-Authors: Michal Byra, Xiaodong Zhang, Hyungseok Jang, Eric Y Chang, Sameer B Shah
    Abstract:

    Author(s): Byra, Michal; Wu, Mei; Zhang, Xiaodong; Jang, Hyungseok; Ma, Ya-Jun; Chang, Eric Y; Shah, Sameer; Du, Jiang

    Purpose: To develop a deep learning-based method for knee menisci Segmentation in 3D ultrashort echo time (UTE) cones MR imaging, and to automatically determine MR relaxation times, namely the T1, T1ρ, and T2* parameters, which can be used to assess knee osteoarthritis (OA).

    Methods: Whole knee joint imaging was performed using 3D UTE cones sequences to collect data from 61 human subjects. Regions of interest (ROIs) were outlined by two experienced radiologists based on subtracted T1ρ-weighted MR images. Transfer learning was applied to develop 2D attention U-Net convolutional neural networks for menisci Segmentation based on each radiologist's ROIs separately. Dice scores were calculated to assess Segmentation performance. Next, the T1, T1ρ, and T2* relaxation times and ROI areas were determined for the manual and automatic Segmentations, then compared.

    Results: The models developed using the ROIs provided by the two radiologists achieved high Dice scores of 0.860 and 0.833, while the radiologists' manual Segmentations achieved a Dice score of 0.820 against each other. Linear correlation coefficients for the T1, T1ρ, and T2* relaxation times calculated from the automatic and manual Segmentations ranged between 0.90 and 0.97, and there were no significant differences between the estimated average meniscal relaxation parameters. The deep learning models achieved Segmentation performance equivalent to the inter-observer variability of the two radiologists.

    Conclusion: The proposed deep learning-based approach can be used to efficiently generate automatic Segmentations and determine meniscal relaxation times. The method has the potential to help radiologists with the assessment of meniscal diseases, such as OA.
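
The Dice scores reported in this abstract measure the overlap between an automatic mask and a radiologist's ROI. A minimal NumPy sketch of the metric (the toy masks below are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 squares on an 8x8 grid (9 shared pixels)
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice_score(a, b))  # 2*9 / (16+16) = 0.5625
```

A score of 1.0 means perfect overlap, so the 0.860/0.833 scores above indicate close agreement with the manual ROIs.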


Nay Aung - One of the best experts on this subject based on the ideXlab platform.

  • Automated quality control in image Segmentation: application to the UK Biobank cardiovascular magnetic resonance imaging study
    Journal of Cardiovascular Magnetic Resonance, 2019
    Co-Authors: Robert Robinson, Ozan Oktay, Wenjia Bai, Vanya V Valindria, Mihir M Sanghvi, Nay Aung, Bernhard Kainz, Hideaki Suzuki
    Abstract:

    The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools such as image Segmentation methods are employed to derive quantitative measures or biomarkers for further analyses. Manual inspection and visual QC of each Segmentation result is not feasible at large scale. However, it is important to be able to automatically detect when a Segmentation method fails, in order to avoid the inclusion of wrong measurements in subsequent analyses, which could otherwise lead to incorrect conclusions. To overcome this challenge, we explore an approach for predicting Segmentation quality based on Reverse Classification Accuracy (RCA), which enables us to discriminate between successful and failed Segmentations on a per-case basis. We validate this approach on a new, large-scale, manually annotated set of 4,800 cardiovascular magnetic resonance (CMR) scans. We then apply our method to a large cohort of 7,250 CMR scans on which we have performed manual QC. We report results for predicting Segmentation quality metrics including the Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy in classifying low- and high-quality Segmentations using the predicted DSC scores. As further validation, we show high correlation between real and predicted scores and 95% classification accuracy on the 4,800 scans for which manual Segmentations were available. We mimic real-world application of the method on the 7,250 CMR scans, where we show good agreement between predicted quality metrics and manual visual QC scores. We show that RCA has the potential for accurate and fully automatic Segmentation QC on a per-case basis in the context of large-scale population imaging, as in the UK Biobank Imaging Study.

  • Automated quality control in image Segmentation: application to the UK Biobank cardiac MR imaging study
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Robert Robinson, Ozan Oktay, Wenjia Bai, Vanya V Valindria, Mihir M Sanghvi, Nay Aung, Jose Miguel Paiva, Bernhard Kainz, Hideaki Suzuki, Filip Zemrak
    Abstract:

    Background: The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools, e.g. image Segmentation methods, are employed to derive quantitative measures or biomarkers for later analyses. Manual inspection and visual QC of each Segmentation is not feasible at large scale. However, it is important to be able to automatically detect when a Segmentation method fails, so as to avoid the inclusion of wrong measurements in subsequent analyses, which could lead to incorrect conclusions.

    Methods: To overcome this challenge, we explore an approach for predicting Segmentation quality based on Reverse Classification Accuracy (RCA), which enables us to discriminate between successful and failed Segmentations on a per-case basis. We validate this approach on a new, large-scale, manually annotated set of 4,800 cardiac magnetic resonance scans. We then apply our method to a large cohort of 7,250 cardiac MRI scans on which we have performed manual QC.

    Results: We report results for predicting Segmentation quality metrics including the Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy in classifying low- and high-quality Segmentations using predicted DSC scores. As further validation, we show high correlation between real and predicted scores and 95% classification accuracy on the 4,800 scans for which manual Segmentations were available. We mimic real-world application of the method on the 7,250 cardiac MRI scans, where we show good agreement between predicted quality metrics and manual visual QC scores.

    Conclusions: We show that RCA has the potential for accurate and fully automatic Segmentation QC on a per-case basis in the context of large-scale population imaging, as in the UK Biobank Imaging Study.
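
The RCA pipeline itself trains a reverse classifier per case to estimate quality; the downstream QC decision then reduces to thresholding a predicted DSC and comparing against manual pass/fail labels. A hedged sketch of that final step (the threshold, scores, and labels below are invented for illustration, not values from the study):

```python
import numpy as np

def classify_quality(predicted_dsc, threshold=0.70):
    """Accept Segmentations whose predicted DSC clears a quality threshold."""
    return np.asarray(predicted_dsc, dtype=float) >= threshold

def qc_accuracy(predicted_dsc, manual_pass, threshold=0.70):
    """Fraction of cases where threshold-based QC matches manual pass/fail QC."""
    auto_pass = classify_quality(predicted_dsc, threshold)
    return float(np.mean(auto_pass == np.asarray(manual_pass, dtype=bool)))

# Hypothetical predicted DSC scores and manual QC outcomes for six scans
pred = [0.91, 0.88, 0.42, 0.76, 0.15, 0.83]
manual = [True, True, False, True, False, True]
print(qc_accuracy(pred, manual))  # 1.0 on this toy data
```

In practice the threshold would be tuned on a validation set with known manual QC outcomes, as the abstract's 4,800-scan validation suggests.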

Elin Tragardh - One of the best experts on this subject based on the ideXlab platform.

  • Deep learning for Segmentation of 49 selected bones in CT scans: first step in automated PET/CT-based 3D quantification of skeletal metastases
    European Journal of Radiology, 2019
    Co-Authors: Sarah Lindgren Belal, May Sadik, Reza Kaboteh, Olof Enqvist, Johannes Ulen, Mads Hvid Poulsen, Jane Angel Simonsen, Poul Flemming Hoilundcarlsen, Lars Edenbrandt, Elin Tragardh
    Abstract:

    Purpose: The aim of this study was to develop a deep learning-based method for Segmentation of bones in CT scans and to test its accuracy against manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden.

    Methods: Convolutional neural networks (CNNs) were trained to segment 49 bones using manual Segmentations from 100 CT scans. After training, the CNN-based Segmentation method was tested on 46 patients with prostate cancer, who had undergone 18F-choline PET/CT and 18F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the Segmentations. The network's performance was compared with manual Segmentations of five bones made by an experienced physician. The spatial overlap between the automated CNN-based and manual Segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated by applying the Bland-Altman method.

    Results: The median (SD) volumes of the five selected bones obtained by CNN-based and manual Segmentation, respectively, were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), and sternum 80 (11) and 72 (9.2). Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib) and 0.83 (sternum). The intraobserver volume difference was smaller with the CNN-based than with the manual approach: Th7 2% vs. 14%, L3 7% vs. 8%, sacrum 1% vs. 3%, 7th rib 1% vs. 6%, and sternum 3% vs. 5%. The average volume difference, measured as the ratio of volume difference to mean volume between the two CNN-based Segmentations, was 5-6% for the vertebral column and ribs and ≤3% for the other bones.

    Conclusion: The new deep learning-based method for automated Segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way and thus appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal Segmentation as a key part of the quantification of skeletal metastases.
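
The Bland-Altman reproducibility analysis mentioned above summarizes paired measurements by their mean difference (bias) and 95% limits of agreement. A sketch of the standard formulation, using the paired median volumes quoted in the abstract purely as illustrative input:

```python
import numpy as np

def bland_altman(vol_a, vol_b):
    """Bland-Altman statistics: bias (mean difference) and 95% limits of agreement."""
    diff = np.asarray(vol_a, dtype=float) - np.asarray(vol_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Paired median bone volumes (CNN vs. manual) for the five bones in the abstract
cnn    = [41.0, 76.0, 284.0, 33.0, 80.0]   # Th7, L3, sacrum, 7th rib, sternum
manual = [36.0, 75.0, 283.0, 31.0, 72.0]
bias, lower, upper = bland_altman(cnn, manual)
print(bias)  # 3.4: CNN volumes slightly larger on average in this toy pairing
```

Measurements falling within the limits of agreement are considered consistent between the two methods; the paper's full analysis works on per-patient volumes rather than the summary medians used here.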

Martial Hebert - One of the best experts on this subject based on the ideXlab platform.

  • Estimating object region from local contour configuration
    Computer Vision and Pattern Recognition, 2009
    Co-Authors: Tetsuaki Suzuki, Martial Hebert
    Abstract:

    In this paper, we explore ways to combine boundary information and region Segmentation to estimate regions corresponding to foreground objects. Boundary information is used to generate an object likelihood image, which encodes the likelihood that each pixel belongs to a foreground object. This is done by combining evidence gathered from a large number of boundary fragments on training images, exploiting the relation between local boundary shape and the relative location of the corresponding object region in the image. Region Segmentation is used to select, from a set of multiple Segmentations, a likely Segmentation that is consistent with the boundary fragments; a mutual information criterion drives this selection. Object likelihood and region Segmentation are then combined to yield the final proposed object region(s).
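
The selection step described above can be sketched as scoring each candidate Segmentation by its mutual information with a binarized object-likelihood map and keeping the best one. This is a simplified stand-in for the paper's criterion; the 0.5 threshold and the toy maps are invented for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=2):
    """Mutual information (in nats) between two discrete label maps."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_segmentation(likelihood, segmentations, threshold=0.5):
    """Pick the candidate Segmentation most consistent with the likelihood image."""
    binary = (likelihood > threshold).astype(int)
    scores = [mutual_information(binary, np.asarray(s).astype(int))
              for s in segmentations]
    return int(np.argmax(scores))

# Toy example: the likelihood map matches the first candidate's foreground
likelihood = np.zeros((6, 6)); likelihood[1:4, 1:4] = 0.9
seg_good = np.zeros((6, 6), dtype=int); seg_good[1:4, 1:4] = 1
seg_bad = np.zeros((6, 6), dtype=int); seg_bad[3:6, 3:6] = 1
print(select_segmentation(likelihood, [seg_good, seg_bad]))  # 0
```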

  • Object recognition by integrating multiple image Segmentations
    European Conference on Computer Vision, 2008
    Co-Authors: Caroline Pantofaru, Cordelia Schmid, Martial Hebert
    Abstract:

    The joint tasks of object recognition and object Segmentation from a single image are complex: they require not only correct classification, but also a decision about exactly which pixels belong to the object. Exploring all possible pixel subsets is prohibitively expensive, leading to recent approaches which use unsupervised image Segmentation to reduce the size of the configuration space. Image Segmentation, however, is known to be unstable, strongly affected by small image perturbations, feature choices, or different Segmentation algorithms. This instability has led to advocacy for using multiple Segmentations of an image. In this paper, we explore the question of how to best integrate the information from multiple bottom-up Segmentations of an image to improve object recognition robustness. By integrating the image partition hypotheses in an intuitive combined top-down and bottom-up recognition approach, we improve object and feature support. We further explore possible extensions of our method and whether they provide improved performance. Results are presented on the MSRC 21-class data set and the PASCAL VOC 2007 object Segmentation challenge.
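
The simplest way to integrate multiple bottom-up Segmentations is per-pixel voting over object membership; the paper's combined top-down/bottom-up approach is considerably richer, but a majority-vote sketch conveys the basic idea (the masks below are invented):

```python
import numpy as np

def integrate_segmentations(masks):
    """Combine binary object masks from several Segmentations by majority vote."""
    votes = np.stack([np.asarray(m, dtype=int) for m in masks]).sum(axis=0)
    return votes > (len(masks) / 2)  # True where a strict majority agrees

# Three hypothetical object masks from different Segmentation runs
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
print(integrate_segmentations([m1, m2, m3]).astype(int))
# [[1 1 0]
#  [0 1 1]]
```

Pixels supported by most Segmentations survive, which dampens the instability of any single partition.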

  • Toward objective evaluation of image Segmentation algorithms
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007
    Co-Authors: Ranjith Unnikrishnan, Caroline Pantofaru, Martial Hebert
    Abstract:

    Unsupervised image Segmentation is an important component in many image understanding algorithms and practical vision systems. However, evaluation of Segmentation algorithms has thus far been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely due to image Segmentation being an ill-defined problem: there is no unique ground-truth Segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the Normalized Probabilistic Rand (NPR) index, can be used to perform a quantitative comparison between image Segmentation algorithms using a hand-labeled set of ground-truth Segmentations. We show that the measure allows principled comparisons between Segmentations created by different algorithms, as well as Segmentations of different images. We outline a procedure for algorithm evaluation through an example evaluation of some familiar algorithms: the mean-shift-based algorithm, an efficient graph-based Segmentation algorithm, a hybrid algorithm that combines the strengths of both methods, and expectation maximization. Results are presented on the 300 images in the publicly available Berkeley Segmentation data set.
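
The NPR index builds on the classic Rand index, which counts the pixel pairs on which two Segmentations agree about belonging to the same region; the NPR additionally normalizes this score and weights it across multiple ground truths. A sketch of the underlying (unnormalized) Rand index only, on tiny label vectors for clarity:

```python
from itertools import combinations

import numpy as np

def rand_index(seg_a, seg_b):
    """Rand index: fraction of pixel pairs grouped consistently by two labelings."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    agree = 0
    pairs = list(combinations(range(a.size), 2))
    for i, j in pairs:
        # Agreement: both labelings put the pair together, or both split it
        if (a[i] == a[j]) == (b[i] == b[j]):
            agree += 1
    return agree / len(pairs)

# Two 4-pixel labelings that disagree on one of the six pixel pairs
seg1 = [0, 0, 1, 1]
seg2 = [0, 0, 1, 2]
print(rand_index(seg1, seg2))  # 5/6 agreement
```

Note that the index is invariant to label permutation, which is what makes comparisons across different algorithms meaningful; the O(n²) pair loop here is for exposition, and practical implementations use contingency-table counts instead.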