Labeled Example

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 19,668 experts worldwide, ranked by the ideXlab platform.

Zhihua Zhou - One of the best experts on this subject based on the ideXlab platform.

  • Tri-Training: Exploiting Unlabeled Data Using Three Classifiers
    IEEE Transactions on Knowledge and Data Engineering, 2005
    Co-Authors: Zhihua Zhou
    Abstract:

    In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. Specifically, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor puts any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and an application to Web page classification indicate that tri-training can effectively exploit unlabeled data to enhance learning performance.
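
The core loop described in the abstract can be sketched with toy one-dimensional threshold classifiers. This is a hypothetical simplification, not the paper's full algorithm: the real tri-training also estimates each classifier's error rate to decide when pseudo-labels are safe to accept, which this sketch omits.

```python
import random

def train_stump(data):
    """Fit a 1-D threshold classifier (predict 1 iff x >= threshold)
    by scanning candidate thresholds for the best training accuracy.
    `data` is a list of (x, y) pairs with y in {0, 1}."""
    candidates = sorted(x for x, _ in data)
    candidates.append(candidates[-1] + 1.0)  # allows an "always predict 0" stump
    best_thr, best_acc = candidates[0], -1.0
    for thr in candidates:
        acc = sum((x >= thr) == (y == 1) for x, y in data) / len(data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def predict(thr, x):
    return 1 if x >= thr else 0

def tri_train(labeled, unlabeled, rounds=3, seed=0):
    """Each classifier starts from a bootstrap sample of the labeled set;
    in every round, an unlabeled x is pseudo-labeled for classifier i
    whenever the other two classifiers agree on its label."""
    rng = random.Random(seed)
    # Bootstrap-sample, then append one example of each class so no
    # pool degenerates to a single class (a toy-scale safeguard).
    base = [rng.choices(labeled, k=len(labeled)) + [labeled[0], labeled[-1]]
            for _ in range(3)]
    thrs = [train_stump(b) for b in base]
    for _ in range(rounds):
        new_thrs = []
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            agreed = [(x, predict(thrs[j], x)) for x in unlabeled
                      if predict(thrs[j], x) == predict(thrs[k], x)]
            new_thrs.append(train_stump(base[i] + agreed))
        thrs = new_thrs
    return thrs

def ensemble_predict(thrs, x):
    """Majority vote of the three refined classifiers."""
    return 1 if sum(predict(t, x) for t in thrs) >= 2 else 0

# Tiny illustrative data: labeled points with class 0 below, class 1 above.
labeled = [(0.05, 0), (0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (0.95, 1)]
unlabeled = [0.15, 0.3, 0.7, 0.85]
thrs = tri_train(labeled, unlabeled)
```

Note how the sketch matches the abstract's key claim: the base learner (here a threshold stump) is arbitrary, and no redundant feature views are needed — only mutual agreement between the other two classifiers drives the pseudo-labeling.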

Adrian V Dalca - One of the best experts on this subject based on the ideXlab platform.

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    Image segmentation is an important task in many medical applications. Methods based on convolutional neural networks attain state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets. Labeling medical images requires significant expertise and time, and typical hand-tuned approaches for data augmentation fail to capture the complex variations in such images. We present an automated data augmentation method for synthesizing labeled medical images. We demonstrate our method on the task of segmenting magnetic resonance imaging (MRI) brain scans. Our method requires only a single segmented scan, and leverages other unlabeled scans in a semi-supervised approach. We learn a model of transformations from the images, and use the model along with the labeled example to synthesize additional labeled examples. Each transformation comprises a spatial deformation field and an intensity change, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures. We show that training a supervised segmenter with these new examples provides significant improvements over state-of-the-art methods for one-shot biomedical image segmentation.
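
The synthesis recipe can be sketched in one dimension. This is a hypothetical toy, not the paper's method: the paper learns the deformation and intensity models from unlabeled 3-D MRI scans, whereas here both transformations are hand-supplied. The key idea survives the simplification — the image and its label map are warped by the same spatial deformation, while the intensity change applies to the image only.

```python
def warp(values, shift_field, fill=0):
    """Nearest-neighbour 1-D warp: output position i reads input
    position i + shift_field[i]; out-of-range reads return `fill`."""
    out = []
    for i, s in enumerate(shift_field):
        src = i + s
        out.append(values[src] if 0 <= src < len(values) else fill)
    return out

def synthesize(image, labels, shift_field, intensity_offset):
    """Make a new labeled example: apply the spatial deformation to both
    the image and its label map, then shift intensities in the image only
    (labels denote anatomy, so they must not change with brightness)."""
    new_image = [v + intensity_offset for v in warp(image, shift_field)]
    new_labels = warp(labels, shift_field)
    return new_image, new_labels

# A 3-voxel "scan" with one bright labeled structure; the deformation
# stretches the structure rightward and the intensity offset brightens it.
new_image, new_labels = synthesize(
    image=[0, 10, 0], labels=[0, 1, 1 - 1],
    shift_field=[0, 0, -1], intensity_offset=5)
```

In the paper's pipeline, many such synthesized pairs are then used as ordinary supervised training data for the segmentation network.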

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    (Abstract identical to the conference entry above; the arXiv version additionally states: "Our code is available at this https URL.")

Amy Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    (Abstract identical to the corresponding conference entry under Adrian V Dalca above.)

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    (Abstract identical to the corresponding arXiv entry under Adrian V Dalca above.)

Dalca Adrian - One of the best experts on this subject based on the ideXlab platform.

  • Data augmentation using learned transformations for one-shot medical image segmentation
    2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Frédo Durand, John Guttag, Adrian Dalca
    Abstract:

    (Abstract identical to the corresponding entries above, with the code link given in full: "Our code is available at https://github.com/xamyzhao/brainstorm." Comment: 9 pages, CVPR 2019.)

Guha Balakrishnan - One of the best experts on this subject based on the ideXlab platform.

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    (Abstract identical to the corresponding conference entry under Adrian V Dalca above.)

  • Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, Adrian V Dalca
    Abstract:

    (Abstract identical to the corresponding arXiv entry under Adrian V Dalca above.)