Deep Learning Model

The Experts below are selected from a list of 102723 Experts worldwide ranked by ideXlab platform

D K Gardner - One of the best experts on this subject based on the ideXlab platform.

  • Deep Learning as a predictive tool for fetal heart pregnancy following time-lapse incubation and blastocyst transfer
    Human Reproduction, 2019
    Co-Authors: D Tran, Simon Cooke, P J Illingworth, D K Gardner
    Abstract:

    Study question: Can a Deep Learning Model predict the probability of pregnancy with fetal heart (FH) from time-lapse videos?
    Summary answer: We created a Deep Learning Model named IVY, an objective and fully automated system that predicts the probability of FH pregnancy directly from raw time-lapse videos, without the need for any manual morphokinetic annotation or blastocyst morphology assessment.
    What is known already: The contribution of time-lapse imaging to effective embryo selection is promising. Existing algorithms for the analysis of time-lapse imaging are based on morphology and morphokinetic parameters that require subjective human annotation and therefore have intrinsic inter-reader and intra-reader variability. Deep Learning offers promise for the automation and standardization of embryo selection.
    Study design, size, duration: A retrospective analysis of time-lapse videos and clinical outcomes of 10 638 embryos from eight IVF clinics, across four countries, between January 2014 and December 2018.
    Participants/materials, setting, methods: The Deep Learning Model was trained on time-lapse videos with known FH pregnancy outcome to perform a binary classification task: predicting the probability of pregnancy with FH from a given time-lapse video sequence. The predictive power of the Model was measured as the average area under the curve (AUC) of the receiver operating characteristic curve over 5-fold stratified cross-validation.
    Main results and the role of chance: The Deep Learning Model predicted FH pregnancy from time-lapse videos with an AUC of 0.93 [95% CI 0.92-0.94] in 5-fold stratified cross-validation. A hold-out validation test across eight laboratories showed that the AUC was reproducible, ranging from 0.90 to 0.95 across laboratories with different culture and laboratory processes.
    Limitations, reasons for caution: This study is a retrospective analysis demonstrating that the Deep Learning Model is highly predictive of the likelihood that an embryo will implant. The clinical impact of these findings is still uncertain. Further studies, including prospective randomized controlled trials, are required to evaluate the clinical significance of this Deep Learning Model. The time-lapse videos collected for training and validation are of Day 5 embryos; hence, additional adjustment would be needed for the Model to be used in the context of Day 3 transfer.
    Wider implications of the findings: The high predictive value for embryo implantation obtained by the Deep Learning Model may improve the effectiveness of previous approaches to using time-lapse imaging for embryo selection, and may improve the prioritization of the most viable embryo for a single embryo transfer. The Deep Learning Model may also prove useful in determining the optimal order for subsequent transfers of cryopreserved embryos.
    Study funding/competing interest(s): D.T. is the co-owner of Harrison AI, which has patented this methodology in association with Virtus Health. P.I. is a shareholder in Virtus Health. S.C., P.I. and D.G. are all either employees of, or contracted with, Virtus Health. D.G. has received grant support from Vitrolife, the manufacturer of the Embryoscope time-lapse imaging system used in this study. The equipment and time for this study were jointly provided by Harrison AI and Virtus Health.
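
The evaluation protocol described above (average AUC of the ROC curve over 5-fold stratified cross-validation) can be illustrated with a short, self-contained sketch. This is not the authors' IVY code: the video features, the logistic-regression stand-in classifier, and the synthetic data are all assumptions made for the example.

```python
# Minimal sketch of "average AUC over 5-fold stratified cross-validation".
# Not the IVY model: a logistic regression on made-up per-video features
# stands in for the deep learning classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cross_validated_auc(features, labels, n_splits=5):
    """Average ROC AUC of a binary classifier over stratified k-fold CV."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    aucs = []
    for train_idx, test_idx in skf.split(features, labels):
        clf = LogisticRegression(max_iter=1000)  # stand-in for the deep model
        clf.fit(features[train_idx], labels[train_idx])
        scores = clf.predict_proba(features[test_idx])[:, 1]  # predicted P(FH pregnancy)
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs))

# Synthetic example: 200 "embryos", 64-dimensional video features, binary FH outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)
print(f"mean AUC over 5 folds: {cross_validated_auc(X, y):.3f}")
```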

Yuanjie Zheng - One of the best experts on this subject based on the ideXlab platform.

  • Breast cancer multi-classification from histopathological images with structured Deep Learning Model
    Scientific Reports, 2017
    Co-Authors: Zhongyi Han, Benzheng Wei, Yuanjie Zheng, Yilong Yin
    Abstract:

    Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification aims to identify the subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). It faces two main challenges: (1) multi-classification is considerably more difficult than binary classification into benign and malignant, and (2) the differences between the multiple classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Automated breast cancer multi-classification from histopathological images is therefore of great clinical significance, yet it has never been explored: existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed structured Deep Learning Model. The structured Deep Learning Model achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings.
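
As a rough illustration of the kind of model the abstract refers to (a convolutional network mapping a histopathology image patch to one of several subordinate classes), here is a minimal PyTorch sketch. It is not the paper's structured Deep Learning Model; the layer sizes, input resolution, and the assumed eight output classes are placeholders chosen for brevity.

```python
# Illustrative multi-class CNN for histopathology patches (not the paper's model).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 8):  # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling -> (N, 128, 1, 1)
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)        # (N, 128)
        return self.classifier(x)              # raw logits, one per subordinate class

# A batch of four 224x224 RGB patches yields one logit per class.
model = PatchClassifier()
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 8])
```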

  • Breast cancer multi-classification from histopathological images with structured Deep Learning Model
    Scientific Reports, 2017
    Co-Authors: Yuanjie Zheng, Kejian Li, Shuo Li
    Abstract:

    Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification aims to identify the subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). It faces two main challenges: (1) multi-classification is considerably more difficult than binary classification into benign and malignant, and (2) the differences between the multiple classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Automated breast cancer multi-classification from histopathological images is therefore of great clinical significance, yet it has never been explored: existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed structured Deep Learning Model. The structured Deep Learning Model achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings.
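
The "average 93.2% accuracy" quoted above is an aggregate over several subordinate classes, so a per-class breakdown is usually reported alongside it. The sketch below shows one conventional way to compute both from predicted and true labels; the class list follows the abstract, while the labels themselves are synthetic.

```python
# Hedged sketch: overall accuracy and per-class recall from a confusion matrix.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

classes = ["ductal_carcinoma", "fibroadenoma", "lobular_carcinoma"]  # subset, for illustration
rng = np.random.default_rng(1)
y_true = rng.integers(0, len(classes), size=300)
# Simulate a classifier that is right about 90% of the time.
y_pred = np.where(rng.random(300) < 0.9, y_true, rng.integers(0, len(classes), size=300))

print("overall accuracy:", accuracy_score(y_true, y_pred))
cm = confusion_matrix(y_true, y_pred)
per_class = cm.diagonal() / cm.sum(axis=1)   # per-class recall
for name, value in zip(classes, per_class):
    print(f"{name}: {value:.2%}")
```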

D Tran - One of the best experts on this subject based on the ideXlab platform.

  • Deep Learning as a predictive tool for fetal heart pregnancy following time-lapse incubation and blastocyst transfer
    Human Reproduction, 2019
    Co-Authors: D Tran, Simon Cooke, P J Illingworth, D K Gardner
    Abstract:

    Study question: Can a Deep Learning Model predict the probability of pregnancy with fetal heart (FH) from time-lapse videos?
    Summary answer: We created a Deep Learning Model named IVY, an objective and fully automated system that predicts the probability of FH pregnancy directly from raw time-lapse videos, without the need for any manual morphokinetic annotation or blastocyst morphology assessment.
    What is known already: The contribution of time-lapse imaging to effective embryo selection is promising. Existing algorithms for the analysis of time-lapse imaging are based on morphology and morphokinetic parameters that require subjective human annotation and therefore have intrinsic inter-reader and intra-reader variability. Deep Learning offers promise for the automation and standardization of embryo selection.
    Study design, size, duration: A retrospective analysis of time-lapse videos and clinical outcomes of 10 638 embryos from eight IVF clinics, across four countries, between January 2014 and December 2018.
    Participants/materials, setting, methods: The Deep Learning Model was trained on time-lapse videos with known FH pregnancy outcome to perform a binary classification task: predicting the probability of pregnancy with FH from a given time-lapse video sequence. The predictive power of the Model was measured as the average area under the curve (AUC) of the receiver operating characteristic curve over 5-fold stratified cross-validation.
    Main results and the role of chance: The Deep Learning Model predicted FH pregnancy from time-lapse videos with an AUC of 0.93 [95% CI 0.92-0.94] in 5-fold stratified cross-validation. A hold-out validation test across eight laboratories showed that the AUC was reproducible, ranging from 0.90 to 0.95 across laboratories with different culture and laboratory processes.
    Limitations, reasons for caution: This study is a retrospective analysis demonstrating that the Deep Learning Model is highly predictive of the likelihood that an embryo will implant. The clinical impact of these findings is still uncertain. Further studies, including prospective randomized controlled trials, are required to evaluate the clinical significance of this Deep Learning Model. The time-lapse videos collected for training and validation are of Day 5 embryos; hence, additional adjustment would be needed for the Model to be used in the context of Day 3 transfer.
    Wider implications of the findings: The high predictive value for embryo implantation obtained by the Deep Learning Model may improve the effectiveness of previous approaches to using time-lapse imaging for embryo selection, and may improve the prioritization of the most viable embryo for a single embryo transfer. The Deep Learning Model may also prove useful in determining the optimal order for subsequent transfers of cryopreserved embryos.
    Study funding/competing interest(s): D.T. is the co-owner of Harrison AI, which has patented this methodology in association with Virtus Health. P.I. is a shareholder in Virtus Health. S.C., P.I. and D.G. are all either employees of, or contracted with, Virtus Health. D.G. has received grant support from Vitrolife, the manufacturer of the Embryoscope time-lapse imaging system used in this study. The equipment and time for this study were jointly provided by Harrison AI and Virtus Health.
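
The abstract does not describe IVY's architecture, but one common way to map a raw time-lapse sequence to a single probability is to encode each frame with a shared CNN and pool the frame features over time. The sketch below shows only that generic pattern, with made-up layer sizes and input shapes; it should not be read as the published Model.

```python
# Generic video-to-probability sketch (shared frame encoder + temporal pooling).
# All shapes and sizes are assumptions; this is not the IVY architecture.
import torch
import torch.nn as nn

class TimeLapseClassifier(nn.Module):
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.frame_encoder = nn.Sequential(       # applied to every frame
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feature_dim, 1)     # single logit for FH pregnancy

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, 1, H, W) greyscale time-lapse frames
        b, t, c, h, w = video.shape
        feats = self.frame_encoder(video.reshape(b * t, c, h, w)).reshape(b, t, -1)
        pooled = feats.mean(dim=1)                # average the features over time
        return torch.sigmoid(self.head(pooled))   # probability of FH pregnancy

model = TimeLapseClassifier()
prob = model(torch.randn(2, 10, 1, 128, 128))     # 2 videos, 10 frames each
print(prob.shape)  # torch.Size([2, 1])
```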

Yilong Yin - One of the best experts on this subject based on the ideXlab platform.

  • Breast cancer multi-classification from histopathological images with structured Deep Learning Model
    Scientific Reports, 2017
    Co-Authors: Zhongyi Han, Benzheng Wei, Yuanjie Zheng, Yilong Yin
    Abstract:

    Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification aims to identify the subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). It faces two main challenges: (1) multi-classification is considerably more difficult than binary classification into benign and malignant, and (2) the differences between the multiple classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Automated breast cancer multi-classification from histopathological images is therefore of great clinical significance, yet it has never been explored: existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed structured Deep Learning Model. The structured Deep Learning Model achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings.
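
For completeness, a multi-class model of the kind sketched earlier in this listing is typically trained with a cross-entropy loss over integer class labels. The minimal training loop below uses synthetic stand-in data and a tiny stand-in network; the batch size, learning rate, and class count are illustrative assumptions, not values from the paper.

```python
# Minimal multi-class training loop with cross-entropy (illustrative only).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data: 32 RGB patches, 8 assumed subordinate classes.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 8, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = nn.Sequential(                       # tiny stand-in network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 8),
)
criterion = nn.CrossEntropyLoss()            # standard loss for multi-class labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)        # logits vs. integer class labels
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.3f}")
```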

Shuo Li - One of the best experts on this subject based on the ideXlab platform.

  • Breast cancer multi-classification from histopathological images with structured Deep Learning Model
    Scientific Reports, 2017
    Co-Authors: Yuanjie Zheng, Kejian Li, Shuo Li
    Abstract:

    Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification aims to identify the subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). It faces two main challenges: (1) multi-classification is considerably more difficult than binary classification into benign and malignant, and (2) the differences between the multiple classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Automated breast cancer multi-classification from histopathological images is therefore of great clinical significance, yet it has never been explored: existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed structured Deep Learning Model. The structured Deep Learning Model achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings.
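
Because multi-classification targets several subordinate classes at once, it also matters that every class is represented in both the training and evaluation splits. The sketch below shows a stratified hold-out split with integer-encoded class names; the class names follow the abstract, while the counts and identifiers are invented for illustration.

```python
# Hedged sketch: integer-encode class names and make a stratified hold-out split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

labels = np.array(
    ["ductal_carcinoma"] * 120 + ["fibroadenoma"] * 80 + ["lobular_carcinoma"] * 50
)
image_ids = np.arange(len(labels))           # stand-ins for image file paths

encoder = LabelEncoder()
y = encoder.fit_transform(labels)            # class names -> 0, 1, 2

train_ids, test_ids, y_train, y_test = train_test_split(
    image_ids, y, test_size=0.2, stratify=y, random_state=0
)
print("classes:", list(encoder.classes_))
print("train/test sizes:", len(train_ids), len(test_ids))
```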