Multitask Learning

The experts below are selected from a list of 3,456 experts worldwide, ranked by the ideXlab platform.

Nigam H Shah - One of the best experts on this subject based on the ideXlab platform.

  • The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data
    Pacific Symposium on Biocomputing (PSB), 2018
    Co-Authors: Daisy Yi Ding, Chloe Simpson, Stephen R Pfohl, Dave C Kale, Kenneth Jung, Nigam H Shah
    Abstract:

    Electronic phenotyping is the task of ascertaining whether an individual has a medical condition of interest by analyzing their medical record and is foundational in clinical informatics. Increasingly, electronic phenotyping is performed via supervised learning. We investigate the effectiveness of multitask learning for phenotyping using electronic health records (EHR) data. Multitask learning aims to improve model performance on a target task by jointly learning additional auxiliary tasks and has been used in disparate areas of machine learning. However, its utility when applied to EHR data has not been established, and prior work suggests that its benefits are inconsistent. We present experiments that elucidate when multitask learning with neural nets improves performance for phenotyping using EHR data relative to neural nets trained for a single phenotype and to well-tuned baselines. We find that multitask neural nets consistently outperform single-task neural nets for rare phenotypes but underperform for relatively more common phenotypes. The effect size increases as more auxiliary tasks are added. Moreover, multitask learning reduces the sensitivity of neural nets to hyperparameter settings for rare phenotypes. Last, we quantify phenotype complexity and find that neural nets trained with or without multitask learning do not improve on simple baselines unless the phenotypes are sufficiently complex.
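
    The abstract describes a single network trained jointly on a target phenotype and auxiliary phenotypes. Below is a minimal sketch of that setup, assuming a PyTorch feed-forward net over fixed-length EHR feature vectors; the module names, layer sizes, and toy data are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of multitask phenotyping: a shared trunk over EHR features
# with one binary head per phenotype, trained with a joint BCE loss.
# All names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class MultitaskPhenotyper(nn.Module):
    def __init__(self, n_features: int, n_tasks: int, hidden: int = 256):
        super().__init__()
        # Shared representation: auxiliary tasks shape these weights,
        # which is where the help (or harm) to the target task comes from.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One logit per phenotype; index 0 could be the target task and
        # the remaining columns the auxiliary phenotypes.
        self.heads = nn.Linear(hidden, n_tasks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.heads(self.trunk(x))  # (batch, n_tasks) logits

# Joint training step on toy data: sum binary cross-entropy over all tasks.
model = MultitaskPhenotyper(n_features=1000, n_tasks=10)
loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(32, 1000)                  # toy EHR feature vectors
y = torch.randint(0, 2, (32, 10)).float()  # toy phenotype labels
loss = loss_fn(model(x), y)
loss.backward()
```

    A single-task baseline would keep the same trunk but a one-column head; comparing the two isolates the contribution of the auxiliary labels.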

  • The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data
    arXiv: Machine Learning, 2018
    Co-Authors: Daisy Yi Ding, Chloe Simpson, Stephen R Pfohl, Dave C Kale, Kenneth Jung, Nigam H Shah
    Abstract:

    Electronic phenotyping is the task of ascertaining whether an individual has a medical condition of interest by analyzing their medical record and is foundational in clinical informatics. Increasingly, electronic phenotyping is performed via supervised learning. We investigate the effectiveness of multitask learning for phenotyping using electronic health records (EHR) data. Multitask learning aims to improve model performance on a target task by jointly learning additional auxiliary tasks and has been used in disparate areas of machine learning. However, its utility when applied to EHR data has not been established, and prior work suggests that its benefits are inconsistent. We present experiments that elucidate when multitask learning with neural nets improves performance for phenotyping using EHR data relative to neural nets trained for a single phenotype and to well-tuned logistic regression baselines. We find that multitask neural nets consistently outperform single-task neural nets for rare phenotypes but underperform for relatively more common phenotypes. The effect size increases as more auxiliary tasks are added. Moreover, multitask learning reduces the sensitivity of neural nets to hyperparameter settings for rare phenotypes. Last, we quantify phenotype complexity and find that neural nets trained with or without multitask learning do not improve on simple baselines unless the phenotypes are sufficiently complex.

Jiebo Luo - One of the best experts on this subject based on the ideXlab platform.

  • Neural Simile Recognition with Cyclic Multitask Learning and Local Attention
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020
    Co-Authors: Jiali Zeng, Linfeng Song, Jun Xie, Wei Song, Jiebo Luo
    Abstract:

    Simile recognition detects simile sentences and extracts simile components, i.e., tenors and vehicles. It involves two subtasks: simile sentence classification and simile component extraction. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks are well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains remain substantial when using BERT. The source code for this paper is available at https://github.com/DeepLearnXMU/Cyclic.
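
    The abstract's key idea is control flow: the two subtask modules are stacked, each consumes the other's latest output, and the last output feeds back to the first for another pass. Below is a minimal sketch of that loop, under illustrative assumptions (a PyTorch GRU encoder, hypothetical module names and sizes, local attention omitted); it is not the paper's released architecture.

```python
# Hedged sketch of cyclic multitask learning for simile recognition.
# Sentence classification and component extraction run in a loop, each
# taking the other's previous output as extra input. Hypothetical design.
import torch
import torch.nn as nn

class CyclicSimileRecognizer(nn.Module):
    def __init__(self, hidden: int = 128, n_cls: int = 2, n_tags: int = 5):
        super().__init__()
        # Shared sentence encoder feeding both subtasks.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True,
                              bidirectional=True)
        # Classifier sees pooled encoder states plus pooled tag evidence.
        self.classifier = nn.Linear(2 * hidden + n_tags, n_cls)
        # Extractor (sequence tagger) sees encoder states plus the
        # sentence-level prediction, broadcast to every token.
        self.extractor = nn.Linear(2 * hidden + n_cls, n_tags)

    def forward(self, emb: torch.Tensor, n_loops: int = 2):
        h, _ = self.encoder(emb)                       # (B, T, 2*hidden)
        B, T, _ = h.shape
        # Extraction output starts at zero; the first pass fills it in.
        tag_logits = torch.zeros(B, T, self.extractor.out_features)
        for _ in range(n_loops):
            # Subtask 1: classification, conditioned on extraction so far.
            cls_in = torch.cat([h.mean(dim=1), tag_logits.mean(dim=1)], dim=-1)
            cls_logits = self.classifier(cls_in)       # (B, n_cls)
            # Subtask 2: extraction, conditioned on the classification.
            # Feeding its output back to subtask 1 closes the cycle.
            cls_feat = cls_logits.unsqueeze(1).expand(B, T, -1)
            tag_logits = self.extractor(torch.cat([h, cls_feat], dim=-1))
        return cls_logits, tag_logits

# Toy usage: 8 sentences of 20 tokens with 128-dim embeddings.
model = CyclicSimileRecognizer()
cls_logits, tag_logits = model(torch.randn(8, 20, 128))
```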

  • Neural Simile Recognition with Cyclic Multitask Learning and Local Attention
    arXiv: Computation and Language, 2019
    Co-Authors: Jiali Zeng, Linfeng Song, Jun Xie, Wei Song, Jiebo Luo
    Abstract:

    Simile recognition detects simile sentences and extracts simile components, i.e., tenors and vehicles. It involves two subtasks: simile sentence classification and simile component extraction. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks are well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains remain substantial when using BERT.

Soo-young Lee - One of the best experts on this subject based on the ideXlab platform.

  • Emotional Voice Conversion Using Multitask Learning with Text-to-Speech
    2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020
    Co-Authors: Tae-ho Kim, Sungjae Cho, Choi Shinkook, Sejik Park, Soo-young Lee
    Abstract:

    Voice conversion (VC) is a task that alters the voice of a person to suit different styles while conserving the linguistic content. The previous state-of-the-art technology used in VC was based on the sequence-to-sequence (seq2seq) model, which could lose linguistic information. There was an attempt to overcome this problem using textual supervision; however, this required explicit alignment, and therefore the benefit of using a seq2seq model was lost. In this study, a voice converter that utilizes multitask learning with text-to-speech (TTS) is presented. By using multitask learning, VC is expected to capture linguistic information and preserve training stability. This method does not require explicit alignment for capturing abundant text information. VC experiments were performed on a male Korean emotional text-speech dataset to convert a neutral voice to an emotional voice. It was shown that multitask learning helps to preserve the linguistic content.
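
    Concretely, the multitask objective pairs a VC branch and a TTS branch that share one decoder, and training minimizes the sum of both reconstruction losses. The sketch below assumes GRU encoders and decoder over mel spectrograms and an L1 loss; every module choice, name, and shape is an illustrative assumption, not the paper's architecture.

```python
# Hedged sketch of VC + TTS multitask training with a shared decoder.
# The shared decoder is optimized on both tasks, so the VC path inherits
# the linguistic structure the TTS path enforces. Toy shapes throughout.
import torch
import torch.nn as nn

n_mels, d_model, vocab = 80, 256, 100

speech_encoder = nn.GRU(n_mels, d_model, batch_first=True)  # VC branch
text_encoder = nn.Embedding(vocab, d_model)                 # TTS branch
shared_decoder = nn.GRU(d_model, n_mels, batch_first=True)  # shared decoder
l1 = nn.L1Loss()

# Toy parallel batch; a real setup would align text and speech lengths.
src_mel = torch.randn(8, 50, n_mels)     # source (neutral) speech
text = torch.randint(0, vocab, (8, 50))  # transcript tokens
tgt_mel = torch.randn(8, 50, n_mels)     # target (emotional) speech

vc_emb, _ = speech_encoder(src_mel)  # speech -> shared embedding space
tts_emb = text_encoder(text)         # text   -> shared embedding space
vc_out, _ = shared_decoder(vc_emb)   # both decoded by the same network
tts_out, _ = shared_decoder(tts_emb)

# Joint multitask loss: the whole network minimizes VC loss + TTS loss.
loss = l1(vc_out, tgt_mel) + l1(tts_out, tgt_mel)
loss.backward()
```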

  • Emotional Voice Conversion Using Multitask Learning with Text-to-Speech
    arXiv: Audio and Speech Processing, 2019
    Co-Authors: Tae-ho Kim, Sungjae Cho, Choi Shinkook, Sejik Park, Soo-young Lee
    Abstract:

    Voice conversion (VC) is the task of transforming a person's voice to a different style while conserving the linguistic content. The previous state of the art in VC is based on the sequence-to-sequence (seq2seq) model, which can lose linguistic information. There was an attempt to overcome this using textual supervision, but it requires explicit alignment, which loses the benefit of using a seq2seq model. In this paper, a voice converter using multitask learning with text-to-speech (TTS) is presented. The embedding space of a seq2seq-based TTS model carries abundant information about the text. The role of the TTS decoder is to convert the embedding space to speech, which is the same role as in VC. In the proposed model, the whole network is trained to minimize the combined loss of VC and TTS. Through multitask learning, VC is expected to capture more linguistic information and to maintain training stability. VC experiments were performed on a male Korean emotional text-speech dataset, showing that multitask learning helps preserve the linguistic content in VC.

Daisy Yi Ding - One of the best experts on this subject based on the ideXlab platform.

  • The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data
    Pacific Symposium on Biocomputing (PSB), 2018
    Co-Authors: Daisy Yi Ding, Chloe Simpson, Stephen R Pfohl, Dave C Kale, Kenneth Jung, Nigam H Shah
    Abstract:

    Electronic phenotyping is the task of ascertaining whether an individual has a medical condition of interest by analyzing their medical record and is foundational in clinical informatics. Increasingly, electronic phenotyping is performed via supervised learning. We investigate the effectiveness of multitask learning for phenotyping using electronic health records (EHR) data. Multitask learning aims to improve model performance on a target task by jointly learning additional auxiliary tasks and has been used in disparate areas of machine learning. However, its utility when applied to EHR data has not been established, and prior work suggests that its benefits are inconsistent. We present experiments that elucidate when multitask learning with neural nets improves performance for phenotyping using EHR data relative to neural nets trained for a single phenotype and to well-tuned baselines. We find that multitask neural nets consistently outperform single-task neural nets for rare phenotypes but underperform for relatively more common phenotypes. The effect size increases as more auxiliary tasks are added. Moreover, multitask learning reduces the sensitivity of neural nets to hyperparameter settings for rare phenotypes. Last, we quantify phenotype complexity and find that neural nets trained with or without multitask learning do not improve on simple baselines unless the phenotypes are sufficiently complex.

  • The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data
    arXiv: Machine Learning, 2018
    Co-Authors: Daisy Yi Ding, Chloe Simpson, Stephen R Pfohl, Dave C Kale, Kenneth Jung, Nigam H Shah
    Abstract:

    Electronic phenotyping is the task of ascertaining whether an individual has a medical condition of interest by analyzing their medical record and is foundational in clinical informatics. Increasingly, electronic phenotyping is performed via supervised learning. We investigate the effectiveness of multitask learning for phenotyping using electronic health records (EHR) data. Multitask learning aims to improve model performance on a target task by jointly learning additional auxiliary tasks and has been used in disparate areas of machine learning. However, its utility when applied to EHR data has not been established, and prior work suggests that its benefits are inconsistent. We present experiments that elucidate when multitask learning with neural nets improves performance for phenotyping using EHR data relative to neural nets trained for a single phenotype and to well-tuned logistic regression baselines. We find that multitask neural nets consistently outperform single-task neural nets for rare phenotypes but underperform for relatively more common phenotypes. The effect size increases as more auxiliary tasks are added. Moreover, multitask learning reduces the sensitivity of neural nets to hyperparameter settings for rare phenotypes. Last, we quantify phenotype complexity and find that neural nets trained with or without multitask learning do not improve on simple baselines unless the phenotypes are sufficiently complex.

Jiali Zeng - One of the best experts on this subject based on the ideXlab platform.

  • Neural Simile Recognition with Cyclic Multitask Learning and Local Attention
    Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020
    Co-Authors: Jiali Zeng, Linfeng Song, Jun Xie, Wei Song, Jiebo Luo
    Abstract:

    Simile recognition detects simile sentences and extracts simile components, i.e., tenors and vehicles. It involves two subtasks: simile sentence classification and simile component extraction. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks are well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains remain substantial when using BERT. The source code for this paper is available at https://github.com/DeepLearnXMU/Cyclic.

  • Neural Simile Recognition with Cyclic Multitask Learning and Local Attention
    arXiv: Computation and Language, 2019
    Co-Authors: Jiali Zeng, Linfeng Song, Jun Xie, Wei Song, Jiebo Luo
    Abstract:

    Simile recognition detects simile sentences and extracts simile components, i.e., tenors and vehicles. It involves two subtasks: simile sentence classification and simile component extraction. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks are well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains remain substantial when using BERT.