Dataset

The experts below are selected from a list of 1,867,305 experts worldwide, ranked by the ideXlab platform.

Anna Korhonen - One of the best experts on this subject based on the ideXlab platform.

  • A neural network multi-task learning approach to biomedical named entity recognition
    BMC Bioinformatics, 2017
    Co-Authors: Gamal Crichton, Billy Chiu, Sampo Pyysalo, Anna Korhonen
    Abstract:

    Background: Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigate the effect of dataset size on performance in both single- and multi-task settings.

    Results: We present a single-task model for NER, a multi-output multi-task model and a dependent multi-task model. We apply the three models to 15 biomedical datasets covering multiple entity types, including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represents a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from multi-task learning. With the multi-output multi-task model we observed an average F-score improvement of 0.8% over the single-task model, from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improved significantly on five datasets, by up to 6.3%. For the dependent multi-task model we observed an average improvement of 0.4% over the single-task model. There were no significant drops in performance on any dataset, and performance improved significantly on six datasets, by up to 1.1%. The dataset-size experiments found that as dataset size decreased, the multi-output model's performance improved relative to the single-task model's. Using 50, 25 and 10% of the training data resulted in average drops of approximately 3.4, 8 and 16.7% respectively for the single-task model, but of approximately 0.2, 3.0 and 9.8% for the multi-task model.

    Conclusions: Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that multi-task learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of multi-task learning for this task.
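
    The multi-output architecture described above can be pictured concretely as a shared encoder with one token-level classification head per dataset. The following Python/PyTorch sketch is illustrative only, assuming word-embedding inputs and per-task tag sets; the layer sizes, vocabulary size and tag counts are placeholders, not the configuration reported in the paper. The dependent multi-task model mentioned above wires tasks differently; the sketch covers only the multi-output case.

    # Illustrative sketch of a multi-output multi-task NER model (not the
    # authors' exact architecture): a convolutional encoder shared across
    # datasets, with one token-level classification head per dataset/task.
    import torch
    import torch.nn as nn

    class MultiOutputNER(nn.Module):
        def __init__(self, vocab_size, emb_dim, conv_channels, num_tags_per_task):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Shared 1-D convolution over the token sequence; padding keeps
            # one output position per input token.
            self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
            # One linear output head per dataset (task).
            self.heads = nn.ModuleList(
                [nn.Linear(conv_channels, n) for n in num_tags_per_task])

        def forward(self, token_ids, task_id):
            x = self.embed(token_ids)                      # (batch, seq, emb_dim)
            x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, channels, seq)
            return self.heads[task_id](x.transpose(1, 2))  # (batch, seq, num_tags)

    # Training alternates over datasets: each batch updates the shared
    # encoder and only the head belonging to the batch's dataset.
    model = MultiOutputNER(vocab_size=20000, emb_dim=50,
                           conv_channels=100, num_tags_per_task=[3, 5, 3])
    tokens = torch.randint(0, 20000, (8, 40))   # 8 sentences of 40 token ids
    logits = model(tokens, task_id=1)           # shape (8, 40, 5)
    dummy_tags = torch.zeros(8 * 40, dtype=torch.long)  # dummy all-"O" labels
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), dummy_tags)
    loss.backward()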

Gamal Crichton - One of the best experts on this subject based on the ideXlab platform.

  • A neural network multi-task learning approach to biomedical named entity recognition
    BMC Bioinformatics, 2017
    Co-Authors: Gamal Crichton, Billy Chiu, Sampo Pyysalo, Anna Korhonen

Billy Chiu - One of the best experts on this subject based on the ideXlab platform.

  • A neural network multi-task learning approach to biomedical named entity recognition
    BMC Bioinformatics, 2017
    Co-Authors: Gamal Crichton, Billy Chiu, Sampo Pyysalo, Anna Korhonen

Sampo Pyysalo - One of the best experts on this subject based on the ideXlab platform.

  • A neural network multi-task learning approach to biomedical named entity recognition
    BMC Bioinformatics, 2017
    Co-Authors: Gamal Crichton, Billy Chiu, Sampo Pyysalo, Anna Korhonen

Hans-jörg Althaus - One of the best experts on this subject based on the ideXlab platform.

  • Electric passenger car transport and passenger car life cycle inventories in ecoinvent version 3
    The International Journal of Life Cycle Assessment, 2016
    Co-Authors: Andrea Del Duce, Marcel Gauch, Hans-jörg Althaus
    Abstract:

    Purpose: Given the large environmental challenges posed by the transport sector, reliable and state-of-the-art data for its life cycle assessment is essential for enabling a successful transition towards more sustainable systems. In this paper, the new electric passenger car transport and vehicle datasets developed for ecoinvent version 3 are presented.

    Methods: The new datasets have been developed with a strongly modular approach, defining a hierarchy of datasets corresponding to the various technical components of the vehicle. A vehicle is therefore modelled by linking together the component datasets. Parameters and mathematical formulas have also been introduced to define the amounts of exchanges in the datasets through common transport and vehicle characteristics. This supports users in choosing the amounts of exchanges and enhances the transparency of the datasets.

    Results: The new transport dataset describes transport over 1 km with a battery electric passenger car, taking into account vehicle production and end of life, use-phase energy consumption, non-exhaust emissions, maintenance and road infrastructure. The dataset has been developed for, and is suitable for, a compact-class vehicle.

    Conclusions: A new electric passenger car transport dataset has been developed for version 3 of the ecoinvent database, exploiting modularisation and parameters with the aim of making it easier for users to adapt the data to their specific needs. Apart from the direct use of the transport dataset as background data, the datasets for the individual components can also be used as building blocks for virtual vehicles.
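
    The parameter-and-formula mechanism can be made concrete with a small sketch. The following Python example is a hypothetical illustration of the modelling pattern, not ecoinvent's implementation, and the numbers are placeholders rather than ecoinvent data: exchange amounts in the 1 km transport dataset are derived from shared vehicle-level parameters.

    # Hypothetical illustration of the parameterised, modular dataset
    # pattern described above -- not ecoinvent's implementation, and the
    # values below are placeholders, not ecoinvent data.
    from dataclasses import dataclass

    @dataclass
    class VehicleParameters:
        vehicle_mass_kg: float        # total vehicle mass
        lifetime_km: float            # kilometres driven over the vehicle's life
        energy_use_kwh_per_km: float  # use-phase electricity demand

    def transport_exchanges_per_km(p: VehicleParameters) -> dict:
        # Exchange amounts for 1 km of transport, each defined by a formula
        # over the shared parameters instead of a hard-coded number.
        return {
            # Vehicle production and end of life, allocated over lifetime km.
            "passenger car production [kg/km]": p.vehicle_mass_kg / p.lifetime_km,
            # Use-phase electricity for driving 1 km.
            "electricity, low voltage [kWh/km]": p.energy_use_kwh_per_km,
        }

    # Placeholder parameters loosely representing a compact-class battery
    # electric car; users would override these with their own values.
    compact_bev = VehicleParameters(vehicle_mass_kg=1600.0,
                                    lifetime_km=150000.0,
                                    energy_use_kwh_per_km=0.2)
    print(transport_exchanges_per_km(compact_bev))

    Changing a single parameter such as lifetime_km rescales every exchange whose formula references it, which is the kind of adaptability and transparency the modular, parameterised approach is meant to provide.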