Temporal Relation

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Guergana K Savova - One of the best experts on this subject based on the ideXlab platform.

  • A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction
    Proceedings of the 2nd Clinical Natural Language Processing Workshop, 2019
    Co-Authors: Chen Lin, Dmitriy Dligach, Steven Bethard, Timothy A Miller, Guergana K Savova
    Abstract:

    Classic methods for clinical Temporal Relation extraction focus on Relational candidates within a sentence. On the other hand, the breakthrough Bidirectional Encoder Representations from Transformers (BERT) model is trained on large quantities of arbitrary spans of contiguous text instead of sentences. In this study, we aim to build a sentence-agnostic framework for the task of CONTAINS Temporal Relation extraction. We establish a new state-of-the-art result for the task, 0.684F for in-domain (0.055-point improvement) and 0.565F for cross-domain (0.018-point improvement), by fine-tuning BERT and pre-training domain-specific BERT models on sentence-agnostic Temporal Relation instances with WordPiece-compatible encodings, and augmenting the labeled data with automatically generated “silver” instances.
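The sentence-agnostic candidate generation described above can be sketched as pairing each time expression with any event inside a fixed token window, ignoring sentence boundaries. This is an illustrative reconstruction only; the function name, the `max_dist` window, and the tuple format are assumptions, not the authors' implementation.

```python
def candidate_pairs(events, times, max_dist=60):
    """Pair each time expression with every event within max_dist tokens,
    ignoring sentence boundaries (hypothetical sketch, not the paper's code).
    `events` and `times` are lists of (token_index, text) tuples."""
    pairs = []
    for t_idx, t_text in times:
        for e_idx, e_text in events:
            if abs(t_idx - e_idx) <= max_dist:
                pairs.append((t_text, e_text))
    return pairs
```

A cross-sentence CONTAINS candidate is thus generated whenever the two mentions are close enough in the token stream, regardless of intervening sentence breaks.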

  • Self-Training Improves Recurrent Neural Networks Performance for Temporal Relation Extraction
    Empirical Methods in Natural Language Processing, 2018
    Co-Authors: Chen Lin, Steven Bethard, Dmitriy Dligach, Timothy A Miller, Hadi Amiri, Guergana K Savova
    Abstract:

    Neural network models are oftentimes restricted by limited labeled instances and resort to advanced architectures and features for cutting-edge performance. We propose to build a recurrent neural network with multiple semantically heterogeneous embeddings within a self-training framework. Our framework makes use of labeled, unlabeled, and social media data, operates on basic features, and is scalable and generalizable. With this method, we establish the state-of-the-art result for both the in-domain and cross-domain settings of a clinical Temporal Relation extraction task.
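A generic self-training loop of the kind the abstract describes can be sketched as follows: train a model, label the unlabeled pool, promote confident predictions into the training set, and repeat. The `fit`/`predict` interface, confidence `threshold`, and round count are placeholder assumptions, not the authors' setup.

```python
def self_train(train, unlabeled, fit, predict, threshold=0.9, rounds=3):
    """Illustrative self-training loop (not the authors' code).
    `fit(train)` returns a model; `predict(model, x)` returns
    (label, confidence) for one unlabeled instance."""
    train = list(train)
    pool = list(unlabeled)
    for _ in range(rounds):
        model = fit(train)
        keep = []
        for x in pool:
            label, conf = predict(model, x)
            if conf >= threshold:
                train.append((x, label))   # promote confident "silver" labels
            else:
                keep.append(x)             # leave uncertain instances in the pool
        pool = keep
    return fit(train)
```

The design choice worth noting: only high-confidence pseudo-labels are added, so noisy instances stay in the pool rather than corrupting the training set.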

  • Neural Temporal Relation Extraction
    European Association for Computational Linguistics, 2017
    Co-Authors: Dmitriy Dligach, Tim Miller, Steven Bethard, Chen Lin, Guergana K Savova
    Abstract:

    We experiment with neural architectures for Temporal Relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding Relation arguments with XML tags outperforms a traditional position-based encoding.
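The XML-tag argument encoding reported above as the winning input scheme can be illustrated roughly like this; the tag names `<e1>`/`<e2>` are illustrative, and the paper's exact markup may differ.

```python
def xml_tag_encode(tokens, e1_span, e2_span):
    """Mark the two relation arguments with XML-style tags instead of
    position features (sketch of the general idea only).
    `e1_span`/`e2_span` are (start, end) token index ranges, end exclusive."""
    out = []
    for i, tok in enumerate(tokens):
        if i == e1_span[0]:
            out.append("<e1>")
        if i == e2_span[0]:
            out.append("<e2>")
        out.append(tok)
        if i == e1_span[1] - 1:
            out.append("</e1>")
        if i == e2_span[1] - 1:
            out.append("</e2>")
    return out
```

The tagged sequence is then fed to the network as ordinary tokens, letting the model learn argument positions from the tags themselves rather than from separate position embeddings.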

  • Representations of Time Expressions for Temporal Relation Extraction with Convolutional Neural Networks
    BioNLP 2017, 2017
    Co-Authors: Chen Lin, Timothy Miller, Steven Bethard, Dmitriy Dligach, Guergana K Savova
    Abstract:

    Token sequences are often used as the input for Convolutional Neural Networks (CNNs) in natural language processing. However, they might not be an ideal representation for time expressions, which are long, highly varied, and semantically complex. We describe a method for representing time expressions with single pseudo-tokens for CNNs. With this method, we establish a new state-of-the-art result for a clinical Temporal Relation extraction task.
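The pseudo-token idea can be sketched as collapsing each multi-token time expression into a single placeholder before the sequence reaches the CNN. The `<TIMEX>` token name is an assumption for illustration; the paper's scheme may use richer pseudo-token classes.

```python
def collapse_timex(tokens, timex_spans):
    """Replace each multi-token time expression with one pseudo-token
    (illustrative sketch, not the authors' code).
    `timex_spans` are (start, end) token index ranges, end exclusive."""
    out, i = [], 0
    spans = sorted(timex_spans)
    while i < len(tokens):
        span = next((s for s in spans if s[0] == i), None)
        if span:
            out.append("<TIMEX>")   # single placeholder for the whole span
            i = span[1]
        else:
            out.append(tokens[i])
            i += 1
    return out
```

Collapsing the span keeps the convolution windows short and uniform, which is the motivation the abstract gives for avoiding long, highly varied surface forms.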

  • Multilayered Temporal Modeling for the Clinical Domain
    Journal of the American Medical Informatics Association, 2016
    Co-Authors: Chen Lin, Guergana K Savova, Steven Bethard, Dmitriy Dligach, Timothy A Miller
    Abstract:

    Objective: To develop an open-source Temporal Relation discovery system for the clinical domain. The system is capable of automatically inferring Temporal Relations between events and time expressions using a multilayered modeling strategy. It can operate at different levels of granularity, from rough Temporality expressed as event Relations to the document creation time (DCT), to Temporal containment, to fine-grained classic Allen-style Relations.

    Materials and Methods: We evaluated our systems on 2 clinical corpora. One is a subset of the Temporal Histories of Your Medical Events (THYME) corpus, which was used in SemEval 2015 Task 6: Clinical TempEval. The other is the 2012 Informatics for Integrating Biology and the Bedside (i2b2) challenge corpus. We designed multiple supervised machine learning models to compute the DCT Relation and within-sentence Temporal Relations. For the i2b2 data, we also developed models and rule-based methods to recognize cross-sentence Temporal Relations. We used the official evaluation scripts of both challenges to make our results comparable with those of other participating systems. In addition, we conducted a feature ablation study to determine the contribution of various features to the system’s performance.

    Results: Our system achieved state-of-the-art performance on the Clinical TempEval corpus and was on par with the best systems on the i2b2 2012 corpus. In particular, on the Clinical TempEval corpus, our system established a new F1 score benchmark, statistically significantly better than the baseline and the best participating system.

    Conclusion: Presented here is the first open-source clinical Temporal Relation discovery system. It was built using a multilayered Temporal modeling strategy and achieved top performance in 2 major shared tasks.
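As a toy illustration of the coarsest layer, the document-creation-time (DCT) relation, a tense-based rule baseline might look like the following. The real system uses supervised models with many features; this mapping is only a sketch, and the label and tense names are assumptions.

```python
def dct_relation(event_tense):
    """Toy rule for the document-creation-time (DCT) layer: map an event's
    tense to its relation to the DCT.  Illustrative baseline only; the
    actual system described above is a supervised learner."""
    return {"PAST": "BEFORE", "FUTURE": "AFTER", "PRESENT": "OVERLAP"}.get(
        event_tense, "OVERLAP")
```

Even this crude rule shows why DCT is a useful first layer: most clinical narrative reports completed events, so a tense signal alone recovers much of the coarse Temporality.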

Steven Bethard - One of the best experts on this subject based on the ideXlab platform.

  • A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction
    Proceedings of the 2nd Clinical Natural Language Processing Workshop, 2019
    Co-Authors: Chen Lin, Dmitriy Dligach, Steven Bethard, Timothy A Miller, Guergana K Savova
    Abstract:

    Classic methods for clinical Temporal Relation extraction focus on Relational candidates within a sentence. On the other hand, the breakthrough Bidirectional Encoder Representations from Transformers (BERT) model is trained on large quantities of arbitrary spans of contiguous text instead of sentences. In this study, we aim to build a sentence-agnostic framework for the task of CONTAINS Temporal Relation extraction. We establish a new state-of-the-art result for the task, 0.684F for in-domain (0.055-point improvement) and 0.565F for cross-domain (0.018-point improvement), by fine-tuning BERT and pre-training domain-specific BERT models on sentence-agnostic Temporal Relation instances with WordPiece-compatible encodings, and augmenting the labeled data with automatically generated “silver” instances.

  • Self-Training Improves Recurrent Neural Networks Performance for Temporal Relation Extraction
    Empirical Methods in Natural Language Processing, 2018
    Co-Authors: Chen Lin, Steven Bethard, Dmitriy Dligach, Timothy A Miller, Hadi Amiri, Guergana K Savova
    Abstract:

    Neural network models are oftentimes restricted by limited labeled instances and resort to advanced architectures and features for cutting-edge performance. We propose to build a recurrent neural network with multiple semantically heterogeneous embeddings within a self-training framework. Our framework makes use of labeled, unlabeled, and social media data, operates on basic features, and is scalable and generalizable. With this method, we establish the state-of-the-art result for both the in-domain and cross-domain settings of a clinical Temporal Relation extraction task.

  • Neural Temporal Relation Extraction
    European Association for Computational Linguistics, 2017
    Co-Authors: Dmitriy Dligach, Tim Miller, Steven Bethard, Chen Lin, Guergana K Savova
    Abstract:

    We experiment with neural architectures for Temporal Relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding Relation arguments with XML tags outperforms a traditional position-based encoding.

  • Representations of Time Expressions for Temporal Relation Extraction with Convolutional Neural Networks
    BioNLP 2017, 2017
    Co-Authors: Chen Lin, Timothy Miller, Steven Bethard, Dmitriy Dligach, Guergana K Savova
    Abstract:

    Token sequences are often used as the input for Convolutional Neural Networks (CNNs) in natural language processing. However, they might not be an ideal representation for time expressions, which are long, highly varied, and semantically complex. We describe a method for representing time expressions with single pseudo-tokens for CNNs. With this method, we establish a new state-of-the-art result for a clinical Temporal Relation extraction task.

  • Multilayered Temporal Modeling for the Clinical Domain
    Journal of the American Medical Informatics Association, 2016
    Co-Authors: Chen Lin, Guergana K Savova, Steven Bethard, Dmitriy Dligach, Timothy A Miller
    Abstract:

    Objective: To develop an open-source Temporal Relation discovery system for the clinical domain. The system is capable of automatically inferring Temporal Relations between events and time expressions using a multilayered modeling strategy. It can operate at different levels of granularity, from rough Temporality expressed as event Relations to the document creation time (DCT), to Temporal containment, to fine-grained classic Allen-style Relations.

    Materials and Methods: We evaluated our systems on 2 clinical corpora. One is a subset of the Temporal Histories of Your Medical Events (THYME) corpus, which was used in SemEval 2015 Task 6: Clinical TempEval. The other is the 2012 Informatics for Integrating Biology and the Bedside (i2b2) challenge corpus. We designed multiple supervised machine learning models to compute the DCT Relation and within-sentence Temporal Relations. For the i2b2 data, we also developed models and rule-based methods to recognize cross-sentence Temporal Relations. We used the official evaluation scripts of both challenges to make our results comparable with those of other participating systems. In addition, we conducted a feature ablation study to determine the contribution of various features to the system’s performance.

    Results: Our system achieved state-of-the-art performance on the Clinical TempEval corpus and was on par with the best systems on the i2b2 2012 corpus. In particular, on the Clinical TempEval corpus, our system established a new F1 score benchmark, statistically significantly better than the baseline and the best participating system.

    Conclusion: Presented here is the first open-source clinical Temporal Relation discovery system. It was built using a multilayered Temporal modeling strategy and achieved top performance in 2 major shared tasks.

Nanyun Peng - One of the best experts on this subject based on the ideXlab platform.

  • Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction
    Empirical Methods in Natural Language Processing, 2020
    Co-Authors: Rujun Han, Yichao Zhou, Nanyun Peng
    Abstract:

    Extracting event Temporal Relations is a critical task for information extraction and plays an important role in natural language understanding. Prior systems leverage deep learning and pre-trained language models to improve the performance of the task. However, these systems often suffer from two shortcomings: 1) when performing maximum a posteriori (MAP) inference based on neural models, previous systems only used structured knowledge that is assumed to be absolutely correct, i.e., hard constraints; 2) biased predictions on dominant Temporal Relations when training with a limited amount of data. To address these issues, we propose a framework that enhances deep neural networks with distributional constraints constructed from probabilistic domain knowledge. We solve the constrained inference problem via Lagrangian Relaxation and apply it to end-to-end event Temporal Relation extraction tasks. Experimental results show our framework is able to improve the baseline neural network models with strong statistical significance on two widely used datasets in the news and clinical domains.
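A minimal sketch of inference under one distributional (soft) constraint via Lagrangian relaxation: assume a toy setup where the fraction of instances assigned a dominant label must not exceed a target, and a single multiplier, updated by subgradient ascent, penalizes that label until the constraint holds. The update rule and hyperparameters are illustrative, not the paper's formulation.

```python
def lagrangian_relaxation(scores, label, target, lr=0.1, iters=50):
    """Toy constrained inference: the fraction of instances assigned
    `label` should not exceed `target`.  `scores` is a list of dicts
    mapping label -> model score.  A Lagrange multiplier penalizes the
    constrained label and grows while the constraint is violated."""
    lam = 0.0
    for _ in range(iters):
        preds = []
        for s in scores:
            adj = {l: v - (lam if l == label else 0.0) for l, v in s.items()}
            preds.append(max(adj, key=adj.get))
        frac = sum(p == label for p in preds) / len(preds)
        if frac <= target:
            return preds               # constraint satisfied
        lam += lr * (frac - target)    # subgradient step: tighten the penalty
    return preds
```

Note the effect: the instance whose dominant-label margin is smallest flips first, which is exactly the behavior one wants from a soft, distribution-level constraint as opposed to a hard one.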

  • Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction
    arXiv: Computation and Language, 2020
    Co-Authors: Yichao Zhou, Nanyun Peng
    Abstract:

    Extracting event Temporal Relations is a critical task for information extraction and plays an important role in natural language understanding. Prior systems leverage deep learning and pre-trained language models to improve the performance of the task. However, these systems often suffer from two shortcomings: 1) when performing maximum a posteriori (MAP) inference based on neural models, previous systems only used structured knowledge that is assumed to be absolutely correct, i.e., hard constraints; 2) biased predictions on dominant Temporal Relations when training with a limited amount of data. To address these issues, we propose a framework that enhances deep neural networks with distributional constraints constructed from probabilistic domain knowledge. We solve the constrained inference problem via Lagrangian Relaxation and apply it to end-to-end event Temporal Relation extraction tasks. Experimental results show our framework is able to improve the baseline neural network models with strong statistical significance on two widely used datasets in the news and clinical domains.

  • Deep Structured Neural Network for Event Temporal Relation Extraction
    Conference on Computational Natural Language Learning, 2019
    Co-Authors: Rujun Han, Ihung Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, Nanyun Peng
    Abstract:

    We propose a novel deep structured learning framework for event Temporal Relation extraction. The model consists of 1) a recurrent neural network (RNN) to learn scoring functions for pairwise Relations, and 2) a structured support vector machine (SSVM) to make joint predictions. The neural network automatically learns representations that account for long-term contexts to provide robust features for the structured model, while the SSVM incorporates domain knowledge such as transitive closure of Temporal Relations as constraints to make better globally consistent decisions. By jointly training the two components, our model combines the benefits of both data-driven learning and knowledge exploitation. Experimental results on three high-quality event Temporal Relation datasets (TCR, MATRES, and TB-Dense) demonstrate that, incorporated with pre-trained contextualized embeddings, the proposed model achieves significantly better performance than the state-of-the-art methods on all three datasets. We also provide thorough ablation studies to investigate our model.
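The transitive-closure constraint the SSVM layer enforces can be illustrated with a small consistency check: if A is BEFORE B and B is BEFORE C, then A must be BEFORE C. This sketch covers only the BEFORE label, whereas real systems reason over the full label set.

```python
def violates_transitivity(relations):
    """Check pairwise temporal predictions against the transitivity
    constraint: A BEFORE B and B BEFORE C imply A BEFORE C.
    `relations` maps (a, b) -> label (illustrative, BEFORE-only sketch)."""
    before = {pair for pair, r in relations.items() if r == "BEFORE"}
    for a, b in before:
        for c, d in before:
            if b == c and (a, d) not in before:
                return True   # chain a<b, b<d found, but a<d is missing
    return False
```

In the full model such violations are ruled out at inference time rather than detected afterwards, which is what makes the joint predictions globally consistent.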

  • Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction
    arXiv: Computation and Language, 2019
    Co-Authors: Rujun Han, Qiang Ning, Nanyun Peng
    Abstract:

    We propose a joint event and Temporal Relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and Relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the Temporal Relation labels jointly. Experiments show that the proposed method can improve both event extraction and Temporal Relation extraction over state-of-the-art systems, with the end-to-end F1 improved by 10% and 6.8% on two benchmark datasets respectively.

S K Solanki - One of the best experts on this subject based on the ideXlab platform.

  • Temporal Relation between Quiet-Sun Transverse Fields and the Strong Flows Detected by IMaX/SUNRISE
    Astronomy and Astrophysics, 2013
    Co-Authors: Quintero C Noda, Martinez V Pillet, J M Borrero, S K Solanki
    Abstract:

    Context: Localized, strongly Doppler-shifted Stokes V signals were detected by IMaX/SUNRISE. These signals are related to newly emerged magnetic loops that are observed as linear polarization features.

    Aims: We aim to set constraints on the physical nature and causes of these highly Doppler-shifted signals. In particular, the Temporal Relation between the appearance of transverse fields and the strong Doppler shifts is analyzed in some detail.

    Methods: We calculated the time difference between the appearance of the strong flows and the linear polarization. We also obtained the distances from the center of various features to the nearest neutral lines, and whether they overlap. These distances were compared with those obtained from randomly distributed points on observed magnetograms. Various cases of strong flows are described in some detail.

    Results: The linear polarization signals precede the appearance of the strong flows by 84±11 seconds on average. The strongly Doppler-shifted signals are closer (0.19″) to magnetic neutral lines than randomly distributed points (0.5″). Eighty percent of the strongly Doppler-shifted signals are close to a neutral line that is located between the emerging field and pre-existing fields. That the remaining 20% do not show a close-by pre-existing field could be explained by a lack of sensitivity or an unfavorable geometry of the pre-existing field, for instance, a canopy-like structure.

    Conclusions: Transverse fields occurred before the observation of the strong Doppler shifts. The process is most naturally explained as the emergence of a granular-scale loop that first gives rise to the linear polarization signals, interacts with pre-existing fields (generating new neutral line configurations), and produces the observed strong flows. This explanation is indicative of frequent small-scale reconnection events in the quiet Sun.

  • Temporal Relation between Quiet-Sun Transverse Fields and the Strong Flows Detected by IMaX/SUNRISE
    arXiv: Solar and Stellar Astrophysics, 2013
    Co-Authors: Quintero C Noda, Martinez V Pillet, J M Borrero, S K Solanki
    Abstract:

    Localized strongly Doppler-shifted Stokes V signals were detected by IMaX/SUNRISE. These signals are related to newly emerged magnetic loops that are observed as linear polarization features. We aim to set constraints on the physical nature and causes of these highly Doppler-shifted signals. In particular, the Temporal Relation between the appearance of transverse fields and the strong Doppler shifts is analyzed in some detail. We calculated the time difference between the appearance of the strong flows and the linear polarization. We also obtained the distances from the center of various features to the nearest neutral lines and whether they overlap or not. These distances were compared with those obtained from randomly distributed points on observed magnetograms. Various cases of strong flows are described in some detail. The linear polarization signals precede the appearance of the strong flows by on average 84±11 seconds. The strongly Doppler-shifted signals are closer (0.19") to magnetic neutral lines than randomly distributed points (0.5"). Eighty percent of the strongly Doppler-shifted signals are close to a neutral line that is located between the emerging field and pre-existing fields. That the remaining 20% do not show a close-by pre-existing field could be explained by a lack of sensitivity or an unfavorable geometry of the pre-existing field, for instance, a canopy-like structure. Transverse fields occurred before the observation of the strong Doppler shifts. The process is most naturally explained as the emergence of a granular-scale loop that first gives rise to the linear polarization signals, interacts with pre-existing fields (generating new neutral line configurations), and produces the observed strong flows. This explanation is indicative of frequent small-scale reconnection events in the quiet Sun.

Dan Roth - One of the best experts on this subject based on the ideXlab platform.

  • An Improved Neural Baseline for Temporal Relation Extraction
    arXiv: Computation and Language, 2019
    Co-Authors: Qiang Ning, Sanjay Subramanian, Dan Roth
    Abstract:

    Determining Temporal Relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty of generating large amounts of high-quality training data. Consequently, neural approaches have not been widely used for it, or have shown only moderate improvements. This paper proposes a new neural system that achieves about 10% absolute improvement in accuracy over the previous best system (25% error reduction) on two benchmark datasets. The proposed system is trained on the state-of-the-art MATRES dataset and applies contextualized word embeddings, a Siamese encoder of a Temporal common sense knowledge base, and global inference via integer linear programming (ILP). We suggest that the new approach could serve as a strong baseline for future research in this area.
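The global-inference step can be mimicked on a tiny scale without an ILP solver by exhaustively searching labelings under transitivity, showing how a locally best but inconsistent labeling gets overruled. This is an illustrative sketch only; the real system uses an ILP solver and a richer label set.

```python
from itertools import product

def global_inference(pairs, scores, labels=("BEFORE", "AFTER")):
    """Brute-force stand-in for the ILP step: pick the jointly best labeling
    of all event pairs subject to transitivity (A BEFORE B and B BEFORE C
    imply A BEFORE C, symmetrically for AFTER).  Exhaustive search only
    works for tiny graphs; real systems solve an ILP instead."""
    def consistent(assign):
        rel = dict(zip(pairs, assign))
        for (a, b), r1 in rel.items():
            for (c, d), r2 in rel.items():
                if b == c and r1 == r2 and rel.get((a, d), r1) != r1:
                    return False
        return True
    best, best_score = None, float("-inf")
    for assign in product(labels, repeat=len(pairs)):
        if not consistent(assign):
            continue
        s = sum(scores[p][l] for p, l in zip(pairs, assign))
        if s > best_score:
            best, best_score = dict(zip(pairs, assign)), s
    return best
```

In the test below, the locally highest-scoring labeling would call (A, C) AFTER, but transitivity forces the globally consistent BEFORE.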

  • Improving Temporal Relation Extraction with a Globally Acquired Statistical Resource
    North American Chapter of the Association for Computational Linguistics, 2018
    Co-Authors: Qiang Ning, Haoruo Peng, Dan Roth
    Abstract:

    Extracting Temporal Relations (before, after, overlapping, etc.) is a key aspect of understanding events described in natural language. We argue that this task would gain from the availability of a resource that provides prior knowledge in the form of the Temporal order that events usually follow. This paper develops such a resource – a probabilistic knowledge base acquired in the news domain – by extracting Temporal Relations between events from the New York Times (NYT) articles over a 20-year span (1987–2007). We show that existing Temporal extraction systems can be improved via this resource. As a byproduct, we also show that interesting statistics can be retrieved from this resource, which can potentially benefit other time-aware tasks. The proposed system and resource are both publicly available.
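The kind of statistical resource described above can be sketched as turning corpus-level relation counts into prior probabilities over verb pairs. The triples below are invented data for illustration, not drawn from the NYT resource.

```python
from collections import Counter

def build_temporal_prior(observations):
    """Count how often one event verb precedes another and turn the counts
    into P(before | verb pair).  `observations` is a list of
    (verb1, verb2, relation) triples (invented example data; sketch only)."""
    counts = Counter()
    totals = Counter()
    for v1, v2, rel in observations:
        totals[(v1, v2)] += 1
        if rel == "BEFORE":
            counts[(v1, v2)] += 1
    return {pair: counts[pair] / totals[pair] for pair in totals}

# Toy corpus: "arrest" usually precedes "convict".
prior = build_temporal_prior([
    ("arrest", "convict", "BEFORE"),
    ("arrest", "convict", "BEFORE"),
    ("arrest", "convict", "AFTER"),
])
```

A downstream extractor can then use such priors as soft evidence when the local context is ambiguous about which event came first.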

  • Exploiting Partially Annotated Data in Temporal Relation Extraction
    Joint Conference on Lexical and Computational Semantics, 2018
    Co-Authors: Qiang Ning, Chuchu Fan, Dan Roth
    Abstract:

    Annotating Temporal Relations (TempRel) between events described in natural language is known to be labor intensive, partly because the total number of TempRels is quadratic in the number of events. As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena. In order to improve existing approaches, one possibility is to make use of the readily available, partially annotated data (P as in partial) that cover more documents. However, missing annotations in P are known to hurt, rather than help, existing systems. This work is a case study in exploring various usages of P for TempRel extraction. Results show that despite missing annotations, P is still a useful supervision signal for this task within a constrained bootstrapping learning framework. The system described in this paper is publicly available.
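A much-simplified view of using partially annotated data: trust a gold label wherever the partial annotation provides one, and fall back to the model prediction elsewhere. The paper's constrained bootstrapping is considerably more involved; this sketch only shows the core idea of letting partial gold labels override model output.

```python
def bootstrap_labels(predictions, partial):
    """Combine model predictions with partial gold annotations: keep the
    gold label where one exists, otherwise accept the prediction
    (illustrative sketch only).  Both arguments map pair -> label;
    `partial` may be missing keys."""
    return {pair: partial.get(pair, pred) for pair, pred in predictions.items()}
```

In a bootstrapping loop, the combined labels would then feed the next training round, so the partial annotations steadily constrain the model rather than being treated as exhaustive supervision.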

  • A Structured Learning Approach to Temporal Relation Extraction
    EMNLP, 2017
    Co-Authors: Qiang Ning, Zhili Feng, Dan Roth
    Abstract:

    Identifying Temporal Relations between events is an essential step towards natural language understanding. However, the Temporal Relation between two events in a story depends on, and is often dictated by, Relations among other events. Consequently, effectively identifying Temporal Relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these Relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing Relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.