The experts below are selected from a list of 77,247 experts worldwide, ranked by the ideXlab platform.
Le Sun - One of the best experts on this subject based on the ideXlab platform.
-
Accurate Text-Enhanced Knowledge Graph Representation Learning
North American Chapter of the Association for Computational Linguistics, 2018. Co-authors: Bo Chen, Xianpei Han, Le Sun. Abstract: Previous representation learning techniques for knowledge graphs usually give the same entity or relation the same representation in every triple, without considering the ambiguity of relations and entities. To handle the semantic variety of entities and relations across triples, we propose an accurate text-enhanced knowledge graph representation learning method, which can represent a relation or entity differently in different triples by exploiting additional textual information. Specifically, our method enhances representations by exploiting entity descriptions and triple-specific relation mentions, and a mutual attention mechanism between the relation mention and the entity description is proposed to learn more accurate textual representations that further improve the knowledge graph representation. Experimental results show that our method achieves state-of-the-art performance on both link prediction and triple classification, and significantly outperforms previous text-enhanced knowledge representation models.
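The mutual attention between a relation mention and an entity description can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding sizes, the bilinear affinity, and the max-pooling step are assumptions for the sake of a runnable example.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions and parameters differ.
d = 8            # embedding dimension
len_desc = 5     # tokens in an entity description
len_ment = 3     # tokens in a relation mention

rng = np.random.default_rng(0)
desc = rng.normal(size=(len_desc, d))    # entity-description token embeddings
ment = rng.normal(size=(len_ment, d))    # relation-mention token embeddings
W = rng.normal(size=(d, d))              # bilinear attention parameters (assumed form)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Affinity between every description token and every mention token.
A = desc @ W @ ment.T                    # shape (len_desc, len_ment)

# Mutual attention: each side is summarized under the other's focus.
desc_weights = softmax(A.max(axis=1), axis=0)   # (len_desc,)
ment_weights = softmax(A.max(axis=0), axis=0)   # (len_ment,)

entity_text = desc_weights @ desc        # triple-specific entity text vector
relation_text = ment_weights @ ment      # triple-specific relation text vector
```

Because the attention weights depend on the specific mention, the same entity description yields a different `entity_text` vector in different triples, which is the effect the abstract describes.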
Bo Chen - One of the best experts on this subject based on the ideXlab platform.
-
Accurate Text-Enhanced Knowledge Graph Representation Learning
North American Chapter of the Association for Computational Linguistics, 2018. Co-authors: Bo Chen, Xianpei Han, Le Sun.
Qiang Zhou - One of the best experts on this subject based on the ideXlab platform.
-
A Model of Text-Enhanced Knowledge Graph Representation Learning With Mutual Attention
IEEE Access, 2020. Co-authors: Yashen Wang, Huanhuan Zhang, Qiang Zhou. Abstract: Jointly learning embeddings of a knowledge graph (KG) and accompanying text has recently attracted considerable interest. However, previous work fails to combine the complex structural signals (from the structure representation) with the semantic signals (from the text representation). This paper proposes a novel text-enhanced knowledge graph representation model that uses textual information to enhance knowledge representations. In particular, a mutual attention mechanism between the KG and the text is proposed to learn more accurate textual representations, within a unified parameter-sharing semantic space, further improving the knowledge graph representation. Unlike conventional joint models, no complicated linguistic analysis or strict alignment between the KG and the text is required to train the model; moreover, the proposed model fully incorporates multi-directional signals. Experimental results show that the proposed model achieves state-of-the-art performance on both link prediction and triple classification, and significantly outperforms previous text-enhanced knowledge representation models.
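One way to picture a unified parameter-sharing semantic space is a translation-based triple score computed over combined structure and text vectors. This sketch is only illustrative: the averaging combination, the TransE-style L1 energy, and all embeddings here are assumptions, not the model's learned components.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # hypothetical embedding dimension

# Hypothetical embeddings for one triple (h, r, t), living in a shared space.
h_s, r_s, t_s = rng.normal(size=(3, d))   # structure-based representations
h_t, r_t, t_t = rng.normal(size=(3, d))   # text-based representations

def score(h, r, t):
    """TransE-style energy: lower means a more plausible triple."""
    return np.linalg.norm(h + r - t, ord=1)

# Combine structural and textual signals; the model's actual combination is
# learned jointly, this simple average only shows the shared-space idea.
h = (h_s + h_t) / 2
r = (r_s + r_t) / 2
t = (t_s + t_t) / 2

s = score(h, r, t)
```

Because both representation types are scored in the same space with shared parameters, link prediction can rank candidate tails by this energy without aligning text and KG symbols explicitly.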
Tom Renard - One of the best experts on this subject based on the ideXlab platform.
-
ICIAR (1) - Interactive segmentation of 3D images using a region adjacency graph representation
Lecture Notes in Computer Science, 2011. Co-authors: Ludovic Paulhac, Jean-Yves Ramel, Tom Renard. Abstract: This paper presents an interactive method for 3D image segmentation. The method is based on a region adjacency graph representation that simplifies and improves the segmentation process: the graph lets the user define splitting and merging operations, making it possible to build the final segmentation incrementally. To validate the proposed method, the interactive approach was integrated into a volumetric texture segmentation process; the results are very satisfactory even for complex volumetric textures. The same system, including the textural features and the interactive approach, has been used by specialists in sonography to segment 3D ultrasound images of the skin. Segmentation examples are presented to illustrate the interactivity of the approach.
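The region adjacency graph with an interactive merge operation can be sketched as below. This is a toy sketch under stated assumptions: a small 2D label array stands in for a segmented 3D volume (the same face-neighbor construction generalizes to 3D), and the merge is the only operation shown.

```python
import numpy as np

# Toy 2D label image standing in for a segmented 3D volume (assumption:
# the identical construction applies along a third axis for real volumes).
labels = np.array([
    [1, 1, 2, 2],
    [1, 3, 3, 2],
    [4, 3, 3, 4],
])

def build_rag(lab):
    """Region adjacency graph: an edge joins two regions whose labels
    appear on face-neighboring pixels/voxels."""
    edges = set()
    for axis in range(lab.ndim):
        a = np.take(lab, range(lab.shape[axis] - 1), axis=axis)
        b = np.take(lab, range(1, lab.shape[axis]), axis=axis)
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return edges

def merge(lab, keep, absorb):
    """User-driven merge: relabel region `absorb` into `keep`,
    then rebuild the adjacency graph incrementally."""
    lab = np.where(lab == absorb, keep, lab)
    return lab, build_rag(lab)

rag = build_rag(labels)              # adjacency before any interaction
labels2, rag2 = merge(labels, 1, 3)  # user merges region 3 into region 1
```

A split would work the other way round: re-segment one region's pixels into new labels and reconnect only that region's neighborhood, which is why the graph representation keeps each interactive step cheap.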
Xianpei Han - One of the best experts on this subject based on the ideXlab platform.
-
Accurate Text-Enhanced Knowledge Graph Representation Learning
North American Chapter of the Association for Computational Linguistics, 2018. Co-authors: Bo Chen, Xianpei Han, Le Sun.