Facilitate Comprehension

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 309 Experts worldwide ranked by ideXlab platform

Chien Chin Chen - One of the best experts on this subject based on the ideXlab platform.

  • SPIRIT: A Tree Kernel-Based Method for Topic Person Interaction Detection (Extended Abstract)
    2017 IEEE 33rd International Conference on Data Engineering (ICDE), 2017
    Co-Authors: Yung-chun Chang, Chien Chin Chen
    Abstract:

    In this paper, we investigate the interactions between topic persons to help readers construct the background knowledge of a topic. We propose a rich interactive tree structure to represent the syntactic, contextual, and semantic information of text, and incorporate this structure into a tree-based convolution kernel to identify segments that convey person interactions and, from them, construct person interaction networks. Empirical evaluations demonstrate that the proposed method is effective in detecting and extracting the interactions between topic persons in text, and that it outperforms the other extraction approaches used for comparison. Readers can then easily navigate among the topic persons of interest within the interaction networks and build up the background knowledge of the topic to facilitate comprehension.
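    The tree-based convolution kernel the abstract refers to belongs to the Collins–Duffy family of convolution tree kernels, which score two parse trees by counting their shared subtree fragments. The sketch below is a minimal, illustrative version of that idea; the node labels, tree shapes, and decay value are assumptions for demonstration, and SPIRIT's "rich interactive tree" carries richer context and semantic labels than shown here.

    ```python
    # Minimal sketch of a Collins-Duffy style convolution tree kernel:
    # K(T1, T2) sums a decayed count of matching subtrees over all node pairs.

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def collect(t):
        """Flatten a tree into a list of all its nodes."""
        out = [t]
        for c in t.children:
            out.extend(collect(c))
        return out

    def delta(a, b, decay):
        """Decayed count of common subtrees rooted at nodes a and b."""
        if a.label != b.label:
            return 0.0
        if not a.children and not b.children:
            return decay  # matching leaves
        if [c.label for c in a.children] != [c.label for c in b.children]:
            return 0.0    # productions differ: no shared fragment here
        prod = decay
        for ca, cb in zip(a.children, b.children):
            prod *= 1.0 + delta(ca, cb, decay)
        return prod

    def tree_kernel(t1, t2, decay=0.5):
        return sum(delta(a, b, decay) for a in collect(t1) for b in collect(t2))
    ```

    A tree compared with itself yields the largest value, and trees that differ in a subtree score strictly lower, which is what lets a kernel classifier separate interactive from non-interactive segments.
    
    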

  • AIRS - A Composite Kernel Approach for Detecting Interactive Segments in Chinese Topic Documents
    Information Retrieval Technology, 2013
    Co-Authors: Yung-chun Chang, Chien Chin Chen
    Abstract:

    Discovering the interactions between persons mentioned in a set of topic documents can help readers construct the background of a topic and facilitate comprehension. In this paper, we propose a rich interactive tree structure to represent syntactic, content, and semantic information in text. We also present a composite kernel classification method that integrates the tree structure with a bigram kernel to identify text segments that mention person interactions in topic documents. Empirical evaluations demonstrate that the proposed tree structure and bigram kernel are effective, and that the composite kernel approach outperforms well-known relation extraction and protein-protein interaction (PPI) methods.
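    A composite kernel of the kind described combines a structural kernel with a surface-level bigram kernel; since a weighted sum of valid kernels is itself a valid kernel, the combination can be fed directly to any kernel classifier. The sketch below is a hedged illustration only: the function names, the bigram count representation, and the mixing weight `alpha` are assumptions, not the authors' exact formulation.

    ```python
    # Sketch of a composite kernel: weighted sum of a structural kernel score
    # and a surface bigram kernel over token sequences.

    from collections import Counter

    def bigram_kernel(s1, s2):
        """Inner product of the bigram count vectors of two token lists."""
        b1 = Counter(zip(s1, s1[1:]))
        b2 = Counter(zip(s2, s2[1:]))
        return sum(count * b2[gram] for gram, count in b1.items())

    def composite_kernel(k_tree, k_bigram, alpha=0.6):
        """Convex combination of two kernel values; alpha is illustrative."""
        return alpha * k_tree + (1 - alpha) * k_bigram
    ```

    For example, `bigram_kernel("the cat sat".split(), "the cat ran".split())` counts the one shared bigram ("the", "cat"), and the tree-kernel score for the same segment pair would be mixed in via `composite_kernel`.
    
    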

Sophie K Scott - One of the best experts on this subject based on the ideXlab platform.

  • Speech comprehension aided by multiple modalities: behavioural and neural interactions
    Neuropsychologia, 2012
    Co-Authors: Carolyn Mcgettigan, Andrew Faulkner, Irene Altarelli, Harriet Baverstock, Jonas Obleser, Sophie K Scott
    Abstract:

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual, and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring), and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined in an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as in inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. This multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for the convergence of auditory, visual, and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.
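    Noise-vocoding, the Auditory Clarity manipulation named in the abstract, splits speech into frequency bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise; fewer bands means less spectral detail and lower intelligibility. The sketch below shows the standard procedure; the band edges, filter order, and random seed are illustrative assumptions, not the study's actual parameters.

    ```python
    # Sketch of noise-vocoding: envelope-modulated, band-limited noise.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase Butterworth band-pass filter."""
        b, a = butter(order, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, x)

    def noise_vocode(speech, fs, band_edges):
        """Replace each band's fine structure with noise, keeping its envelope."""
        rng = np.random.default_rng(0)
        out = np.zeros_like(speech)
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            band = bandpass(speech, lo, hi, fs)
            envelope = np.abs(hilbert(band))   # amplitude envelope of the band
            carrier = bandpass(rng.standard_normal(len(speech)), lo, hi, fs)
            out += envelope * carrier          # envelope-modulated noise band
        return out
    ```

    Varying the number of entries in `band_edges` is what graded the Auditory Clarity factor: a single broad band is nearly unintelligible, while many narrow bands approach clear speech.
    
    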

  • Going beyond the information given: a neural system supporting semantic interpretation.
    NeuroImage, 2003
    Co-Authors: Sophie K Scott, Alexander P. Leff, Richard J. S. Wise
    Abstract:

    Relating the meaning of a word to the context in which it is encountered is central to comprehension. We investigated the neural basis of this process. Subjects made decisions based on a semantic property of single nouns. The lack of sentence context created ambiguity, as nouns may have several unrelated semantic identities. Contrasted with unambiguous decisions about each noun’s sound structure, the semantic task resulted in activity in the left superior frontal gyrus (SFG), activity that was dependent on choice reaction time. This identified the left SFG as an executive component of a distributed cognitive system that relates a word’s meaning to its semantic context to facilitate comprehension.

Carolyn Mcgettigan - One of the best experts on this subject based on the ideXlab platform.

  • Speech comprehension aided by multiple modalities: behavioural and neural interactions
    Neuropsychologia, 2012
    Co-Authors: Carolyn Mcgettigan, Andrew Faulkner, Irene Altarelli, Harriet Baverstock, Jonas Obleser, Sophie K Scott
    Abstract:

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual, and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring), and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined in an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as in inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. This multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for the convergence of auditory, visual, and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.

  • Lexical information drives perceptual learning of distorted speech: evidence from the Comprehension of noise-vocoded sentences
    Journal of Experimental Psychology, 2005
    Co-Authors: Carolyn Mcgettigan
    Abstract:

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' report accuracy improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech thus depends on higher-level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.

Yung-chun Chang - One of the best experts on this subject based on the ideXlab platform.

  • SPIRIT: A Tree Kernel-Based Method for Topic Person Interaction Detection (Extended Abstract)
    2017 IEEE 33rd International Conference on Data Engineering (ICDE), 2017
    Co-Authors: Yung-chun Chang, Chien Chin Chen
    Abstract:

    In this paper, we investigate the interactions between topic persons to help readers construct the background knowledge of a topic. We propose a rich interactive tree structure to represent the syntactic, contextual, and semantic information of text, and incorporate this structure into a tree-based convolution kernel to identify segments that convey person interactions and, from them, construct person interaction networks. Empirical evaluations demonstrate that the proposed method is effective in detecting and extracting the interactions between topic persons in text, and that it outperforms the other extraction approaches used for comparison. Readers can then easily navigate among the topic persons of interest within the interaction networks and build up the background knowledge of the topic to facilitate comprehension.

  • AIRS - A Composite Kernel Approach for Detecting Interactive Segments in Chinese Topic Documents
    Information Retrieval Technology, 2013
    Co-Authors: Yung-chun Chang, Chien Chin Chen
    Abstract:

    Discovering the interactions between persons mentioned in a set of topic documents can help readers construct the background of a topic and facilitate comprehension. In this paper, we propose a rich interactive tree structure to represent syntactic, content, and semantic information in text. We also present a composite kernel classification method that integrates the tree structure with a bigram kernel to identify text segments that mention person interactions in topic documents. Empirical evaluations demonstrate that the proposed tree structure and bigram kernel are effective, and that the composite kernel approach outperforms well-known relation extraction and protein-protein interaction (PPI) methods.

P Henderson - One of the best experts on this subject based on the ideXlab platform.

  • Do Prequestioning Techniques Facilitate Comprehension of French Video?
    French Review, 1999
    Co-Authors: Carol Herron, C Corrie, S P Cole, P Henderson
    Abstract:

    This study investigates how to facilitate comprehension of foreign-language video. It tests the results of a previous study whose data suggested that declarative and interrogative advance organizers (AOs) were equally effective in enhancing recall of French videos. The findings of the present study do not support that conclusion. Subjects in the current research were 26 students enrolled in two classes of a French course (Fr 102). By modifying certain elements of the previous research design, these investigators found that the interrogative AO condition aided comprehension more than the declarative one. The differences are discussed in light of cognitive-processing theory.

  • Do Prequestioning Techniques Facilitate Comprehension?
    1999
    Co-Authors: Carol Herron, C Corrie, S P Cole, P Henderson
    Abstract:

    A significant amount of research on how to facilitate comprehension of a listening passage suggests that prelistening activities (advance organizers) that provide background information about the passage enhance understanding of the text. In the majority of advance organizer studies, subjects listened to text materials introduced with various kinds of contextual support, such as pictures, drawings, and questions (cf. Omaggio Hadley 125-61 for a review of how the use of background knowledge affects comprehension of L1 and L2 listening passages). Certain foreign language researchers, however, have recently turned their attention to comprehension of video materials. Video permits learners to hear and witness authentic linguistic and cultural interactions between native speakers, and it is a medium with which students are very familiar. Swaffar and Vlatten point out that "as a multisensory medium, video offers students more than listening comprehension: Students have the opportunity to read visual as well as auditory messages" (175). The current study assesses how important it is for students to have a description or preview of a video text before attempting to understand it. Even though the popularity of video in the foreign language classroom continues to increase, a paucity of research exists that investigates how it should best be introduced to students. Herron found that the use of an advance organizer, in this case a brief description of major scenes in the upcoming video, enhanced students' retention of the French video significantly more than viewing the video alone with no prior introduction (193-96). Two classroom studies followed that compared the effects of different advance organizers on students' understanding of French video (Herron, Hanley, and Cole 387-95; Herron, Cole, York, and Linden 237-47). In Herron et al. (1995), students retained significantly more information from the videos when the videos were introduced with declarative statements accompanied by pictures than they did with declarative statements only (392-94). The pictures apparently provided an important elaboration of the declarative statements in the advance organizer which in