Parsing Process

The experts below are selected from a list of 10,452 experts worldwide, ranked by the ideXlab platform.

Laura A. Inman - One of the best experts on this subject based on the ideXlab platform.

  • The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion.
    NeuroImage, 2020
    Co-Authors: David T. Field, Nicolò Biagi, Laura A. Inman
    Abstract:

    Retinal image motion is a composite signal that contains information about two behaviourally significant factors: self-motion and the movement of environmental objects. It is thought that the brain separates the two relevant signals, and although multiple brain regions have been identified that respond selectively to the composite optic flow signal, which brain region(s) perform the parsing process remains unknown. Here, we present original evidence that the putative human ventral intraparietal area (pVIP), a region known to receive optic flow signals as well as independent self-motion signals from other sensory modalities, plays a critical role in the parsing process and acts to isolate object-motion. We localised pVIP using its multisensory response profile, and then tested its relative responses to simulated object-motion and self-motion stimuli; responses in pVIP were much stronger to stimuli that specified object-motion. We report two further observations that will be significant for the future direction of research in this area: first, activation in pVIP was suppressed by distant stationary objects compared to the absence of objects or closer objects; second, several other brain regions share with pVIP a selectivity for visual object-motion over visual self-motion as well as a multisensory response.

Stephen W. Smoliar - One of the best experts on this subject based on the ideXlab platform.

  • An integrated system for content-based video retrieval and browsing
    Pattern Recognition, 1997
    Co-Authors: Hong-jiang Zhang, Jian Hua Wu, Di Zhong, Stephen W. Smoliar
    Abstract:

    This paper presents an integrated system solution for computer-assisted video parsing and content-based video retrieval and browsing. The effectiveness of this solution lies in its use of video content information derived from a parsing process driven by visual feature analysis. That is, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frame, temporal and motion features of shots. These processes, and a set of tools that facilitate content-based video retrieval and browsing using the resulting feature data set, are presented in detail as functions of an integrated system.
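    The temporal segmentation the abstract describes rests on comparing low-level visual features of consecutive frames. The sketch below illustrates one common analysis of that kind, histogram-based shot-boundary detection; the grey-level histogram feature, the bin count and the threshold are illustrative assumptions, not details taken from the paper.

    ```python
    # Illustrative sketch: detect shot boundaries by thresholding the
    # change in grey-level histograms between consecutive frames.
    # A "frame" here is a flat list of 8-bit grey values.

    def grey_histogram(frame, bins=16):
        """Histogram of 8-bit grey values for one frame."""
        hist = [0] * bins
        for pixel in frame:
            hist[pixel * bins // 256] += 1
        return hist

    def histogram_difference(h1, h2):
        """Sum of absolute bin differences, normalised by frame size."""
        total = sum(h1)
        return sum(abs(a - b) for a, b in zip(h1, h2)) / total

    def detect_shot_boundaries(frames, threshold=0.5):
        """Return frame indices where the histogram change exceeds the threshold."""
        boundaries = []
        prev = grey_histogram(frames[0])
        for i in range(1, len(frames)):
            cur = grey_histogram(frames[i])
            if histogram_difference(prev, cur) > threshold:
                boundaries.append(i)
            prev = cur
        return boundaries

    # Toy example: three dark frames followed by two bright frames.
    dark = [20] * 64
    bright = [230] * 64
    video = [dark, dark, dark, bright, bright]
    print(detect_shot_boundaries(video))  # boundary at frame 3
    ```

    Real systems apply the same idea to colour histograms or block-wise features and use adaptive thresholds to cope with gradual transitions, but the structure of the computation is the same.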

  • Content-based video browsing tools
    Multimedia Computing and Networking 1995, 1995
    Co-Authors: Hong-jiang Zhang, Stephen W. Smoliar
    Abstract:

    Browsing is important for multimedia content retrieval, editing, authoring and communication. Yet browsing tools that are user-friendly and content-based are still lacking, at least for video materials. In this paper, we present a set of video browsing tools that utilize video content information resulting from a parsing process. Video parsing algorithms are briefly discussed, and a detailed description of both sequential and time-space browsing tools is presented.

  • Video parsing, retrieval and browsing: an integrated and content-based solution
    ACM Multimedia, 1995
    Co-Authors: Hong-jiang Zhang, Stephen W. Smoliar, Jian Hua Wu
    Abstract:

    This paper presents an integrated solution for computer-assisted video parsing and content-based video retrieval and browsing. The uniqueness and effectiveness of this solution lies in its use of video content information provided by a parsing process driven by visual feature analysis. More specifically, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frames selected during abstraction, spatial-temporal variations of visual features, and shot-level semantics derived from camera operation and motion analysis. These processes, together with video retrieval and browsing tools, are presented in detail as functions of an integrated system. Experimental results on automatic key-frame detection are also given.
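    The abstraction step above selects key-frames wherever visual features vary enough within a shot. The sketch below shows one simple greedy strategy in that spirit; the mean-intensity feature and the drift threshold are illustrative assumptions, not the authors' method.

    ```python
    # Illustrative sketch: greedy key-frame selection within one shot.
    # Keep the first frame, then add any frame whose feature drifts
    # far enough from the most recent key frame.
    # A "frame" here is a flat list of 8-bit grey values.

    def mean_intensity(frame):
        """Average grey value of a frame."""
        return sum(frame) / len(frame)

    def select_key_frames(frames, threshold=25.0):
        """Return indices of key frames for one shot."""
        keys = [0]
        reference = mean_intensity(frames[0])
        for i in range(1, len(frames)):
            value = mean_intensity(frames[i])
            if abs(value - reference) > threshold:
                keys.append(i)
                reference = value
        return keys

    # A shot that brightens in two steps: key frames mark visible changes.
    shot = [[40] * 16, [45] * 16, [80] * 16, [82] * 16, [120] * 16]
    print(select_key_frames(shot))  # [0, 2, 4]
    ```

    In practice the feature would be richer (colour histograms, motion vectors) and the threshold tuned per shot, but the greedy drift-from-last-key-frame structure is a common baseline.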

Hong-jiang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • An integrated system for content-based video retrieval and browsing
    Pattern Recognition, 1997
    Co-Authors: Hong-jiang Zhang, Jian Hua Wu, Di Zhong, Stephen W. Smoliar
    Abstract:

    This paper presents an integrated system solution for computer-assisted video parsing and content-based video retrieval and browsing. The effectiveness of this solution lies in its use of video content information derived from a parsing process driven by visual feature analysis. That is, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frame, temporal and motion features of shots. These processes, and a set of tools that facilitate content-based video retrieval and browsing using the resulting feature data set, are presented in detail as functions of an integrated system.

  • Content-based video browsing tools
    Multimedia Computing and Networking 1995, 1995
    Co-Authors: Hong-jiang Zhang, Stephen W. Smoliar
    Abstract:

    Browsing is important for multimedia content retrieval, editing, authoring and communication. Yet browsing tools that are user-friendly and content-based are still lacking, at least for video materials. In this paper, we present a set of video browsing tools that utilize video content information resulting from a parsing process. Video parsing algorithms are briefly discussed, and a detailed description of both sequential and time-space browsing tools is presented.

  • Video parsing, retrieval and browsing: an integrated and content-based solution
    ACM Multimedia, 1995
    Co-Authors: Hong-jiang Zhang, Stephen W. Smoliar, Jian Hua Wu
    Abstract:

    This paper presents an integrated solution for computer-assisted video parsing and content-based video retrieval and browsing. The uniqueness and effectiveness of this solution lies in its use of video content information provided by a parsing process driven by visual feature analysis. More specifically, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frames selected during abstraction, spatial-temporal variations of visual features, and shot-level semantics derived from camera operation and motion analysis. These processes, together with video retrieval and browsing tools, are presented in detail as functions of an integrated system. Experimental results on automatic key-frame detection are also given.

David T. Field - One of the best experts on this subject based on the ideXlab platform.

  • The role of the ventral intraparietal area (VIP/pVIP) in the perception of object-motion and self-motion.
    NeuroImage, 2020
    Co-Authors: David T. Field, Nicolò Biagi, Laura A. Inman
    Abstract:

    Retinal image motion is a composite signal that contains information about two behaviourally significant factors: self-motion and the movement of environmental objects. It is thought that the brain separates the two relevant signals, and although multiple brain regions have been identified that respond selectively to the composite optic flow signal, which brain region(s) perform the parsing process remains unknown. Here, we present original evidence that the putative human ventral intraparietal area (pVIP), a region known to receive optic flow signals as well as independent self-motion signals from other sensory modalities, plays a critical role in the parsing process and acts to isolate object-motion. We localised pVIP using its multisensory response profile, and then tested its relative responses to simulated object-motion and self-motion stimuli; responses in pVIP were much stronger to stimuli that specified object-motion. We report two further observations that will be significant for the future direction of research in this area: first, activation in pVIP was suppressed by distant stationary objects compared to the absence of objects or closer objects; second, several other brain regions share with pVIP a selectivity for visual object-motion over visual self-motion as well as a multisensory response.

Jian Hua Wu - One of the best experts on this subject based on the ideXlab platform.

  • An integrated system for content-based video retrieval and browsing
    Pattern Recognition, 1997
    Co-Authors: Hong-jiang Zhang, Jian Hua Wu, Di Zhong, Stephen W. Smoliar
    Abstract:

    This paper presents an integrated system solution for computer-assisted video parsing and content-based video retrieval and browsing. The effectiveness of this solution lies in its use of video content information derived from a parsing process driven by visual feature analysis. That is, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frame, temporal and motion features of shots. These processes, and a set of tools that facilitate content-based video retrieval and browsing using the resulting feature data set, are presented in detail as functions of an integrated system.

  • Video parsing, retrieval and browsing: an integrated and content-based solution
    ACM Multimedia, 1995
    Co-Authors: Hong-jiang Zhang, Stephen W. Smoliar, Jian Hua Wu
    Abstract:

    This paper presents an integrated solution for computer-assisted video parsing and content-based video retrieval and browsing. The uniqueness and effectiveness of this solution lies in its use of video content information provided by a parsing process driven by visual feature analysis. More specifically, parsing temporally segments and abstracts a video source based on low-level image analyses; retrieval and browsing of video are then based on key-frames selected during abstraction, spatial-temporal variations of visual features, and shot-level semantics derived from camera operation and motion analysis. These processes, together with video retrieval and browsing tools, are presented in detail as functions of an integrated system. Experimental results on automatic key-frame detection are also given.