Temporal Context

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Motohiro Kimura - One of the best experts on this subject based on the ideXlab platform.

  • Unintentional temporal context-based prediction of emotional faces: an electrophysiological study
    Cerebral Cortex, 2012
    Co-Authors: Motohiro Kimura, Haruka Kondo, Hideki Ohira, Erich Schroger
    Abstract:

    The ability to extract sequential regularities embedded in the temporal context or temporal structure of sensory events, and to predict upcoming events based on the extracted regularities, plays a central role in human cognition. In the present study, we demonstrate that upcoming emotional faces can be predicted, without any intention, on the basis of sequential regularities: prediction-error responses, reflected by the visual mismatch negativity (MMN), an event-related brain potential (ERP) component, were evoked by emotional faces that violated a regular alternation pattern of two emotional faces (fearful and happy) in a situation where the emotional faces themselves were unrelated to the participant's task. Face-inversion and negative-bias effects on the visual MMN further indicated the involvement of holistic face representations. In addition, successive source analyses of the visual MMN revealed that the prediction-error responses were composed of activations mainly in the face-responsive visual extrastriate areas and the prefrontal areas. The present results provide primary evidence for the existence of unintentional temporal-context-based prediction of biologically relevant visual stimuli, as well as empirical support for the major engagement of the visual and prefrontal areas in unintentional temporal-context-based prediction in vision.

  • Visual mismatch negativity and unintentional temporal context-based prediction in vision
    International Journal of Psychophysiology, 2012
    Co-Authors: Motohiro Kimura
    Abstract:

    Since the discovery of the auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) component, there has been a long-lasting debate regarding the existence of its counterparts in other sensory modalities. Over the past few decades, several studies have confirmed the existence of mismatch negativity in the visual modality (visual MMN) and have revealed its various characteristics. In the present review, a full range of visual MMN studies is reviewed from the perspective of the predictive framework of visual MMN recently proposed by Kimura et al. (2011b). In the first half, the nature of visual MMN is reviewed in terms of (1) typical paradigms and morphologies, (2) underlying processes, (3) neural generators, and (4) functional significance. The main message in this part is that visual MMN is closely associated with the unintentional prediction of forthcoming visual sensory events on the basis of abstract sequential rules embedded in the temporal context of visual stimulation (i.e., “unintentional temporal-context-based prediction in vision”). In the second half, the nature of this unintentional prediction is discussed in terms of (1) behavioral indicators, (2) cognitive properties, and (3) neural substrates and mechanisms. As the main message in this part, I put forward a hypothetical model, which suggests that the unintentional prediction might be implemented by a bi-directional cortical network that includes the visual and prefrontal areas.

Zhen Lei - One of the best experts on this subject based on the ideXlab platform.

  • Robust online learned spatio-temporal context model for visual tracking
    IEEE Transactions on Image Processing, 2014
    Co-Authors: Longyin Wen, Zhaowei Cai, Zhen Lei
    Abstract:

    Visual tracking is an important but challenging problem in the field of computer vision. In the real world, the appearances of the target and its surroundings change continuously over space and time, which provides effective information for tracking the target robustly. However, previous works have paid insufficient attention to this spatio-temporal appearance information. In this paper, a robust tracker based on a spatio-temporal context model is presented to complete the tracking task in unconstrained environments. The tracker is constructed from temporal and spatial appearance context models. The temporal appearance context model captures the historical appearance of the target to prevent the tracker from drifting to the background during long-term tracking. The spatial appearance context model integrates contributors to build a supporting field. The contributors are patches of the same size as the target, located at key-points automatically discovered around the target. The constructed supporting field provides much more information than the appearance of the target itself and thus ensures the robustness of the tracker in complex environments. Extensive experiments on various challenging databases validate the superiority of our tracker over other state-of-the-art trackers.
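
    As a rough illustration of the spatial appearance context model described above, the sketch below crops target-sized contributor patches at given key-point locations and scores each one by normalized correlation with the target appearance. The function name and the correlation score are assumptions for illustration only; the paper's actual supporting-field construction is not specified in this abstract.

    ```python
    import numpy as np

    def supporting_field(frame, target_box, keypoints):
        """Hypothetical sketch: build a supporting field from contributor patches.

        Each contributor is a patch of the same size as the target, cropped at a
        key-point, and scored by normalized correlation with the target patch."""
        x, y, w, h = target_box
        target = frame[y:y + h, x:x + w].ravel().astype(float)
        target = (target - target.mean()) / (target.std() + 1e-8)
        field = []
        for kx, ky in keypoints:
            patch = frame[ky:ky + h, kx:kx + w].ravel().astype(float)
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            field.append(((kx, ky), float(patch @ target) / target.size))
        return field

    # Toy usage on a random grayscale frame.
    frame = np.random.rand(120, 160)
    contributors = supporting_field(frame, (60, 40, 24, 24), [(30, 30), (90, 50)])
    ```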

Kaihua Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Visual tracking with weighted adaptive local sparse appearance model via spatio-temporal context learning
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Jie Zhang, Kaihua Zhang
    Abstract:

    Sparse representation has been widely exploited to develop effective appearance models for object tracking due to its strong discriminative capability in distinguishing the target from its surrounding background. However, most of these methods consider either the holistic representation alone or local patch representations with equal importance for every patch, and hence may fail when the target suffers from severe occlusion or large-scale pose variation. In this paper, we propose a simple yet effective approach that exploits rich feature information from reliable patches based on a weighted local sparse representation that takes into account the importance of each patch. Specifically, we design a weight function based on the reconstruction error of each patch under sparse coding to measure patch reliability. Moreover, we explore spatio-temporal context information to enhance the robustness of the appearance model: the global temporal context is learned via incremental subspace and sparse representation learning with a novel dynamic template-update strategy for the dictionary, while the local spatial context captures the correlation between the target and its surrounding background by measuring the similarity among their sparse coefficients. Extensive experimental evaluations on two large tracking benchmarks demonstrate favorable performance of the proposed method over some state-of-the-art trackers.
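
    The reconstruction-error-based weighting can be sketched as follows. This is a minimal illustration assuming an exponential weight w_i = exp(-e_i / sigma) over each patch's sparse-coding reconstruction error e_i; the function name, the OMP solver choice, and the weighting form are assumptions, not the paper's reference implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def patch_weights(patches, dictionary, n_nonzero=5, sigma=1.0):
        """Sparse-code each local patch over a template dictionary and turn its
        reconstruction error into a reliability weight (low error -> high weight).

        patches:    (d, n) matrix, one column per vectorized local patch
        dictionary: (d, k) matrix of template patches with l2-normalized columns
        """
        codes = np.atleast_2d(orthogonal_mp(dictionary, patches,
                                            n_nonzero_coefs=n_nonzero))
        errors = np.linalg.norm(patches - dictionary @ codes, axis=0) ** 2
        weights = np.exp(-errors / sigma)   # assumed exponential weighting form
        return weights / weights.sum(), codes

    # Toy usage: patches synthesized from dictionary atoms plus small noise.
    rng = np.random.default_rng(0)
    D = rng.normal(size=(256, 40))
    D /= np.linalg.norm(D, axis=0)
    Y = D[:, :9] + 0.05 * rng.normal(size=(256, 9))
    w, A = patch_weights(Y, D)
    ```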

  • Fast visual tracking via dense spatio-temporal context learning
    European Conference on Computer Vision, 2014
    Co-Authors: Kaihua Zhang, Lei Zhang, David Zhang, Qingshan Liu, Minghsuan Yang
    Abstract:

    In this paper, we present a simple yet fast and robust algorithm that exploits the dense spatio-temporal context for visual tracking. Our approach formulates the spatio-temporal relationships between the object of interest and its locally dense context in a Bayesian framework, modeling the statistical correlation between simple low-level features (i.e., image intensity and position) from the target and its surrounding regions. The tracking problem is then posed as computing a confidence map that takes into account the prior information of the target location, thereby effectively alleviating target-location ambiguity. We further propose a novel explicit scale-adaptation scheme, which is able to deal with target scale variations efficiently and effectively. The Fast Fourier Transform (FFT) is adopted for fast learning and detection, requiring only 4 FFT operations. Implemented in MATLAB without code optimization, the proposed tracker runs at 350 frames per second on an i7 machine. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
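
    A minimal sketch of the FFT-based learn/detect loop that the abstract describes is given below, assuming the formulation from the published paper: a spatial context model is obtained by deconvolving a peaked confidence map against the windowed context region, blended over time with a learning rate rho, and detection convolves the updated model with the next frame's context region. The function names, the Hamming window choice, and the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def window2d(shape):
        """2-D Hamming window that down-weights context pixels far from the target."""
        return np.hamming(shape[0])[:, None] * np.hamming(shape[1])[None, :]

    def learn_spatial_context(region, conf, eps=1e-8):
        """Solve conf = h_sc (*) (region * window) for h_sc in the Fourier domain.

        region: grayscale context region centered on the target
        conf:   confidence map of the same shape, sharply peaked at the center
        """
        prior = region * window2d(region.shape)             # context prior model
        return np.fft.fft2(conf) / (np.fft.fft2(prior) + eps)

    def detect(H_stc, region):
        """Convolve the learned model with a new context region; the response
        peak gives the target's offset within the region."""
        prior = region * window2d(region.shape)
        response = np.real(np.fft.ifft2(H_stc * np.fft.fft2(prior)))
        return np.unravel_index(np.argmax(response), response.shape)

    # Toy usage: a peaked confidence map c(x) = exp(-(|x - x*| / alpha) ** beta).
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    conf = np.exp(-(np.hypot(yy - h // 2, xx - w // 2) / 2.25) ** 1.0)

    rho = 0.075                                             # assumed learning rate
    region = np.random.rand(h, w)                           # stand-in context region
    H_stc = learn_spatial_context(region, conf)             # first-frame model
    H_stc = (1 - rho) * H_stc + rho * learn_spatial_context(region, conf)
    dy, dx = detect(H_stc, region)
    ```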

  • Fast tracking via spatio-temporal context learning
    arXiv: Computer Vision and Pattern Recognition, 2013
    Co-Authors: Kaihua Zhang, Lei Zhang, Minghsuan Yang, David Zhang
    Abstract:

    In this paper, we present a simple yet fast and robust algorithm that exploits the spatio-temporal context for visual tracking. Our approach formulates the spatio-temporal relationships between the object of interest and its local context in a Bayesian framework, modeling the statistical correlation between low-level features (i.e., image intensity and position) from the target and its surrounding regions. The tracking problem is posed as computing a confidence map, and the best target location is obtained by maximizing an object-location likelihood function. The Fast Fourier Transform is adopted for fast learning and detection. Implemented in MATLAB without code optimization, the proposed tracker runs at 350 frames per second on an i7 machine. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
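
    The confidence map and likelihood-maximization step can be written compactly as below. This is a reconstruction consistent with the published version of this work, in which b is a normalization constant and alpha and beta are scale and shape parameters; the symbols are assumed notation.

    ```latex
    c(\mathbf{x}) = P(\mathbf{x} \mid o)
                  = b\,\exp\!\left(-\left|\frac{\mathbf{x}-\mathbf{x}^{*}}{\alpha}\right|^{\beta}\right),
    \qquad
    \hat{\mathbf{x}}_{t} = \arg\max_{\mathbf{x}}\; c_{t}(\mathbf{x})
    ```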

Per B. Sederberg - One of the best experts on this subject based on the ideXlab platform.

  • A temporal context repetition effect in rats during a novel object recognition memory task
    Animal Cognition, 2015
    Co-Authors: Joseph R. Manns, Claire R. Galloway, Per B. Sederberg
    Abstract:

    Recent research in humans has used formal models of temporal context, broadly defined as a lingering representation of recent experience, to explain a wide array of recall and recognition memory phenomena. One difficulty in extending this work to studies of experimental animals has been the challenge of developing a task to test temporal context effects on performance in rodents. The current study presents results from a novel object recognition memory paradigm that was adapted from a task used in humans and demonstrates a temporal context repetition effect in rats. Specifically, the findings indicate that repeating the first two objects from a once-encountered sequence of three objects incidentally cues memory for the third object, even in its absence. These results reveal that temporal context influences item memory in rats in a manner similar to how it influences memory in humans, and they also highlight a new task for future studies of temporal context in experimental animals.

  • The successor representation and temporal context
    Neural Computation, 2012
    Co-Authors: Samuel J Gershman, Christopher D Moore, Michael T Todd, Kenneth A Norman, Per B. Sederberg
    Abstract:

    The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity of the successor representation has yet to be explored. An interesting possibility is that the successor representation can be used not only for reinforcement learning but for episodic learning as well. Our main contribution is to show that a variant of the temporal context model (TCM; Howard & Kahana, 2002), an influential model of episodic memory, can be understood as directly estimating the successor representation using the temporal-difference learning algorithm (Sutton & Barto, 1998). This insight leads to a generalization of TCM and new experimental predictions. In addition to casting a new normative light on TCM, this equivalence suggests a previously unexplored point of contact between different learning systems.
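
    To make the connection concrete, here is a minimal sketch of estimating the successor representation with temporal-difference learning on a state sequence. It is the standard SR-TD update, not code from the paper, and the parameter values are illustrative.

    ```python
    import numpy as np

    def td_update_sr(M, s, s_next, alpha=0.1, gamma=0.9):
        """One temporal-difference update of the successor representation M.

        M[s, :] estimates the expected discounted future occupancy of every
        state given that the current state is s (Dayan, 1993)."""
        onehot = np.eye(M.shape[0])[s]
        delta = onehot + gamma * M[s_next] - M[s]   # TD error on state occupancy
        M[s] += alpha * delta

    # Toy usage: a repeatedly studied "list" of five states visited in order,
    # analogous to items bound to a drifting temporal context in TCM.
    n_states = 5
    M = np.zeros((n_states, n_states))
    sequence = [0, 1, 2, 3, 4] * 200
    for s, s_next in zip(sequence[:-1], sequence[1:]):
        td_update_sr(M, s, s_next)
    # Rows of M now concentrate mass on each state's successors in the list.
    ```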

  • Scene representations in parahippocampal cortex depend on temporal context
    The Journal of Neuroscience, 2012
    Co-Authors: Nicholas B Turkbrowne, Mason G Simon, Per B. Sederberg
    Abstract:

    Human perception is supported by regions of ventral visual cortex that become active when specific types of information appear in the environment. This coupling has led to a common assumption in cognitive neuroscience that stimulus-evoked activity in these regions reflects only information about the current stimulus. Here we challenge this assumption for how scenes are represented in a scene-selective region of parahippocampal cortex. This region treated two identical scenes as more similar when they were preceded in time by the same stimuli than when they were preceded by different stimuli. These findings suggest that parahippocampal cortex embeds scenes in their temporal context to determine what they represent. By integrating the past and present, such representations may support the encoding and navigation of complex environments.

  • Human memory reconsolidation can be explained using the temporal context model
    Psychonomic Bulletin & Review, 2011
    Co-Authors: Per B. Sederberg, Samuel J Gershman, Sean M. Polyn, Kenneth A Norman
    Abstract:

    Recent work by Hupbach, Gomez, Hardt, and Nadel (Learning & Memory, 14, 47–53, 2007) and Hupbach, Gomez, and Nadel (Memory, 17, 502–510, 2009) suggests that episodic memory for a previously studied list can be updated to include new items if participants are reminded of the earlier list just prior to learning a new list. The key finding from the Hupbach studies was an asymmetric pattern of intrusions, whereby participants intruded numerous items from the second list when trying to recall the first list, but not vice versa. Hupbach et al. (2007, 2009) explained this pattern in terms of a cellular reconsolidation process, whereby first-list memory is rendered labile by the reminder and the labile memory is then updated to include items from the second list. Here, we show that the temporal context model of memory, which lacks a cellular reconsolidation process, can account for the asymmetric intrusion effect using well-established principles of contextual reinstatement and item–context binding.
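
    The contextual-reinstatement account rests on the TCM context-evolution equation, in which the current context drifts toward the input retrieved or encoded with each item while staying at unit length. Below is a minimal sketch of that update (Howard & Kahana, 2002); the function name and parameter value are illustrative, and this is not the simulation code from the paper.

    ```python
    import numpy as np

    def evolve_context(c_prev, c_in, beta=0.6):
        """TCM context drift: c_i = rho * c_{i-1} + beta * c_in, with rho chosen
        so that the context vector keeps unit length (Howard & Kahana, 2002)."""
        dot = c_prev @ c_in
        rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
        return rho * c_prev + beta * c_in

    # Toy usage: context drifts as five orthogonal list items are studied; a
    # reminder that later reinstates an early context makes new items bind to
    # a blend of old and new context, which is the asymmetry mechanism.
    dim = 8
    c = np.zeros(dim); c[0] = 1.0
    for i in range(1, 6):
        item_input = np.zeros(dim); item_input[i] = 1.0
        c = evolve_context(c, item_input)
    print(np.linalg.norm(c))   # stays ~1.0 across updates
    ```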

Liang Lin - One of the best experts on this subject based on the ideXlab platform.

  • Unifying temporal context and multi-feature with update-pacing framework for visual tracking
    IEEE Transactions on Circuits and Systems for Video Technology, 2020
    Co-Authors: Yuefang Gao, Henry Wing Fung Yeung, Yuk Ying Chung, Xuhong Tian, Liang Lin
    Abstract:

    Model drifting is one of the knotty problems that seriously restrict the accuracy of discriminative trackers in visual tracking. Most existing works focus on improving the robustness of the target appearance model; however, they remain prone to model drifting due to inappropriate model updates during tracking-by-detection. In this paper, we propose a novel update-pacing framework to suppress the occurrence of model drifting in visual tracking. Specifically, the proposed framework first initializes an ensemble of trackers, each of which updates its model at a different update interval. Once the forward tracking trajectory of each tracker is determined, a backward trajectory is also generated by the current model to measure its deviation from the forward one, and the tracker with the smallest deviation score is selected as the most robust tracker for the remainder of the sequence. By performing such self-examination on trajectory pairs, the framework can effectively preserve the temporal-context consistency of sequential frames and avoid learning corrupted information. To further improve performance, a multi-feature extension of the framework is also proposed to incorporate multiple features into the ensemble of trackers. Extensive experimental results on large-scale object tracking benchmarks demonstrate that the proposed framework significantly increases the accuracy and robustness of the underlying base trackers, such as DSST, Struck, KCF, and CT, and achieves superior performance compared with state-of-the-art methods, without using deep models.
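
    The trajectory-pair self-examination can be sketched as follows. This is a minimal illustration of the selection rule only, assuming a hypothetical tracker interface with track_forward/track_backward methods and toy trajectories; the real base trackers (DSST, Struck, KCF, CT) are not modeled here.

    ```python
    import numpy as np

    class PacedTracker:
        """Hypothetical stand-in for a base tracker whose model is refreshed
        every `interval` frames; `drift` is a toy proxy for accumulated model
        error, used only to make the example runnable."""
        def __init__(self, interval, drift):
            self.interval, self.drift = interval, drift

        def track_forward(self, frames):
            return [np.array([t * (1.0 + self.drift), 0.0])
                    for t in range(len(frames))]

        def track_backward(self, frames, start):
            return [start - np.array([float(t), 0.0])
                    for t in range(len(frames))]

    def select_tracker(trackers, frames):
        """Keep the tracker whose backward trajectory best retraces its
        forward one, i.e., the smallest forward/backward deviation score."""
        def deviation(trk):
            fwd = trk.track_forward(frames)
            bwd = trk.track_backward(frames, fwd[-1])
            return sum(np.linalg.norm(a - b) for a, b in zip(fwd, reversed(bwd)))
        return min(trackers, key=deviation)

    frames = list(range(30))    # placeholder frame sequence
    pool = [PacedTracker(1, 0.2), PacedTracker(5, 0.05), PacedTracker(10, 0.0)]
    best = select_tracker(pool, frames)
    ```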

  • Integrating Spatio-Temporal Context With Multiview Representation for Object Recognition in Visual Surveillance
    IEEE Transactions on Circuits and Systems for Video Technology, 2011
    Co-Authors: Xiaobai Liu, Shuicheng Yan, Hai Jin, Liang Lin, Wenbing Tao
    Abstract:

    We present in this paper an integrated solution for rapidly recognizing dynamic objects in surveillance videos by exploring various kinds of contextual information. This solution consists of three components. The first is a multi-view object representation. It contains a set of deformable object templates, each of which comprises an ensemble of active features for an object category in a specific view/pose. The templates can be efficiently learned from a small set of roughly aligned positive samples, without negative samples. The second component is a unified spatio-temporal context model, which integrates two types of contextual information in a Bayesian way. One is the spatial context, including the main surface property (constraints on object type and density) and the camera's geometric parameters (constraints on object size at a specific location). The other is the temporal context, containing pixel-level and instance-level consistency models used to generate the foreground probability map and local object trajectory predictions. We also combine the above spatial and temporal contextual information to estimate the object pose in the scene and use it as a strong prior for inference. The third component is a robust sampling-based inference procedure. Taking the spatio-temporal contextual knowledge as the prior model and deformable template matching as the likelihood model, we formulate object category recognition as a maximum-a-posteriori (MAP) problem. The probabilistic inference can be achieved by a simple Markov chain Monte Carlo sampler, owing to the informative spatio-temporal context model, which greatly reduces the computational complexity and the category ambiguities. The system performance and the gains from the spatio-temporal contextual information are quantitatively evaluated on several challenging datasets, and the comparison results clearly demonstrate that the proposed algorithm outperforms other state-of-the-art algorithms.
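
    As a toy illustration of the MAP formulation p(c | I) proportional to p(I | c) p(c), the sketch below runs a simple Metropolis sampler over a handful of candidate categories, with the spatio-temporal context playing the role of the prior and template matching the role of the likelihood. The category scores are made-up numbers, and the sampler is a generic Metropolis scheme rather than the paper's inference procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up scores for four candidate categories: a context prior p(c) from
    # the spatio-temporal context model and a template-matching likelihood
    # p(I | c); both are illustrative, not taken from the paper.
    log_prior = np.log(np.array([0.05, 0.15, 0.30, 0.50]))
    log_like = np.log(np.array([0.20, 0.40, 0.10, 0.30]))

    def log_post(c):
        """Unnormalized log-posterior: log p(c | I) = log p(I | c) + log p(c)."""
        return log_prior[c] + log_like[c]

    # Metropolis sampling over the discrete category variable.
    current, counts = 0, np.zeros(4, dtype=int)
    for _ in range(5000):
        proposal = rng.integers(4)                  # symmetric uniform proposal
        if np.log(rng.random()) < log_post(proposal) - log_post(current):
            current = proposal
        counts[current] += 1

    c_map = int(np.argmax(counts))                  # empirical MAP estimate
    ```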