Spatiotemporal Processing

J. D. Dickman - One of the best experts on this subject based on the ideXlab platform.

  • Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses
    Journal of Neurophysiology, 2000
    Co-Authors: Dora E. Angelaki, J. D. Dickman
    Abstract:

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16–10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically, at low frequencies […] spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus, response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements. The fact that otolith-only central neurons with "high-pass" filter properties exhibit semicircular canal-like dynamics during head tilts might have important consequences for the conclusions of previous studies of sensory convergence and sensorimotor transformations in central vestibular neurons.
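
    The spatiotemporal-convergence mechanism named in this abstract can be illustrated with a minimal phasor model: a central neuron that sums two cosine-tuned sinusoidal inputs with different preferred directions and different response phases shows direction-dependent gain and phase, i.e., 2-D tuning. The sketch below is illustrative only; all parameter values are assumptions, not estimates from the paper.

      import numpy as np

      def central_response(theta, g=(1.0, 0.6),
                           pref_dir=(0.0, np.pi / 2),
                           phase=(0.0, np.pi / 3)):
          """Gain and phase (deg) of a model neuron summing two cosine-tuned
          sinusoidal inputs, for acceleration along direction theta (rad)."""
          z = sum(gi * np.cos(theta - di) * np.exp(1j * phi)
                  for gi, di, phi in zip(g, pref_dir, phase))
          return np.abs(z), np.angle(z, deg=True)

      # With unequal input phases the gain never nulls and the phase drifts
      # with direction -- unlike a 1-D (cosine-tuned) afferent, whose phase is
      # fixed and whose gain vanishes orthogonal to its preferred direction.
      for deg in range(0, 360, 45):
          gain, ph = central_response(np.radians(deg))
          print(f"dir {deg:3d} deg  gain {gain:.2f}  phase {ph:7.1f} deg")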

Dora E. Angelaki - One of the best experts on this subject based on the ideXlab platform.

  • Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses
    Journal of Neurophysiology, 2000
    Co-Authors: Dora E. Angelaki, J. D. Dickman
    Abstract:

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16–10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically, at low frequencies […] spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus, response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements. The fact that otolith-only central neurons with "high-pass" filter properties exhibit semicircular canal-like dynamics during head tilts might have important consequences for the conclusions of previous studies of sensory convergence and sensorimotor transformations in central vestibular neurons.

Ye Tang - One of the best experts on this subject based on the ideXlab platform.

  • LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing
    IEEE Transactions on Neural Networks and Learning Systems, 2021
    Co-Authors: Hehui Zhang, Yihan Lin, Meng Wang, Ye Tang
    Abstract:

    Spiking neural networks (SNNs) based on the leaky integrate and fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Owing to its bioplausible neuronal dynamics and simplicity, the LIF-SNN benefits from event-driven processing; however, it usually suffers from reduced performance. This may be because, in the LIF-SNN, neurons transmit information via spikes. To address this issue, in this work, we propose a leaky integrate and analog fire (LIAF) neuron model so that analog values can be transmitted among neurons, and a deep network termed LIAF-Net is built on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain its temporal processing capability. In the spatial domain, LIAF is able to integrate spatial information through convolutional integration or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. In addition, the built network can be trained directly with backpropagation through time (BPTT), which avoids the performance loss caused by ANN-to-SNN conversion. Experimental results indicate that LIAF-Net achieves performance comparable to the gated recurrent unit (GRU) and long short-term memory (LSTM) on bAbI question answering (QA) tasks and achieves state-of-the-art performance on spatiotemporal dynamic vision sensor (DVS) data sets, including MNIST-DVS, CIFAR10-DVS, and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than traditional networks built with LSTM, GRU, convolutional LSTM (ConvLSTM), or 3-D convolution (Conv3D). Compared with the traditional LIF-SNN, LIAF-Net also shows a dramatic accuracy gain in all these experiments. In conclusion, LIAF-Net provides a framework combining the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
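
    A minimal sketch of the two neuron models contrasted in this abstract, assuming a hard reset and a ReLU analog activation (the paper's exact activation, reset rule, and constants may differ; the input x stands for the result of the layer's convolutional or fully connected spatial integration):

      import numpy as np

      def lif_step(v, x, tau=0.9, v_th=1.0):
          """Classic LIF: leaky integration, binary spike output, hard reset."""
          v = tau * v + x
          spike = (v >= v_th).astype(v.dtype)   # spikes are all a LIF layer emits
          v = v * (1.0 - spike)                 # reset units that fired
          return v, spike

      def liaf_step(v, x, tau=0.9, v_th=1.0):
          """LIAF: identical membrane dynamics, but the transmitted output is an
          analog function of the membrane potential rather than a spike."""
          v = tau * v + x
          out = np.maximum(v, 0.0)              # analog fire (ReLU assumed here)
          v = np.where(v >= v_th, 0.0, v)       # the threshold still drives reset
          return v, out

      v = np.zeros(4)
      for x in np.random.default_rng(0).normal(0.5, 0.3, size=(6, 4)):
          v, out = liaf_step(v, x)              # graded values flow downstream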

  • LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing
    arXiv: Learning, 2020
    Co-Authors: Hehui Zhang, Yihan Lin, Meng Wang, Ye Tang
    Abstract:

    Spiking neural networks (SNNs) based on the Leaky Integrate and Fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to its bio-plausible neuronal dynamics and simplicity, the LIF-SNN benefits from event-driven processing; however, it usually faces reduced performance. This may be because, in the LIF-SNN, neurons transmit information via spikes. To address this issue, in this work, we propose a Leaky Integrate and Analog Fire (LIAF) neuron model, so that analog values can be transmitted among neurons, and a deep network termed LIAF-Net is built on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain its temporal processing capability. In the spatial domain, LIAF is able to integrate spatial information through convolutional or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. Experimental results indicate that LIAF-Net achieves performance comparable to the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) on bAbI Question Answering (QA) tasks, and achieves state-of-the-art performance on spatiotemporal Dynamic Vision Sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than traditional networks built with LSTM, GRU, Convolutional LSTM (ConvLSTM) or 3D convolution (Conv3D). Compared with the traditional LIF-SNN, LIAF-Net also shows a dramatic accuracy gain in all these experiments. In conclusion, LIAF-Net provides a framework combining the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
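
    Because the transmitted output is analog, and therefore differentiable, a LIAF layer can sit alongside ordinary ANN layers and the whole stack trains with plain BPTT, as the journal version of the abstract notes. The hypothetical PyTorch sketch below illustrates this; the layer sizes, tau, threshold, and pooling are assumptions, not the paper's architecture.

      import torch
      import torch.nn as nn

      class ConvLIAF(nn.Module):
          """Conv2d spatial integration combined with LIAF temporal dynamics."""
          def __init__(self, c_in, c_out, tau=0.9, v_th=1.0):
              super().__init__()
              self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
              self.tau, self.v_th = tau, v_th

          def forward(self, x_seq):                  # x_seq: (T, B, C, H, W)
              v, outs = 0.0, []
              for x in x_seq:
                  v = self.tau * v + self.conv(x)    # leak + spatial integration
                  outs.append(torch.relu(v))         # analog fire: differentiable
                  v = torch.where(v >= self.v_th, torch.zeros_like(v), v)
              return torch.stack(outs)

      liaf = ConvLIAF(2, 8)
      head = nn.Linear(8, 10)                        # an ordinary ANN layer on top
      x = torch.randn(5, 4, 2, 16, 16)               # 5 time steps, batch of 4
      feat = liaf(x).mean(dim=(0, 3, 4))             # pool over time and space
      loss = head(feat).square().mean()
      loss.backward()                                # BPTT, no surrogate gradient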

  • LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing
    2020
    Co-Authors: Zhenzhi Wu, Hehui Zhang, Yihan Lin, Guoqi Li, Meng Wang, Ye Tang
    Abstract:

    Spiking neural networks (SNNs) based on the Leaky Integrate and Fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to its bio-plausible neuronal dynamics and simplicity, the LIF-SNN benefits from event-driven processing; however, it usually faces reduced performance. This may be because, in the LIF-SNN, neurons transmit information via spikes. To address this issue, in this work, we propose a Leaky Integrate and Analog Fire (LIAF) neuron model, so that analog values can be transmitted among neurons, and a deep network termed LIAF-Net is built on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain its temporal processing capability. In the spatial domain, LIAF is able to integrate spatial information through convolutional or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. Experimental results indicate that LIAF-Net achieves performance comparable to the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) on bAbI Question Answering (QA) tasks, and achieves state-of-the-art performance on spatiotemporal Dynamic Vision Sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than traditional networks built with LSTM, GRU, Convolutional LSTM (ConvLSTM) or 3D convolution (Conv3D). Compared with the traditional LIF-SNN, LIAF-Net also shows a dramatic accuracy gain in all these experiments. In conclusion, LIAF-Net provides a framework combining the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
    Comment: 14 pages, 9 figures; submitted to IEEE Transactions on Neural Networks and Learning Systems

Andrea Leo - One of the best experts on this subject based on the ideXlab platform.

  • Common spatiotemporal processing of visual features shapes object representation
    Scientific Reports, 2019
    Co-Authors: Paolo Papale, Monica Betta, Giacomo Handjaras, Giulia Malfatti, Luca Cecchetti, Alessandra Cecilia Rampinini, Pietro Pietrini, Emiliano Ricciardi, Luca Turella, Andrea Leo
    Abstract:

    Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to objects from six semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100–150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, suggesting a role for this feature in the refinement of categorical matching.
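
    "Relative Weights Analysis" usually refers to Johnson's (2000) procedure for apportioning R^2 among correlated predictors via an orthogonal approximation of the design matrix. The sketch below implements that standard formulation; it is a plausible reading of the method named in the abstract, not the authors' code, and their exact variant may differ.

      import numpy as np

      def relative_weights(X, y):
          """Each predictor's share of R^2 (Johnson-style relative weights)."""
          Xs = X - X.mean(axis=0)
          Xs /= np.linalg.norm(Xs, axis=0)       # unit-norm predictor columns
          ys = y - y.mean()
          ys /= np.linalg.norm(ys)
          P, d, Qt = np.linalg.svd(Xs, full_matrices=False)
          Z = P @ Qt                             # closest orthonormal matrix to Xs
          Lam = (Qt.T * d) @ Qt                  # so that Xs = Z @ Lam
          beta = Z.T @ ys                        # OLS on orthonormal predictors
          return (Lam ** 2) @ (beta ** 2)        # weights; they sum to R^2

      # Toy demo: two collinear regressors plus one independent regressor.
      rng = np.random.default_rng(1)
      a = rng.normal(size=200)
      X = np.column_stack([a, a + 0.3 * rng.normal(size=200),
                           rng.normal(size=200)])
      y = X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=200)
      w = relative_weights(X, y)
      print(np.round(w, 3), "sum =", round(w.sum(), 3))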

  • Common spatiotemporal processing of visual features shapes object representation
    bioRxiv, 2018
    Co-Authors: Paolo Papale, Monica Betta, Giacomo Handjaras, Giulia Malfatti, Luca Cecchetti, Alessandra Cecilia Rampinini, Pietro Pietrini, Emiliano Ricciardi, Luca Turella, Andrea Leo
    Abstract:

    Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to pictures of items pertaining to different semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100–150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, also suggesting a role for this feature in the refinement of categorical matching.

Paolo Papale - One of the best experts on this subject based on the ideXlab platform.

  • Common spatiotemporal processing of visual features shapes object representation
    Scientific Reports, 2019
    Co-Authors: Paolo Papale, Monica Betta, Giacomo Handjaras, Giulia Malfatti, Luca Cecchetti, Alessandra Cecilia Rampinini, Pietro Pietrini, Emiliano Ricciardi, Luca Turella, Andrea Leo
    Abstract:

    Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to objects from six semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100–150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, suggesting a role for this feature in the refinement of categorical matching.

  • Common spatiotemporal processing of visual features shapes object representation
    bioRxiv, 2018
    Co-Authors: Paolo Papale, Monica Betta, Giacomo Handjaras, Giulia Malfatti, Luca Cecchetti, Alessandra Cecilia Rampinini, Pietro Pietrini, Emiliano Ricciardi, Luca Turella, Andrea Leo
    Abstract:

    Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to pictures of items pertaining to different semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100–150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, also suggesting a role for this feature in the refinement of categorical matching.