Natural Vision

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 73,125 Experts worldwide, ranked by the ideXlab platform.

Jack L Gallant - One of the best experts on this subject based on the ideXlab platform.

  • Natural scene statistics account for the representation of scene categories in human visual cortex
    Neuron, 2013
    Co-Authors: Dustin Stansbury, Jack L Gallant, Thomas Naselaris
    Abstract:

    During natural vision, humans categorize the scenes they encounter: an office, the beach, and so on. These categories are informed by knowledge of the way that objects co-occur in natural scenes. How does the human brain aggregate information about objects to represent scene categories? To explore this issue, we used statistical learning methods to learn categories that objectively capture the co-occurrence statistics of objects in a large collection of natural scenes. Using the learned categories, we modeled fMRI brain signals evoked in human subjects when viewing images of scenes. We find that evoked activity across much of anterior visual cortex is explained by the learned categories. Furthermore, a decoder based on these scene categories accurately predicts the categories and objects comprising novel scenes from brain activity evoked by those scenes. These results suggest that the human brain represents scene categories that capture the co-occurrence statistics of objects in the world.
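The co-occurrence statistics that these categories are learned from can be sketched in a few lines. The paper fits a topic model (latent Dirichlet allocation) to object labels; the toy code below only builds the object co-occurrence matrix that such a model is trained on, with invented scene and object labels:

```python
import numpy as np

# Toy sketch: represent each scene as the set of objects it contains, then
# summarize category structure via object co-occurrence counts. (The paper
# itself fits latent Dirichlet allocation to these statistics; this sketch
# only constructs the raw co-occurrence matrix such a model learns from.)
scenes = [
    ["desk", "chair", "monitor"],   # office-like scenes (invented labels)
    ["desk", "chair", "lamp"],
    ["sand", "water", "umbrella"],  # beach-like scenes
    ["sand", "water", "towel"],
]
objects = sorted({o for s in scenes for o in s})
idx = {o: i for i, o in enumerate(objects)}

cooc = np.zeros((len(objects), len(objects)))
for scene in scenes:
    for a in scene:
        for b in scene:
            if a != b:
                cooc[idx[a], idx[b]] += 1

# Objects that appear together often (desk/chair) end up with similar rows.
print(cooc[idx["desk"], idx["chair"]])  # desk and chair co-occur in 2 scenes
```

Clustering the rows of `cooc` groups objects that tend to appear in the same scenes (desk with chair, sand with water), which is the raw signal behind the learned scene categories.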

  • Attention during natural vision warps semantic representation across the human brain
    Nature Neuroscience, 2013
    Co-Authors: Tolga Çukur, Shinji Nishimoto, Alexander G Huth, Jack L Gallant
    Abstract:

    Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
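The tuning-shift finding can be illustrated with a toy calculation (not the paper's voxelwise encoding models): if a voxel's tuning is a weight vector over semantic categories, attention expanding the attended category corresponds to that category's normalized weight growing between tasks. All values below are simulated, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories = 10
target = 3  # index of the attended category (hypothetical)

# Hypothetical voxel tuning profiles (one weight per semantic category),
# estimated separately under two attention tasks. In the paper these come
# from encoding models fit to fMRI data; here they are simulated.
tuning_task_a = rng.normal(size=n_categories)
tuning_task_b = tuning_task_a.copy()
tuning_task_b[target] += 1.0  # attention boosts the attended category

def shift_toward(tuning_a, tuning_b, cat):
    """Change in the normalized weight on category `cat` between tasks."""
    a = tuning_a / np.linalg.norm(tuning_a)
    b = tuning_b / np.linalg.norm(tuning_b)
    return b[cat] - a[cat]

# Because only the target weight grows while the others stay fixed, its
# normalized share strictly increases, i.e. the tuning "warps" toward it.
print(shift_toward(tuning_task_a, tuning_task_b, target) > 0)  # → True
```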

  • Neural representation of natural images in visual area V2
    The Journal of Neuroscience, 2010
    Co-Authors: Ben D B Willmore, Ryan J Prenger, Jack L Gallant
    Abstract:

    Area V2 is a major visual processing stage in mammalian visual cortex, but little is currently known about how V2 encodes information during natural vision. To determine how V2 represents natural images, we used a novel nonlinear system identification approach to obtain quantitative estimates of spatial tuning across a large sample of V2 neurons. We compared these tuning estimates with those obtained in area V1, in which the neural code is relatively well understood. We find two subpopulations of neurons in V2. Approximately one-half of the V2 neurons have tuning that is similar to V1. The other half of the V2 neurons are selective for complex features such as those that occur in natural scenes. These neurons are distinguished from V1 neurons mainly by the presence of stronger suppressive tuning. Selectivity in these neurons therefore reflects a balance between excitatory and suppressive tuning for specific features. These results provide a new perspective on how complex shape selectivity arises, emphasizing the role of suppressive tuning in determining stimulus selectivity in higher visual cortex.

  • Attention to stimulus features shifts spectral tuning of V4 neurons during natural vision
    Neuron, 2008
    Co-Authors: Stephen V David, Benjamin Y Hayden, James A Mazer, Jack L Gallant
    Abstract:

    Previous neurophysiological studies suggest that attention can alter the baseline or gain of neurons in extrastriate visual areas but that it cannot change tuning. This suggests that neurons in visual cortex function as labeled lines whose meaning does not depend on task demands. To test this common assumption, we used a system identification approach to measure spatial frequency and orientation tuning in area V4 during two attentionally demanding visual search tasks, one that required fixation and one that allowed free viewing during search. We found that spatial attention modulates response baseline and gain but does not alter tuning, consistent with previous reports. In contrast, feature-based attention often shifts neuronal tuning. These tuning shifts are inconsistent with the labeled-line model and tend to enhance responses to stimulus features that distinguish the search target. Our data suggest that V4 neurons behave as matched filters that are dynamically tuned to optimize visual search.

Daniel Palanker - One of the best experts on this subject based on the ideXlab platform.

  • Decoding network-mediated retinal response to electrical stimulation: implications for fidelity of prosthetic vision
    Journal of Neural Engineering, 2020
    Co-Authors: Alexander Shmakov, Daniel Palanker
    Abstract:

    Objective. Patients with the photovoltaic subretinal implant PRIMA demonstrated letter acuity ~0.1 logMAR worse than the sampling limit for 100 μm pixels (1.3 logMAR) and performed slower than healthy subjects tested with equivalently pixelated images. To explore the underlying differences between natural and prosthetic vision, we compare the fidelity of retinal response to visual and subretinal electrical stimulation through single-cell modeling and ensemble decoding. Approach. Responses of retinal ganglion cells (RGCs) to optical or electrical white-noise stimulation in healthy and degenerate rat retinas were recorded via a multielectrode array (MEA). Each RGC was fit with linear-nonlinear (LN) and convolutional neural network (CNN) models. To characterize RGC noise, we compared statistics of spike-triggered averages (STAs) in RGCs responding to electrical or visual stimulation of healthy and degenerate retinas. At the population level, we constructed a linear decoder to determine the accuracy of the ensemble of RGCs on N-way discrimination tasks. Main results. Although computational models can match natural visual responses well (correlation ~0.6), they fit significantly worse to spike timings elicited by electrical stimulation of the healthy retina (correlation ~0.15). In the degenerate retina, the response to electrical stimulation is equally poor. The signal-to-noise ratio of electrical STAs in degenerate retinas matched that of the natural responses when 78 ± 6.5% of the spikes were replaced with random timing. However, the noise in RGC responses contributed minimally to errors in ensemble decoding. The determining factor in decoding accuracy was the number of responding cells. To compensate for fewer responding cells under electrical stimulation than in natural vision, more presentations of the same stimulus are required to deliver sufficient information for image decoding. Significance. Slower-than-natural pattern identification by patients with the PRIMA implant may be explained by the lower number of electrically activated cells than in natural vision, which is compensated for by a larger number of stimulus presentations.
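Two ingredients of this analysis, the spike-triggered average and the noise-matching procedure of replacing a fraction of spikes with random timing, can be sketched as follows. The stimulus, filter, and spike-generation rule below are invented stand-ins, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, lag = 20000, 15

# White-noise stimulus and spikes generated from a known linear filter
# (toy stand-in for the recorded RGC responses; parameters are illustrative).
stim = rng.normal(size=T)
true_filter = np.exp(-np.arange(lag) / 4.0)
drive = np.convolve(stim, true_filter, mode="full")[:T]
spikes = (drive > np.quantile(drive, 0.95)).astype(float)

def sta(stim, spikes, lag):
    """Spike-triggered average: mean stimulus window preceding each spike."""
    idx = np.nonzero(spikes)[0]
    idx = idx[idx >= lag]
    return np.mean([stim[i - lag + 1:i + 1] for i in idx], axis=0)

def randomize_spikes(spikes, frac, rng):
    """Replace a fraction of spike times with random times (the noise model
    the paper uses to match the SNR of electrically evoked STAs)."""
    out = spikes.copy()
    idx = np.nonzero(out)[0]
    kill = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    out[kill] = 0
    out[rng.choice(len(out), size=len(kill), replace=False)] = 1
    return out

clean = sta(stim, spikes, lag)
noisy = sta(stim, randomize_spikes(spikes, 0.78, rng), lag)
# The randomized-spike STA is visibly flatter (lower SNR) than the clean one.
print(np.max(np.abs(clean)), np.max(np.abs(noisy)))
```

With 78% of the spikes randomized, the STA retains only a faint version of the underlying filter, mirroring the paper's estimate of how noisy electrically evoked responses are relative to natural ones.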

  • Decoding network-mediated retinal response to electrical stimulation: implications for fidelity of prosthetic vision
    bioRxiv, 2020
    Co-Authors: Alexander Shmakov, Daniel Palanker
    Abstract:

    Objective: Patients with the photovoltaic subretinal implant PRIMA demonstrated letter acuity ~0.1 logMAR worse than the sampling limit for 100 μm pixels (1.3 logMAR), and performed slower than healthy subjects, who exceeded the sampling limit with equivalently pixelated images by ~0.2 logMAR. To explore the underlying differences between natural and prosthetic vision, we compare the fidelity of the retinal response to visual and subretinal electrical stimulation through single-cell modeling and ensemble decoding. Approach: Responses of retinal ganglion cells (RGCs) to optical or electrical (1 mm diameter arrays, 75 μm pixels) white-noise stimulation in healthy and degenerate rat retinas were recorded via a multielectrode array (MEA). Each RGC was fit with linear-nonlinear (LN) and convolutional neural network (CNN) models. To characterize the RGC noise level, we compared statistics of the spike-triggered average (STA) in RGCs responding to electrical or visual stimulation of healthy and degenerate retinas. At the population level, we constructed a linear decoder to determine the certainty with which the ensemble of RGCs can support N-way discrimination tasks. Main results: Although LN and CNN models can match the natural visual responses well (correlation ~0.6), they fit significantly worse to spike timings elicited by electrical stimulation of the healthy retina (correlation ~0.15). In the degenerate retina, the response to electrical stimulation is equally poor. The signal-to-noise ratio of electrical STAs in degenerate retinas matched that of the natural responses when 78 ± 6.5% of the spikes were replaced with random timing. However, the noise in RGC responses contributed minimally to errors in the ensemble decoding. The determining factor in decoding accuracy was the number of responding cells. To compensate for fewer responding cells under electrical stimulation than in natural vision, a larger number of presentations of the same stimulus is required to deliver sufficient information for image decoding. Significance: Slower-than-natural pattern identification by patients with the PRIMA implant may be explained by the lower number of electrically activated cells than in natural vision, which is compensated for by a larger number of stimulus presentations.

  • Cortical interactions between prosthetic and natural vision
    Current Biology, 2020
    Co-Authors: Tamar Arensarad, Nairouz Farah, Rivkah Lender, Avital Moshkovitz, Thomas Flores, Daniel Palanker, Yossi Mandel
    Abstract:

    Outer retinal degenerative diseases, such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD), are among the leading causes of incurable blindness in the Western world [1]. Retinal prostheses have been shown to restore some useful vision by electrically stimulating the remaining retinal neurons [2]. In contrast to inherited retinal degenerative diseases (e.g., RP), which typically lead to a complete loss of the visual field, in AMD patients the disease is localized to the macula, leaving peripheral vision intact. Implanting a retinal prosthesis in the central macula of AMD patients [3, 4] leads to an intriguing situation where the patient's central retina is stimulated electrically, whereas the peripheral healthy retina responds to natural light stimulation. An important question is whether the visual cortex responds to these two concurrent stimuli similarly to the interaction between two adjacent natural light stimuli projected onto a healthy retina. Here, we investigated the cortical interactions between prosthetic and natural vision based on visually evoked potentials (VEPs) recorded in rats implanted with photovoltaic subretinal implants. Using this model, where prosthetic and natural visual information are combined in the visual cortex, we observed striking similarities in the interactions of natural and prosthetic vision, including a similar effect of background illumination, linear summation of non-patterned stimuli, and lateral inhibition with spatial patterns [5], which increased with target contrast. These results support the idea of combining prosthetic and natural vision in the restoration of sight for AMD patients.

  • Interactions of prosthetic and natural vision in animals with local retinal degeneration
    Investigative Ophthalmology & Visual Science, 2015
    Co-Authors: Henri Lorach, Xin Lei, Ludwig Galambos, Theodore I Kamins, Keith Mathieson, Roopa Dalal, Philip Huie, J S Harris, Daniel Palanker
    Abstract:

    Prosthetic restoration of partial sensory loss leads to interactions between artificial and natural inputs. Ideally, the rehabilitation should allow perceptual fusion of the two modalities. Here we studied the interactions between normal and prosthetic vision in a rodent model of local retinal degeneration. Implantation of a photovoltaic array in the subretinal space of normally sighted rats induced local degeneration of the photoreceptors above the chip, and the inner retinal neurons in this area were electrically stimulated by the photovoltaic implant powered by near-infrared (NIR) light. We studied prosthetic and natural visually evoked potentials (VEPs) in response to simultaneous stimulation by NIR and visible light patterns. We demonstrate that electrical and natural VEPs summed linearly in the visual cortex, and both responses decreased under brighter ambient light. Responses to visible light flashes increased over 3 orders of magnitude of contrast (flash/background), while for electrical stimulation the contrast range was limited to 1 order of magnitude. The maximum amplitude of the prosthetic VEP was three times lower than the maximum response to a visible flash over the same area of the retina. Ambient light affects prosthetic responses, albeit much less than responses to visible stimuli. Prosthetic representation of contrast in the visual scene can be encoded, to a limited extent, by appropriately calibrated stimulus intensity, which also depends on the ambient light conditions. Such calibration will be important for patients combining central prosthetic vision with natural peripheral sight, such as in age-related macular degeneration.

Shinji Nishimoto - One of the best experts on this subject based on the ideXlab platform.

  • Attention during natural vision warps semantic representation across the human brain
    Nature Neuroscience, 2013
    Co-Authors: Tolga Çukur, Shinji Nishimoto, Alexander G Huth, Jack L Gallant
    Abstract:

    Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.

Vasishta Pavan - One of the best experts on this subject based on the ideXlab platform.

  • Acquisition and exploitation of prior knowledge to predict pedestrian behaviour around autonomous vehicles in urban environments
    HAL CCSD, 2019
    Co-Authors: Vasishta Pavan
    Abstract:

    Autonomous vehicles navigating in urban areas interact with pedestrians and other shared-space users, such as cyclists, throughout their journey, whether in open areas like city centers or in closed areas like parking lots. As more and more autonomous vehicles take to the city streets, their ability to understand and predict pedestrian behaviour becomes paramount. This is achieved by learning through continuous observation of the area to be driven in. Human drivers, on the other hand, can instinctively infer pedestrian motion on an urban street even in previously unseen areas. This need to increase a vehicle's situational awareness to reach parity with human drivers fuels the demand for larger and deeper data on pedestrian motion in a myriad of situations and varying environments. This thesis focuses on the problem of reducing this dependency on large amounts of data while still predicting pedestrian motion accurately over an extended horizon. Instead, this work relies on prior knowledge, derived from J. J. Gibson's sociological principles of "Natural Vision" and "Natural Movement". It assumes that pedestrian behaviour is a function of the built environment and that all motion is directed towards reaching a goal. Given this underlying principle, the cost of traversing a scene from a pedestrian's perspective can be estimated, and inference on pedestrian behaviour can then be performed. This work contributes to a framework for understanding pedestrian behaviour as a confluence of probabilistic graphical models and sociological principles in three ways: modelling the environment, learning, and prediction. Concerning modelling, the work assumes that some parts of the observed scene are more attractive to pedestrians and others repulsive. By quantifying these "affordances" as a consequence of certain Points of Interest (POIs) and the different elements in the scene, it is possible to model the scene under observation with different costs reflecting the features it contains. Concerning learning, this work primarily extends the Growing Hidden Markov Model (GHMM) method (a variant of the probabilistic Hidden Markov Model, HMM) with the application of prior knowledge to initialise a topology able to infer "typical motions" in the scene accurately. The generated model also behaves as a self-organising map, incrementally learning non-typical pedestrian behaviour and encoding it within the topology while updating the parameters of the underlying HMM. Concerning prediction, this work carries out Bayesian inference on the generated model and, as a result of prior knowledge, performs better than the existing implementation of the GHMM method at predicting future pedestrian positions without training trajectories being available, thereby allowing its use in an urban scene with only environmental data. The contributions of this thesis are validated through experimental results on real data captured from an overhead camera overlooking a busy urban street (a structured built environment) and from a car's perspective in a parking lot (a semi-structured environment), tested on typical and non-typical trajectories in each case.
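The core idea of the thesis, goal-directed motion over a cost-weighted topology with prediction by Bayesian propagation, can be reduced to a toy Markov chain on a 1-D corridor. This is a minimal sketch, not the Growing HMM the thesis actually extends; the grid, goal position, and bias parameter are all invented for illustration:

```python
import numpy as np

# Minimal sketch of goal-directed Markov prediction on a 1-D corridor of
# discrete cells. Transition probabilities are biased toward a goal cell,
# standing in for the cost map the thesis derives from the built
# environment; the real work uses a Growing HMM over a learned topology.
n_cells, goal = 10, 9

def transition_matrix(n, goal, bias=0.7):
    P = np.zeros((n, n))
    for s in range(n):
        moves = {}
        if s > 0:
            moves[s - 1] = 1.0
        if s < n - 1:
            moves[s + 1] = 1.0
        # weight moves that reduce distance to the goal more heavily
        for t in moves:
            moves[t] = bias if abs(t - goal) < abs(s - goal) else 1 - bias
        z = sum(moves.values())
        for t, w in moves.items():
            P[s, t] = w / z
    return P

P = transition_matrix(n_cells, goal)
belief = np.zeros(n_cells)
belief[0] = 1.0  # pedestrian observed at cell 0

# Predict the position distribution k steps ahead by propagating the belief.
for _ in range(12):
    belief = belief @ P
print(belief)  # probability mass drifts toward the goal cell
```

Replacing the hand-set bias with costs derived from the scene's affordances, and the chain with an HMM whose topology grows from observations, recovers the structure of the thesis's approach.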

Bharat B Biswal - One of the best experts on this subject based on the ideXlab platform.

  • Intersubject-consistent dynamic connectivity during natural vision revealed by functional MRI
    NeuroImage, 2020
    Co-Authors: Bharat B Biswal
    Abstract:

    The functional communications between brain regions are thought to be dynamic. However, it is usually difficult to elucidate whether observed dynamic connectivity is functionally meaningful or simply due to noise during unconstrained task conditions such as resting state. During naturalistic conditions, such as watching a movie, it has been shown that local brain activities, e.g. in the visual cortex, are consistent across subjects. Following similar logic, we propose to study intersubject correlations of the time courses of dynamic connectivity during naturalistic conditions in order to extract functionally meaningful dynamic connectivity patterns. We analyzed a functional MRI (fMRI) dataset in which the subjects watched a short animated movie. We calculated dynamic connectivity using a sliding-window technique and quantified the intersubject correlations of the time courses of dynamic connectivity. Although the time courses of dynamic connectivity are thought to be noisier than the original signals, we found a similar level of intersubject correlations of dynamic connectivity to those of regional activity. Most importantly, highly consistent dynamic connectivity could occur between regions that did not show high intersubject correlations of regional activity, and between regions with little stable functional connectivity. The analysis highlighted higher-order brain regions, such as the default mode network, that dynamically interacted with posterior visual regions during movie watching, which may be associated with understanding of the movie.
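The proposed analysis, sliding-window connectivity time courses followed by intersubject correlation of those time courses, can be sketched on simulated data. The shared-signal generative model below is an invented stand-in for movie-driven fMRI, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sub, T, win = 4, 300, 30

# Simulated time courses for two regions per subject, with a shared
# stimulus-driven component so that dynamic connectivity is partly
# consistent across subjects (illustrative stand-in for movie fMRI).
shared = rng.normal(size=T)
subjects = []
for _ in range(n_sub):
    a = shared + 0.5 * rng.normal(size=T)
    b = shared + 0.5 * rng.normal(size=T)
    subjects.append((a, b))

def sliding_corr(x, y, win):
    """Time course of windowed Pearson correlation (dynamic connectivity)."""
    return np.array([np.corrcoef(x[t:t + win], y[t:t + win])[0, 1]
                     for t in range(len(x) - win + 1)])

# Dynamic-connectivity time course per subject, then intersubject
# correlation (ISC): correlate each subject's time course with the others'.
dc = [sliding_corr(a, b, win) for a, b in subjects]
isc = np.mean([np.corrcoef(dc[i], dc[j])[0, 1]
               for i in range(n_sub) for j in range(i + 1, n_sub)])
print(isc)  # positive: connectivity fluctuations are shared across subjects
```

Windows in which the shared component dominates yield high connectivity for every subject simultaneously, so the connectivity time courses correlate across subjects even though each subject's noise is independent; this is the logic the paper uses to separate meaningful dynamics from noise.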

  • Intersubject-consistent dynamic connectivity during natural vision revealed by functional MRI
    bioRxiv, 2019
    Co-Authors: Bharat B Biswal
    Abstract:

    The functional communications between brain regions are thought to be dynamic. However, it is usually difficult to elucidate whether observed dynamic connectivity is functionally meaningful or simply due to noise during unconstrained task conditions such as resting state. During naturalistic conditions, such as watching a movie, it has been shown that brain activities in the same region, e.g. the visual cortex, are consistent across subjects. Following similar logic, we proposed to study intersubject correlations of the time courses of dynamic connectivity during naturalistic conditions in order to extract functionally meaningful dynamic connectivity patterns. We analyzed a functional MRI (fMRI) dataset in which the subjects watched a short animated movie. We calculated dynamic connectivity using a sliding-window technique, and further quantified the intersubject correlations of the time courses of dynamic connectivity. Although the time courses of dynamic connectivity are thought to be noisier than the original signals, we found a similar level of intersubject correlations of dynamic connectivity. Most importantly, highly consistent dynamic connectivity could occur between regions that did not show intersubject correlations of regional activity, and between regions with little stable functional connectivity. The analysis highlighted higher-order brain regions, such as the lateral prefrontal cortex and the default mode network, that dynamically interact with posterior visual regions during movie watching, which may be associated with understanding of the movie.

    Highlights:
      • Intersubject-shared time courses may provide a complementary approach to studying dynamic connectivity
      • Widespread regions showed highly shared dynamic connectivity during movie watching, while these regions themselves did not show shared regional activity
      • Shared dynamic connectivity often occurred between regions from different functional systems