Visual Modality

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 35,922 experts worldwide, ranked by the ideXlab platform.

István Czigler - One of the best experts on this subject based on the ideXlab platform.

  • Visual Mismatch Negativity (vMMN): A Prediction Error Signal in the Visual Modality
    Frontiers in Human Neuroscience, 2015
    Co-Authors: Gábor Stefanics, Piia Astikainen, István Czigler
    Abstract:

    Our visual field contains much more information at every moment than we can attend to and consciously process. How is the multitude of unattended events processed in the brain and selected for further attentive evaluation? Current theories of visual change detection emphasize the importance of conscious attention for detecting changes in the visual environment. However, an increasing body of studies shows that the human brain is capable of detecting even small visual changes if such changes violate non-conscious probabilistic expectations based on prior experience. In other words, our brain automatically represents environmental statistical regularities.

    Since the discovery of the auditory mismatch negativity (MMN) event-related potential (ERP) component, the majority of research in the field has focused on auditory deviance detection. Such automatic change detection mechanisms operate in the visual modality too, as indicated by the visual mismatch negativity (vMMN) brain potential to rare changes. vMMN is typically elicited by stimuli with infrequent (deviant) features embedded in a stream of frequent (standard) stimuli, outside the focus of attention. Information about both simple and more complex characteristics of stimuli is rapidly processed and stored by the brain in the absence of conscious attention.

    In this Research Topic we aim to present vMMN as a prediction error signal and put it in the context of the hierarchical predictive coding framework. Predictive coding theories account for phenomena such as MMN and repetition suppression, and place them in the broader context of a general theory of cortical responses (Friston, 2005, 2010). Each paper in this Research Topic is a valuable contribution to the field of automatic visual change detection and deepens our understanding of the short-term plasticity underlying predictive processes of visual perceptual learning.

    A wide range of vMMN studies is presented in seventeen articles in this Research Topic. Twelve articles address roughly four general sub-themes: attention, language, face processing, and psychiatric disorders. Additionally, four articles focus on particular subjects such as the oblique effect, object formation, development, and time-frequency analysis of vMMN. Furthermore, a review paper presents vMMN in a hierarchical predictive coding framework.

    Four articles investigated the relationship between attention and vMMN. Kremláček et al. (2013) presented subjects with radial motion stimuli in the periphery of the visual field using an oddball paradigm and manipulated attentional load by varying the difficulty of a central distractor task. They aimed to manipulate the amount of available attentional resources that might have been involuntarily captured by the vMMN-evoking stimuli presented in the periphery, outside the attentional focus. The distractor task had three difficulty levels: (1) central fixation (easy), and a target-number detection task with (2) one target number (moderate) or (3) three target numbers (difficult). Analysis of deviant-minus-standard difference waveforms revealed a significant posterior negativity in the ∼140–200 ms interval, which was unaffected by the difficulty of the central task, indicating that the automatic processes underlying registration of changes in motion are independent of the attentional resources used to detect target numbers.

    Kimura and Takeda (2013) investigated whether characteristics of vMMN depend on the difficulty of an attended primary task, i.e., they tested the level of automaticity of the vMMN. Task difficulty was manipulated as the magnitude of change of a circle at fixation, and vMMN was elicited by deviant orientations of bar patterns. An equal-probability control condition was also used. The difference potential between the deviant-related ERP and the ERP elicited by an identical orientation pattern in the control condition appeared to be influenced by the difficulty of the attentive task: the latency of the difference potential (i.e., the vMMN) increased as a function of task difficulty, indicating that the processes underlying vMMN to orientation changes are not fully independent of the attentional demands of ongoing tasks.

    Kuldkepp et al. (2013) used rare changes in the direction of peripheral motion to evoke vMMN, applying a novel continuous whole-display stimulus configuration. The demanding distractor task involved motion-onset detection and was presented in the center of the visual field. The level of attention to the vMMN-evoking stimuli was varied by manipulating their task relevance using "Ignore" and "Attend" conditions. Deviant-minus-standard waveforms in the "Ignore" condition showed significant vMMN in the 100–200, 250–300, and 235–375 ms intervals, whereas in


Gábor Stefanics - One of the best experts on this subject based on the ideXlab platform.

  • Visual Mismatch Negativity (vMMN): A Prediction Error Signal in the Visual Modality
    Frontiers in Human Neuroscience, 2015
    Co-Authors: Gábor Stefanics, Piia Astikainen, István Czigler


Beauchaud Marilyn - One of the best experts on this subject based on the ideXlab platform.

  • Appraisal of unimodal cues during agonistic interactions in Maylandia zebra.
    'PeerJ', 2017
    Co-Authors: Chabrolles Laura, Ben Ammar Imen, Fernandez, Marie S. A., Boyer Nicolas, Attia Joël, Fonseca, Paulo J, Amorim, Clara M P, Beauchaud Marilyn
    Abstract:

    Communication is essential during social interactions, including animal conflicts, and it is often a complex process involving multiple sensory channels or modalities. To better understand how different modalities interact during communication, it is fundamental to study the behavioural responses to both the composite multimodal signal and each unimodal component with adequate experimental protocols. Here we test how an African cichlid, which communicates with multiple senses, responds to different sensory stimuli in a socially relevant scenario. We tested Maylandia zebra males with isolated chemical (urine or holding water, both coming from dominant males), visual (real opponent or video playback) and acoustic (agonistic sounds) cues during agonistic interactions. We showed that (1) these fish relied mostly on the visual modality, showing increased aggressiveness in response to the sight of a real contestant but no response to urine or agonistic sounds presented separately; (2) video playback in our study did not appear appropriate for testing the visual modality and needs further technical prospecting; (3) holding water provoked territorial behaviours and seems promising for investigating the role of the chemical channel in this species. Our findings suggest that unimodal signals are non-redundant, but how different sensory modalities interplay during communication remains largely unknown in fish.


Piia Astikainen - One of the best experts on this subject based on the ideXlab platform.

  • Visual Mismatch Negativity (vMMN): A Prediction Error Signal in the Visual Modality
    Frontiers in Human Neuroscience, 2015
    Co-Authors: Gábor Stefanics, Piia Astikainen, István Czigler


Gary Morgan - One of the best experts on this subject based on the ideXlab platform.

  • The Influence of the Visual Modality on Language Structure and Conventionalization: Insights from Sign Language and Gesture
    Topics in Cognitive Science, 2015
    Co-Authors: Pamela M Perniss, Asli Ozyurek, Gary Morgan
    Abstract:

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different, and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.