Visual Grammar

The experts below are selected from a list of 9,501 experts worldwide, ranked by the ideXlab platform.

Gesche Westphal-Fitch - One of the best experts on this subject based on the ideXlab platform.

  • Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax
    Frontiers in Psychology, 2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch
    Abstract:

    Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or "grammars") according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating abstract visual sequences made up of abstract tiles differing in form and color. We constructed different sets of sequences, using artificial "grammars" (rule sets) at three key complexity levels. Because human linguistic syntax is classed as "mildly context-sensitive," we specifically included a visual grammar at this complexity level. Acquisition of these three grammars was tested in an artificial grammar-learning paradigm: after exposure to a set of well-formed strings, participants were asked to discriminate novel grammatical patterns from non-grammatical patterns. Participants successfully acquired all three grammars after only minutes of exposure, correctly generalizing to novel stimuli and to novel stimulus lengths. A Bayesian analysis excluded multiple alternative hypotheses and showed that the success in rule acquisition applies both at the group level and for most participants analyzed individually. These experimental results demonstrate rapid pattern learning for abstract visual patterns, extending to the mildly context-sensitive level characterizing language. We suggest that a formal equivalence of processing at the mildly context-sensitive level in the visual and linguistic domains implies that cognitive mechanisms with the computational power to process linguistic syntax are not specific to the domain of language, but extend to abstract visual patterns with no meaning.
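
To make the three complexity levels concrete, here is a minimal sketch of stimulus generation and membership testing for toy grammars at the levels the abstract names. The specific rule sets ((AB)^n, A^nB^n, and the cross-serial A^nB^mC^nD^m), and the use of letters in place of colored tiles, are illustrative assumptions, not the paper's actual stimuli.

```python
# A toy generator and membership test for grammars at three complexity
# levels. Letters stand in for tile categories; the real stimuli were
# abstract tiles differing in form and color, and the paper's actual
# rule sets may differ from these illustrative ones.
import re

def regular(n):
    """(AB)^n -- a regular (finite-state) pattern, no counting needed."""
    return "AB" * n

def context_free(n):
    """A^n B^n -- context-free, needs a single counter/stack."""
    return "A" * n + "B" * n

def mildly_cs(n, m):
    """A^n B^m C^n D^m -- crossed dependencies, mildly context-sensitive."""
    return "A" * n + "B" * m + "C" * n + "D" * m

def is_mildly_cs(s):
    """Accept exactly the strings A^n B^m C^n D^m with n, m >= 1."""
    match = re.fullmatch(r"(A+)(B+)(C+)(D+)", s)
    if not match:
        return False
    a, b, c, d = (len(g) for g in match.groups())
    return a == c and b == d  # the two crossed (A-C and B-D) dependencies

# Familiarization strings, then grammatical vs. ungrammatical test items,
# including novel lengths -- the generalization the participants showed.
exposure = [mildly_cs(n, m) for n in (1, 2) for m in (1, 2)]
print(exposure)                  # ['ABCD', 'ABBCDD', 'AABCCD', 'AABBCCDD']
print(is_mildly_cs("AAABCCCD"))  # True: novel length, n=3, m=1
print(is_mildly_cs("AABCCDD"))   # False: B/D dependency violated
```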

W. Tecumseh Fitch - One of the best experts on this subject based on the ideXlab platform.

  • Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax
    Frontiers in Psychology, 2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

  • Table_1_Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax.DOC
    2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

Carlo Cecchetto - One of the best experts on this subject based on the ideXlab platform.

  • Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax
    Frontiers in Psychology, 2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

  • Table_1_Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax.DOC
    2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

Jordan S. Martin - One of the best experts on this subject based on the ideXlab platform.

  • Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax
    Frontiers in Psychology, 2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

  • Table_1_Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax.DOC
    2018
    Co-Authors: Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin, W. Tecumseh Fitch

James C Tilton - One of the best experts on this subject based on the ideXlab platform.

  • Learning Bayesian Classifiers for Scene Classification with a Visual Grammar
    IEEE Transactions on Geoscience and Remote Sensing, 2005
    Co-Authors: Selim Aksoy, Krzysztof Koperski, Carsten Tusk, Giovanni Marchisio, James C Tilton
    Abstract:

    A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and high-level user semantics. Our approach includes modeling image pixels using automatic fusion of their spectral, textural, and other ancillary attributes; segmentation of image regions using an iterative split-and-merge algorithm; and representing scenes by decomposing them into prototype regions and modeling the interactions between these regions in terms of their spatial relationships. Naive Bayes classifiers are used in the learning of models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. The system also automatically learns representative region groups that can distinguish different scenes and builds visual grammar models. Experiments using Landsat scenes show that the visual grammar enables creation of high-level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples.
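
The naive Bayes step can be sketched in a few lines. Everything below is a synthetic stand-in: the feature names, the values, and the two-class "water" example are hypothetical, and the paper's actual attribute fusion and segmentation pipeline are not reproduced.

```python
# A toy Gaussian naive Bayes over per-region feature vectors, echoing the
# abstract's use of naive Bayes with positive/negative examples for a
# user-defined land-cover label. Features and data are made up.
import math

def fit_gaussian_nb(X, y):
    """Per-class feature means/variances plus class priors."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-6)
                     for col, m in zip(zip(*rows), means)]
        model[label] = (means, variances, len(rows) / len(X))
    return model

def log_likelihood(x, means, variances):
    """Log of the product of per-feature Gaussian densities."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
               for v, m, var in zip(x, means, variances))

def classify(model, x):
    """Pick the label maximizing log prior + log likelihood."""
    return max(model, key=lambda lab: math.log(model[lab][2]) +
               log_likelihood(x, model[lab][0], model[lab][1]))

# Hypothetical per-region features (e.g., an NDVI-like and a texture-like
# score); positive examples of "water" versus negative examples.
X = [[0.05, 0.10], [0.08, 0.12], [0.07, 0.09],   # water regions
     [0.60, 0.45], [0.55, 0.50], [0.65, 0.40]]   # non-water regions
y = ["water"] * 3 + ["not_water"] * 3
model = fit_gaussian_nb(X, y)
print(classify(model, [0.06, 0.11]))  # -> water
```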

  • Learning Bayesian Classifiers for a Visual Grammar
    IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003
    Co-Authors: Selim Aksoy, Krzysztof Koperski, Carsten Tusk, Giovanni Marchisio, James C Tilton
    Abstract:

    A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and user semantics. Our approach includes learning prototypes of regions and their spatial relationships for scene classification. First, naive Bayes classifiers perform automatic fusion of features and learn models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. Then, the system automatically learns how to distinguish the spatial relationships of these regions from training data and builds visual grammar models. Experiments using Landsat scenes show that the visual grammar enables creation of higher-level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples.
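
The "visual grammar" itself can be pictured as constraints over spatial relations between labeled prototype regions. The relation vocabulary and the example scene rule below are hypothetical illustrations of the idea, not the models the system actually learns from training data.

```python
# A minimal sketch: a scene is a set of labeled prototype regions, and a
# scene class is defined by spatial relations that must hold between them.
# The relation set and the example rule are invented for illustration.
def relation(a, b):
    """Coarse spatial relation between two region centroids (x, y)."""
    (ax, ay), (bx, by) = a, b
    if abs(ax - bx) >= abs(ay - by):
        return "left_of" if ax < bx else "right_of"
    return "above" if ay < by else "below"  # image y grows downward

def matches(scene, rule):
    """Check that every (label1, relation, label2) constraint holds."""
    pos = {label: centroid for label, centroid in scene}
    return all(l1 in pos and l2 in pos and relation(pos[l1], pos[l2]) == rel
               for l1, rel, l2 in rule)

# Hypothetical "residential-by-the-water" scene class
rule = [("city", "right_of", "water"), ("park", "above", "city")]
scene = [("water", (10, 50)), ("city", (80, 55)), ("park", (85, 20))]
print(matches(scene, rule))  # -> True
```

A rule like this captures exactly what the abstract claims individual pixels or regions cannot: the class is defined by the configuration of regions, not by any one region's features.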