Prototype Theory

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 37089 Experts worldwide ranked by ideXlab platform

Jonathan Lawry - One of the best experts on this subject based on the ideXlab platform.

  • A bipolar model of vague concepts based on random set and Prototype Theory
    International Journal of Approximate Reasoning, 2012
    Co-Authors: Yongchuan Tang, Jonathan Lawry
    Abstract:

    We argue that vagueness is a multi-faceted phenomenon requiring a framework for concept representation incorporating aspects of typicality, semantic uncertainty and indeterminism. In this paper we propose a bipolar model for vague concepts within the framework of Prototype Theory, where concepts are represented by prototypical regions of an underlying conceptual space, and in which the appropriateness of a concept label to describe a given instance is determined on the basis of both a lower and an upper threshold on the distance from the defining Prototype. Essentially, the label is absolutely appropriate as a description provided that the distance to the Prototype is less than the lower threshold, and not absolutely inappropriate if it is less than the upper threshold. Hence, in effect a concept is defined by lower and upper neighbourhoods of the Prototype within the conceptual space, and the borderline region between the neighbourhoods identifies those elements of the space for which the concept label is neither absolutely appropriate nor absolutely inappropriate as a description. Semantic uncertainty is then represented by a joint probability density function on the lower and upper thresholds, so that the lower and upper neighbourhoods correspond to nested random sets. This naturally results in lower and upper appropriateness measures quantifying the belief that a concept label is absolutely appropriate, and not absolutely inappropriate, to describe a given element of the space. These measures can then be related to the random set interpretation of fuzzy sets and in particular to lower and upper membership functions in interval fuzzy set Theory.
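As a toy illustration of the bipolar model above, the sketch below estimates the lower and upper appropriateness measures by Monte Carlo sampling of a joint density on the two thresholds. The density chosen here (uniform, with the upper threshold nested above the lower) is a hypothetical choice for illustration only, not the one used in the paper, and the conceptual space is one-dimensional.

```python
import random

def bipolar_measures(distance, n_samples=100_000, seed=42):
    """Monte Carlo estimate of the lower and upper appropriateness
    measures for a label, given the distance from an instance to the
    label's prototype.

    The joint density on the (lower, upper) thresholds is a toy choice:
    lower ~ U(0, 1) and upper = lower + U(0, 1), which guarantees the
    nesting lower <= upper required by the model.
    """
    rng = random.Random(seed)
    lower_hits = upper_hits = 0
    for _ in range(n_samples):
        lo = rng.random()          # lower threshold ~ U(0, 1)
        hi = lo + rng.random()     # upper threshold, nested above lo
        if distance <= lo:
            lower_hits += 1        # "absolutely appropriate"
        if distance <= hi:
            upper_hits += 1        # "not absolutely inappropriate"
    return lower_hits / n_samples, upper_hits / n_samples
```

Instances inside the lower neighbourhood score high on both measures; borderline instances score high only on the upper measure, mirroring the nested-random-set structure of the model.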

  • A Random Set and Prototype Theory Interpretation of Intuitionistic Fuzzy Sets
    International Conference Information Processing, 2010
    Co-Authors: Jonathan Lawry
    Abstract:

    An interpretation of intuitionistic fuzzy sets is proposed based on random set Theory and Prototype Theory. The extensions of fuzzy labels are modelled by lower and upper random set neighbourhoods, identifying those elements of the universe within an uncertain distance threshold of a set of prototypical elements. These neighbourhoods are then generalised to compound fuzzy descriptions generated as logical combinations of basic fuzzy labels. The single point coverage functions of these lower and upper random sets are then shown to generate lower and upper membership functions satisfying the min-max combination rules of interval fuzzy set Theory, the latter being isomorphic to intuitionistic fuzzy set Theory.
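The min-max combination rules for interval-valued memberships mentioned above can be stated very compactly; the sketch below implements them for membership intervals written as (lower, upper) pairs. This is the standard interval fuzzy calculus, not code from the paper.

```python
def conj(a, b):
    """Interval conjunction: pointwise min of lower and upper bounds."""
    return (min(a[0], b[0]), min(a[1], b[1]))

def disj(a, b):
    """Interval disjunction: pointwise max of lower and upper bounds."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def neg(a):
    """Interval negation: complement both bounds and swap them."""
    return (1 - a[1], 1 - a[0])
```

Under the random-set reading, the lower bound is the single point coverage of the lower neighbourhood and the upper bound that of the upper neighbourhood.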

  • Uncertainty modelling for vague concepts: A Prototype Theory approach
    Artificial Intelligence, 2009
    Co-Authors: Jonathan Lawry, Yongchuan Tang
    Abstract:

    An epistemic model of the uncertainty associated with vague concepts is introduced. Label semantics Theory is proposed as a framework for quantifying an agent's uncertainty concerning what labels are appropriate to describe a given example. An interpretation of label semantics is then proposed which incorporates Prototype Theory by introducing uncertain thresholds on the distance between elements and Prototypes for description labels. This interpretation naturally generates a functional calculus for appropriateness measures. A more general model with distinct threshold variables for different labels is discussed and we show how different kinds of semantic dependence can be captured in this model.

  • Relating Prototype Theory and Label Semantics
    Soft Methods in Probability and Statistics, 2008
    Co-Authors: Jonathan Lawry, Yongchuan Tang
    Abstract:

    An interpretation of the label semantics framework is introduced based on Prototype Theory. Within this interpretation it is shown that the appropriateness of an expression is characterised by an interval constraint on a parameter e. Here e is an uncertain distance threshold according to which an element x is sufficiently close to the Prototype p_i of a label L_i for L_i to be deemed appropriate to describe x, if the distance between x and p_i is less than or equal to e. Appropriateness measures and mass functions are then defined in terms of an underlying probability density function δ on e.
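The appropriateness measure sketched above is simply the probability that the uncertain threshold e exceeds the distance from x to the prototype. A minimal sketch, assuming a one-dimensional conceptual space and a hypothetical exponential density δ(e) = λ·exp(−λe), for which the integral has a closed form:

```python
import math

def appropriateness(x, prototype, lam=1.0):
    """Appropriateness of a label for x: the probability that the
    uncertain threshold e is at least the distance d(x, p).
    Assumes a 1-D conceptual space and an exponential density
    delta(e) = lam * exp(-lam * e) on e (a toy choice, for which
    P(e >= d) = exp(-lam * d) in closed form)."""
    d = abs(x - prototype)
    return math.exp(-lam * d)
```

The measure equals 1 at the prototype itself and decays with distance, as expected of a membership-like function.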

  • Scalable Fuzzy Algorithms for Data Management and Analysis - A Random Set and Prototype Theory Model of Linguistic Query Evaluation
    Scalable Fuzzy Algorithms for Data Management and Analysis, 1
    Co-Authors: Jonathan Lawry, Yongchuan Tang
    Abstract:

    This chapter proposes a new interpretation of quantified linguistic queries based on a combination of random set Theory and Prototype Theory, consistent with the label semantics framework. In this approach concepts are defined by random set neighbourhoods of a set of Prototypes, and quantifiers are similarly defined by random set constraints on ratios or absolute values. The authors then propose a computationally feasible method for evaluating quantified statements describing the elements of a database.
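A minimal sketch of the kind of evaluation described: a quantified statement such as "most elements of the database are L" is judged by jointly sampling the distance threshold defining L and the ratio threshold defining "most". Both densities below are toy choices, and plain Monte Carlo stands in for the authors' computationally optimised procedure.

```python
import random

def eval_quantified_query(db, prototype, n_samples=50_000, seed=1):
    """Monte Carlo evaluation of a query like 'most elements of db
    are L', where L is a random-set neighbourhood of a single
    prototype and 'most' is a random-set constraint on the ratio.
    Toy densities: the distance threshold e ~ U(0, 1) and the ratio
    threshold for 'most' ~ U(0.5, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        e = rng.random()                 # threshold defining L
        q = 0.5 + 0.5 * rng.random()     # threshold defining 'most'
        ratio = sum(abs(x - prototype) <= e for x in db) / len(db)
        if ratio >= q:
            hits += 1                    # sampled sense of query holds
    return hits / n_samples
```

The returned value is the probability, over the semantic uncertainty, that the quantified statement is true of the database.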

Yongchuan Tang - One of the best experts on this subject based on the ideXlab platform.

  • Prototype Theory for Learning
    Uncertainty Modeling for Data Mining, 2014
    Co-Authors: Zengchang Qin, Yongchuan Tang
    Abstract:

    Assume that X is the input variable defined on the domain ℝ^k, and Y is the output variable defined on the domain ℝ. Now assume that we have a training data set DB = {(x^j_1, …, x^j_k, y^j) : j = 1, …, N}. We now consider how to derive a linguistic rule base from this training data set which fits the data accurately and at the same time has high generalization capability. In the following we first propose a rule induction method which is very simple and natural. Then, in order to improve the generalization capability of the rule base, we present a clustering-based method to coarsen the rule base.
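A hypothetical, much-simplified version of the first step (rule induction) for k = 1 might look like the sketch below: each label is a point prototype, and a rule's consequent is the mean output of the training points nearest that label. This is only meant to make the setup concrete; the authors' actual induction method and clustering-based coarsening are more sophisticated.

```python
def induce_rules(data, label_centers):
    """Toy rule induction (not the authors' exact algorithm): each
    input label L_i is a point prototype; every training pair (x, y)
    is assigned to its nearest label, and the rule consequent is the
    mean output of the points assigned to that label.
    Returns rules as {label_center: consequent}."""
    buckets = {c: [] for c in label_centers}
    for x, y in data:
        nearest = min(label_centers, key=lambda c: abs(x - c))
        buckets[nearest].append(y)
    return {c: sum(ys) / len(ys) for c, ys in buckets.items() if ys}

def predict(rules, x):
    """Fire the single nearest rule (winner-takes-all)."""
    nearest = min(rules, key=lambda c: abs(x - c))
    return rules[nearest]
```

Coarsening the rule base would then amount to merging rules whose label prototypes and consequents cluster together.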

  • A Prototype Theory Interpretation of Label Semantics
    Uncertainty Modeling for Data Mining, 2014
    Co-Authors: Zengchang Qin, Yongchuan Tang
    Abstract:

    Using words rather than numbers to convey vague information as part of uncertain reasoning is a sophisticated human activity. The Theory of fuzzy sets is now a popular tool for computing with words [12] which attempts to formally capture this human reasoning process [3–4]. Furthermore, linguistic modeling based on fuzzy IF-THEN rules [6–8] has achieved promising results in many application areas. However, the currently proposed interpretations of the membership function in fuzzy set Theory are not consistent with the truth-functional calculus of fuzzy logic [9]. Alternatively, from the philosophical viewpoint of the epistemic stance, Lawry proposed a functional (but non-truth-functional) calculus, label semantics, for computing with words [10,11]. In this framework, the meaning of linguistic labels is encoded by mass functions which represent the subjective probabilities that a given set of labels is appropriate to describe a given instance. Label semantics is a powerful new tool for modelling vague concepts, the possible applications of which include knowledge fusion [12], decision tree learning [13], linguistic rule induction [14], and collective decision making [15,16].

  • A bipolar model of vague concepts based on random set and Prototype Theory
    International Journal of Approximate Reasoning, 2012
    Co-Authors: Yongchuan Tang, Jonathan Lawry
    Abstract:

    We argue that vagueness is a multi-faceted phenomenon requiring a framework for concept representation incorporating aspects of typicality, semantic uncertainty and indeterminism. In this paper we propose a bipolar model for vague concepts within the framework of Prototype Theory, where concepts are represented by prototypical regions of an underlying conceptual space, and in which the appropriateness of a concept label to describe a given instance is determined on the basis of both a lower and an upper threshold on the distance from the defining Prototype. Essentially, the label is absolutely appropriate as a description provided that the distance to the Prototype is less than the lower threshold, and not absolutely inappropriate if it is less than the upper threshold. Hence, in effect a concept is defined by lower and upper neighbourhoods of the Prototype within the conceptual space, and the borderline region between the neighbourhoods identifies those elements of the space for which the concept label is neither absolutely appropriate nor absolutely inappropriate as a description. Semantic uncertainty is then represented by a joint probability density function on the lower and upper thresholds, so that the lower and upper neighbourhoods correspond to nested random sets. This naturally results in lower and upper appropriateness measures quantifying the belief that a concept label is absolutely appropriate, and not absolutely inappropriate, to describe a given element of the space. These measures can then be related to the random set interpretation of fuzzy sets and in particular to lower and upper membership functions in interval fuzzy set Theory.

  • Uncertainty modelling for vague concepts: A Prototype Theory approach
    Artificial Intelligence, 2009
    Co-Authors: Jonathan Lawry, Yongchuan Tang
    Abstract:

    An epistemic model of the uncertainty associated with vague concepts is introduced. Label semantics Theory is proposed as a framework for quantifying an agent's uncertainty concerning what labels are appropriate to describe a given example. An interpretation of label semantics is then proposed which incorporates Prototype Theory by introducing uncertain thresholds on the distance between elements and Prototypes for description labels. This interpretation naturally generates a functional calculus for appropriateness measures. A more general model with distinct threshold variables for different labels is discussed and we show how different kinds of semantic dependence can be captured in this model.

  • Relating Prototype Theory and Label Semantics
    Soft Methods in Probability and Statistics, 2008
    Co-Authors: Jonathan Lawry, Yongchuan Tang
    Abstract:

    An interpretation of the label semantics framework is introduced based on Prototype Theory. Within this interpretation it is shown that the appropriateness of an expression is characterised by an interval constraint on a parameter e. Here e is an uncertain distance threshold according to which an element x is sufficiently close to the Prototype p_i of a label L_i for L_i to be deemed appropriate to describe x, if the distance between x and p_i is less than or equal to e. Appropriateness measures and mass functions are then defined in terms of an underlying probability density function δ on e.

Kirk St Amant - One of the best experts on this subject based on the ideXlab platform.

  • Mapping the Variables of Care in Health and Medical Communication Contexts: A Script Theory-Prototype Theory Approach to Patient-Centered Design
    International Conference on Design of Communication, 2017
    Co-Authors: Kirk St Amant
    Abstract:

    Script Theory views communication contexts as sequences of standard processes - or scripts - humans use to move through different contexts in their daily lives. These scripts contain variables that influence how individuals expect to access and use materials in different settings. Prototype Theory, in turn, addresses user expectations of what items/variables should look like in a given context, thus creating a guide for how one should design/depict items when developing materials for that context. When combined, these two theoretical approaches create a mechanism UXD professionals can use to both research communication and design expectations/variables in different health and medical communication contexts and design materials that meet user expectations associated with such settings.

  • SIGDOC - Mapping the variables of care in health and medical communication contexts: a script Theory-Prototype Theory approach to patient-centered design
    Proceedings of the 35th ACM International Conference on the Design of Communication, 2017
    Co-Authors: Kirk St Amant
    Abstract:

    Script Theory views communication contexts as sequences of standard processes - or scripts - humans use to move through different contexts in their daily lives. These scripts contain variables that influence how individuals expect to access and use materials in different settings. Prototype Theory, in turn, addresses user expectations of what items/variables should look like in a given context, thus creating a guide for how one should design/depict items when developing materials for that context. When combined, these two theoretical approaches create a mechanism UXD professionals can use to both research communication and design expectations/variables in different health and medical communication contexts and design materials that meet user expectations associated with such settings.

  • A Prototype Theory Approach to Internationalizing Information Design in Health and Medical Communication
    International Conference on Design of Communication, 2015
    Co-Authors: Kirk St Amant
    Abstract:

    As health and medical communication become increasingly international in scope, information designers need to find ways to create visuals that best address the expectations of different cultural audiences. Doing so can be challenging, particularly if the related content is on health or medical topics. Prototype Theory, however, can provide a foundation for a framework information designers can use to better understand such issues. This entry provides an overview of how Prototype Theory can be used to address such factors.

  • SIGDOC - A Prototype Theory approach to internationalizing information design in health and medical communication
    Proceedings of the 33rd Annual International Conference on the Design of Communication, 2015
    Co-Authors: Kirk St Amant
    Abstract:

    As health and medical communication become increasingly international in scope, information designers need to find ways to create visuals that best address the expectations of different cultural audiences. Doing so can be challenging, particularly if the related content is on health or medical topics. Prototype Theory, however, can provide a foundation for a framework information designers can use to better understand such issues. This entry provides an overview of how Prototype Theory can be used to address such factors.

  • A Prototype Theory Approach to International Website Analysis and Design
    Technical Communication Quarterly, 2005
    Co-Authors: Kirk St Amant
    Abstract:

    As global online access grows, Web site designers find themselves creating materials for an increasingly international audience. Cultural groups, however, can have different expectations of what constitutes acceptable Web site design. This article examines how Prototype Theory can serve as a methodology for analyzing Web sites designed for users from different cultures. Such analyses, in turn, can help individuals create more effective online materials for international audiences.

Diederik Aerts - One of the best experts on this subject based on the ideXlab platform.

  • Generalizing Prototype Theory: A formal quantum framework
    Frontiers in psychology, 2016
    Co-Authors: Diederik Aerts, Jean Broekaert, Liane Gabora, Sandro Sozzo
    Abstract:

    Theories of natural language and concepts have been unable to model the flexibility, creativity, context-dependence, and emergence exhibited by words, concepts and their combinations. The mathematical formalism of quantum Theory has instead been successful in capturing phenomena such as graded membership, situational meaning, composition of categories, and also more complex decision-making situations, which cannot be modeled in traditional probabilistic approaches. We show how a formal quantum approach to concepts and their combinations can provide a powerful extension of Prototype Theory. We explain how Prototypes can interfere in conceptual combinations as a consequence of their contextual interactions, and provide an illustration of this using an intuitive wave-like diagram. This quantum-conceptual approach gives new life to the original Prototype Theory, without however making it a privileged concept Theory, as we explain at the end of our paper.
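The interference of prototypes in a disjunction "A or B" is often written in the quantum-cognition literature as the classical average of the two membership weights plus an interference term. The sketch below uses that common form; the specific formula and the free phase angle θ are assumptions for illustration, not quoted from this paper.

```python
import math

def combined_membership(mu_a, mu_b, theta):
    """Membership weight of 'A or B' with a quantum interference term,
    in a form commonly used in the quantum-cognition literature:
    mu = (mu_a + mu_b) / 2 + sqrt(mu_a * mu_b) * cos(theta).
    theta = pi/2 kills the interference and recovers the classical
    average; other angles over- or under-extend the combination."""
    return 0.5 * (mu_a + mu_b) + math.sqrt(mu_a * mu_b) * math.cos(theta)
```

The interference term is what lets a combined concept's membership deviate from any classical (probabilistic) mixture of its components.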

  • Quantum and Concept Combination, Entangled Measurements and Prototype Theory
    Topics in cognitive science, 2014
    Co-Authors: Diederik Aerts
    Abstract:

    We analyze the meaning of the violation of the marginal probability law for situations of correlation measurements where entanglement is identified. We show that for quantum Theory applied to the cognitive realm such a violation does not lead to the type of problems commonly believed to occur in situations of quantum Theory applied to the physical realm. We briefly situate our quantum approach for modeling concepts and their combinations with respect to the notions of 'extension' and 'intension' in theories of meaning, and in existing concept theories.

Dylan Glynn - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying polysemy: Corpus methodology for Prototype Theory
    Folia Linguistica, 2016
    Co-Authors: Dylan Glynn
    Abstract:

    This study addresses the methodological problem of result falsification in Cognitive Semantics, specifically in the descriptive analysis of semasiological variation, or “polysemy”. It argues that manually analysed corpus data can be used to describe models of semantic structure. The method proposed is quantified, permitting repeat analysis. The operationalisation of a semasiological structure employed in the study takes the principle of semantic features and applies them to a contextual analysis of usage-events, associated with the lexeme under scrutiny. The feature analysis, repeated on a large collection of occurrences, results in a set of metadata that constitutes the usage-profile of the lexeme. Multivariate statistics are then employed to identify patterns in those metadata. The case study examines 500 occurrences of the English lexeme annoy. Three basic senses are identified, as well as a more complex array of semantic variations linked to the morpho-syntactic context of usage.

  • Quantifying Polysemy: Corpus Methodology for Prototype Theory
    Folia Linguistica, 2016
    Co-Authors: Dylan Glynn
    Abstract:

    This study addresses the methodological problem of result falsification in Cognitive Semantics, specifically in the descriptive analysis of semasiological variation, or “polysemy.” It argues that manually analysed corpus data can be used to describe models of semantic structure. The method proposed is quantified, permitting repeat analysis. The operationalisation of a semasiological structure employed in the study takes the principle of semantic features and applies them to a contextual analysis of usage-events, associated with the lexeme under scrutiny. The feature analysis, repeated on a large collection of occurrences, results in a set of metadata that constitutes the usage-profile of the lexeme. Multivariate statistics are then employed to identify patterns in those metadata. The case study examines 500 occurrences of the English lexeme annoy. Three basic senses are identified, as well as a more complex array of semantic variations linked to the morpho-syntactic context of usage.
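A small sketch of the usage-profile idea described above: manually annotated occurrences (each a set of semantic features for one usage-event) are aggregated into relative-frequency profiles, which can then be compared. In the study itself, multivariate statistics would take the place of the simple cosine comparison used here, and the feature names below are invented for illustration.

```python
import math

def usage_profile(occurrences):
    """Aggregate annotated usage-events into a usage profile: the
    relative frequency of each semantic feature across occurrences."""
    counts = {}
    for feats in occurrences:
        for f in feats:
            counts[f] = counts.get(f, 0) + 1
    n = len(occurrences)
    return {f: c / n for f, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two usage profiles (0 = no shared
    features, 1 = identical relative-frequency pattern)."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)
```

Clustering the occurrence-level feature vectors, rather than comparing aggregate profiles, is closer to how the senses themselves would be identified.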