Rational Analysis

The Experts below are selected from a list of 324 Experts worldwide ranked by ideXlab platform

Thomas L. Griffiths - One of the best experts on this subject based on the ideXlab platform.

  • Compositionality in Rational Analysis: grammar-based induction for concept learning
    2020
    Co-Authors: Noah D. Goodman, Thomas L. Griffiths, Joshua B. Tenenbaum, Jacob Feldman
    Abstract:

    Rational Analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to Rational Analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “Rational Analysis” of cognition that emphasized very different ideas. One of the central ideas behind logical and computational approaches, which previously dominated notions of Rationality, is that meaning can be captured in the structure of representations, but that compositional semantics are needed for these representations to provide a coherent account of thought. In this chapter we attempt to reconcile the modern approach to Rational Analysis with some aspects of this older, logico-computational approach. We do this via a model—offered as an extended example—of human concept learning. In the current chapter we are primarily concerned with formal aspects of this approach; in other work (Goodman, Tenenbaum, Feldman, & Griffiths, in press) we more carefully study a variant of this model as a psychological model of human concept learning. Explaining human cognition was one of the original motivations for the development of formal logic. George Boole, the father of digital logic, developed his symbolic language in order to explicate the Rational laws underlying thought: his principal work, An Investigation of the Laws of Thought (Boole, 1854), was written to “investigate the fundamental laws of those operations of the mind by which reasoning is performed,” and arrived at “some probable intimations concerning the nature and constitution of the human mind” (p. 1). Much of mathematical logic since Boole can be regarded as an attempt to capture the coherence of thought in a formal system. This is particularly apparent in the work by Frege (1892), Tarski (1956), and others on model-theoretic semantics for logic, which aimed to create formal systems both flexible and systematic enough to capture the complexities of mathematical thought. A central component in this program is compositionality. Consider Frege’s Principle: each syntactic operation of a formal language should have a corresponding semantic operation. This principle requires syntactic compositionality, that meaningful terms in a formal system are built up by combination operations, as well as compatibility between the syntax and semantics of the system. When Turing, Church, and others suggested that formal systems could be manipulated by mechanical computers, it was natural (at least in hindsight) to suggest that cognition operates in a similar way: meaning is manipulated in the mind by computation. Viewing the mind as a formal computational system in this way suggests that compositionality should also be found in the mind; that is, that mental representations may be combined into new representations, and the meaning of mental representations may be decomposed in terms of the meaning of their components.
Two important virtues for a theory of thought result (Fodor, 1975): productivity—the number of representations is unbounded because they may be boundlessly combined— and systematicity—the combination of two representations is meaningful to one who can understand each separately. Despite its importance to the computational theory of mind, compositionality has seldom been captured by modern Rational analyses. Yet there are a number of reasons to desire a compositional Rational Analysis. For instance, productivity of mental representations would provide an explanation of the otherwise puzzling ability of human thought to adapt to novel situations populated by new concepts—even those far beyond the ecological pressures of our evolutionary milieu (such as radiator repairs and the use of fiberglass bottom powerboats). We will show in this chapter that Bayesian statistical methods can be fruitfully combined with compositional representational systems by developing such a model in the well-studied setting of concept learning. This addresses a long running tension in the literature on human concepts: similarity-based statistical learning models have provided a good understanding of how simple concepts can be learned (Medin & Schaffer, 1978; Anderson, 1991; Kruschke, 1992;
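    The grammar-based induction sketched above can be illustrated with a small, self-contained example. The following Python snippet is an illustrative sketch written for this summary, not the authors' code: the grammar (conjunctions of Boolean literals), the number of features, and the production probabilities are assumptions chosen for readability. It shows the property the chapter relies on, namely that a probabilistic grammar assigns every formula in the concept language a prior that shrinks as its derivation grows.

    import random

    # Toy "concept language": a concept is a conjunction of one or more literals
    # over N_FEATURES Boolean features. All numbers below are illustrative.
    P_STOP = 0.5          # probability of ending the conjunction after each literal
    N_FEATURES = 3

    def sample_concept(rng):
        """Sample (formula, prior probability) from the toy probabilistic grammar."""
        literals, prior = [], 1.0
        while True:
            feature = rng.randrange(N_FEATURES)      # choose a feature uniformly
            positive = rng.random() < 0.5            # choose its polarity
            literals.append((feature, positive))
            prior *= (1.0 / N_FEATURES) * 0.5        # probability of this literal
            if rng.random() < P_STOP:                # grammar's stop production
                return literals, prior * P_STOP
            prior *= (1.0 - P_STOP)                  # grammar's continue production

    def satisfies(obj, literals):
        """True iff the object (a tuple of 0/1 feature values) satisfies the rule."""
        return all(bool(obj[f]) == positive for f, positive in literals)

    rng = random.Random(0)
    rule, prior = sample_concept(rng)
    print("sampled rule:", rule, "prior:", prior)     # longer rules get smaller priors
    print("matches object (1, 0, 1)?", satisfies((1, 0, 1), rule))

    A Bayesian learner would combine such a prior with the likelihood of the observed labeled examples; a sketch of that posterior computation accompanies the "A Rational Analysis of Rule-based Concept Learning" entry below.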

  • Reconciling novelty and complexity through a Rational Analysis of curiosity
    Psychological Review, 2020
    Co-Authors: Rachit Dubey, Thomas L. Griffiths
    Abstract:

    Curiosity is considered to be the essence of science and an integral component of cognition. What prompts curiosity in a learner? Previous theoretical accounts of curiosity remain divided: novelty-based theories propose that new and highly uncertain stimuli pique curiosity, whereas complexity-based theories propose that stimuli with an intermediate degree of uncertainty stimulate curiosity. In this article, we present a Rational Analysis of curiosity by considering the computational problem underlying curiosity, which allows us to model these distinct accounts of curiosity in a common framework. Our approach posits that a Rational agent should explore stimuli that maximally increase the usefulness of its knowledge and that curiosity is the mechanism by which humans approximate this Rational behavior. Critically, our Analysis shows that the causal structure of the environment can determine whether curiosity is driven by either highly uncertain or moderately uncertain stimuli. This suggests that previous theories need not be in contention but are special cases of a more general account of curiosity. Experimental results confirm our predictions and demonstrate that our theory explains a wide range of findings about human curiosity, including its subjectivity and malleability.
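    As a rough illustration of the claim that curiosity tracks the expected usefulness of knowledge, the Python sketch below scores each stimulus by how likely it is to be needed again times how much remains to be learned about it. The functional form and the numbers are assumptions made for this summary, not the article's model; they are only meant to show how the structure of the environment (here, the need probabilities) can shift curiosity between highly uncertain and moderately uncertain stimuli.

    def curiosity_score(p_need, confidence):
        """Expected gain in useful knowledge: chance the stimulus matters again
        times how much is still unknown about it (both in [0, 1])."""
        return p_need * (1.0 - confidence)

    # Two hypothetical environments: one where novel stimuli tend to recur,
    # one where they are rarely encountered again.
    environments = {
        "novel stimuli useful":   {"novel": (0.9, 0.0), "half-known": (0.9, 0.5), "familiar": (0.9, 0.95)},
        "novel stimuli unuseful": {"novel": (0.1, 0.0), "half-known": (0.9, 0.5), "familiar": (0.9, 0.95)},
    }
    for env, stimuli in environments.items():
        best = max(stimuli, key=lambda s: curiosity_score(*stimuli[s]))
        print(f"{env}: most curiosity-provoking stimulus = {best}")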

  • Advancing Rational Analysis to the algorithmic level
    Behavioral and Brain Sciences, 2020
    Co-Authors: Falk Lieder, Thomas L. Griffiths
    Abstract:

    The commentaries raised questions about normativity, human Rationality, cognitive architectures, cognitive constraints, and the scope of resource-Rational Analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of Rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.

  • Resource-Rational Analysis: understanding human cognition as the optimal use of limited computational resources
    Behavioral and Brain Sciences, 2020
    Co-Authors: Falk Lieder, Thomas L. Griffiths
    Abstract:

    Modeling human cognition is challenging because there are infinitely many mechanisms that can generate any given observation. Some researchers address this by constraining the hypothesis space through assumptions about what the human mind can and cannot do, while others constrain it through principles of Rationality and adaptation. Recent work in economics, psychology, neuroscience, and linguistics has begun to integrate both approaches by augmenting Rational models with cognitive constraints, incorporating Rational principles into cognitive architectures, and applying optimality principles to understanding neural representations. We identify the Rational use of limited resources as a unifying principle underlying these diverse approaches, expressing it in a new cognitive modeling paradigm called resource-Rational Analysis. The integration of Rational principles with realistic cognitive constraints makes resource-Rational Analysis a promising framework for reverse-engineering cognitive mechanisms and representations. It has already shed new light on the debate about human Rationality and can be leveraged to revisit classic questions of cognitive psychology within a principled computational framework. We demonstrate that resource-Rational models can reconcile the mind's most impressive cognitive skills with people's ostensive irRationality. Resource-Rational Analysis also provides a new way to connect psychological theory more deeply with artificial intelligence, economics, neuroscience, and linguistics.
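    The core idea of resource-Rational Analysis can be conveyed with a toy decision about how much to compute. In the sketch below (an illustration for this summary; the estimation task, payoff function, and cost per sample are invented assumptions, not the authors' model), an agent chooses how many mental samples to draw: accuracy improves with diminishing returns while computational cost grows linearly, so a finite, moderate amount of computation is optimal.

    import math

    COST_PER_SAMPLE = 0.02   # assumed cost of one internal simulation
    P_TRUE = 0.7             # assumed probability the agent is trying to estimate

    def expected_net_utility(n_samples):
        """Task payoff (higher when the estimate is more accurate) minus the
        cost of the computation used to produce the estimate."""
        std_error = math.sqrt(P_TRUE * (1.0 - P_TRUE) / n_samples)
        return (1.0 - std_error) - COST_PER_SAMPLE * n_samples

    best_n = max(range(1, 101), key=expected_net_utility)
    print("resource-rational number of samples:", best_n)
    print("net utility at that point:", round(expected_net_utility(best_n), 3))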

  • A Rational Analysis of curiosity
    arXiv: Artificial Intelligence, 2017
    Co-Authors: Rachit Dubey, Thomas L. Griffiths
    Abstract:

    We present a Rational Analysis of curiosity, proposing that people's curiosity is driven by seeking stimuli that maximize their ability to make appropriate responses in the future. This perspective offers a way to unify previous theories of curiosity into a single framework. Experimental results confirm our model's predictions, showing how the relationship between curiosity and confidence can change significantly depending on the nature of the environment.

N. Chater - One of the best experts on this subject based on the ideXlab platform.

  • Rational Analysis and Language Processing
    Encyclopedia of Language & Linguistics, 2020
    Co-Authors: N. Chater
    Abstract:

    Explanation in terms of Rationality is widespread in biology and the social sciences. How far is this possible for language processing? We explore this question using the framework of Rational Analysis, initially articulated in cognitive psychology. Explanations in terms of Rational Analysis specify (1) a goal; (2) an environment; and (3) computational constraints. The hope is that the phenomena to be explained can be viewed as optimizing the goal, given environmental and cognitive constraints. Illustrations of the approach are chosen from the lexicon, parsing, and learnability. We consider the future prospects for Rational Analysis.

  • Rationality, Rational Analysis and human reasoning
    2020
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Book synopsis: This collection brings together a set of specially commissioned chapters from leading international researchers in the psychology of reasoning. Its purpose is to explore the historical, philosophical and theoretical implications of the development of this field. Taking the unusual approach of engaging not only with empirical data but also with the ideas and concepts underpinning the psychology of reasoning, this volume has important implications both for psychologists and other students of cognition, including philosophers. Sub-fields covered include mental logic, mental models, Rational Analysis, social judgement theory, game theory and evolutionary theory. There are also specific chapters dedicated to the history of syllogistic reasoning, the psychology of reasoning as it operates in scientific theory and practice, Brunswickian approaches to reasoning and task environments, and the implications of Popper's philosophy for models of behaviour testing. This cross-disciplinary dialogue and the range of material covered makes this an invaluable reference for students and researchers into the psychology and philosophy of reasoning.

  • A Cognitively Bounded Rational Analysis Model of Dual-Task Performance Trade-Offs
    2010
    Co-Authors: Christian P. Janssen, Duncan P. Brumby, John Dowell, N. Chater
    Abstract:

    The process of interleaving two tasks can be described as making trade-offs between performance on each of the tasks. This can be captured in performance operating characteristic curves. However, these curves do not describe what, given the specific task circumstances, the optimal strategy is. In this paper we describe the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to describe how participants should trade-off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task, and resulted in differences in overall task performance. To explain the observed behavior, a cognitively bounded Rational Analysis model was developed to understand participants' strategy selection. This Analysis evaluated a variety of dual-task interleaving strategies against the same payoff function that participants were exposed to. The model demonstrated that in three out of four conditions human performance was optimal; that is, participants adopted dual-task strategies that maximized the payoff that was achieved.
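    To make the modeling step concrete, the sketch below evaluates a family of hypothetical interleaving strategies against a single payoff function, in the spirit of the cognitively bounded Rational Analysis described above. The strategy space, drift dynamics, and payoff weights are invented for illustration and are not the study's actual task parameters; the point is only the method of scoring every candidate strategy with the same payoff function and asking which one a bounded-optimal participant should adopt.

    def payoff(chars_per_visit, total_chars=20, drift_per_char=1.0,
               reward_per_char=1.0, drift_penalty=0.1, switch_cost=1.0):
        """Strategy: type `chars_per_visit` characters, then return to the
        tracking task and correct the cursor. Tracking error grows while
        typing and is penalized; each switch back to tracking has a cost."""
        visits = -(-total_chars // chars_per_visit)              # ceiling division
        # Within one visit the error grows as 1 + 2 + ... + chars_per_visit.
        drift = visits * drift_per_char * chars_per_visit * (chars_per_visit + 1) / 2
        return reward_per_char * total_chars - drift_penalty * drift - switch_cost * visits

    strategies = range(1, 11)
    for s in strategies:
        print(f"type {s:2d} chars per visit -> payoff {payoff(s):6.1f}")
    print("payoff-maximizing strategy:", max(strategies, key=payoff), "chars per visit")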

  • The Rational Analysis of human cognition
    2002
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Book synopsis: Reason and Nature investigates the normative dimension of reason and Rationality and how it can be situated within the natural world. Nine philosophers and two psychologists address three main themes. The first concerns the status of norms of Rationality and, in particular, how it is possible to show that norms we take to be objectively authoritative are so in fact. The second has to do with the precise form taken by the norms of Rationality. The third concerns the role of norms of Rationality in the psychological explanation of belief and action. It is widely assumed that we use the normative principles of Rationality as regulative principles governing psychological explanation. This seems to demand that there is a certain harmony between the norms of Rationality and the psychology of reasoning. What, then, should we make of the well-documented evidence suggesting that people consistently fail to reason well? And how can we extend the model to non-language-using creatures? As this collection testifies, current work in the theory of Rationality is subject to very diverse influences ranging from experimental and theoretical psychology, through philosophy of logic and language, to metaethics and the theory of practical reasoning. This work is pursued in various philosophical styles and with various orientations. Straight-down-the-line analytical, and largely a priori, enquiry contrasts with empirically constrained theorizing. A focus on human Rationality contrasts with a focus on Rationality in the wider natural world. As things stand, work in one style often proceeds in isolation from work in others. If progress is to be made on Rationality, theorists will need to range widely. Reason and Nature will provide a stimulus to that endeavour.

  • The Rational Analysis of mind and behavior
    Synthese, 2000
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Rational Analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that Rational Analysis has two important implications for philosophical debate concerning Rationality. First, Rational Analysis provides a model for the relationship between formal principles of Rationality (such as probability or decision theory) and everyday Rationality, in the sense of successful thought and action in daily life. Second, applying the program of Rational Analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irRationality.

Noah D. Goodman - One of the best experts on this subject based on the ideXlab platform.

  • Compositionality in Rational Analysis: grammar-based induction for concept learning
    2020
    Co-Authors: Noah D. Goodman, Thomas L. Griffiths, Joshua B. Tenenbaum, Jacob Feldman
    Abstract:

    Rational Analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to Rational Analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “Rational Analysis” of cognition that emphasized very different ideas. One of the central ideas behind logical and computational approaches, which previously dominated notions of Rationality, is that meaning can be captured in the structure of representations, but that compositional semantics are needed for these representations to provide a coherent account of thought. In this chapter we attempt to reconcile the modern approach to Rational Analysis with some aspects of this older, logico-computational approach. We do this via a model—offered as an extended example—of human concept learning. In the current chapter we are primarily concerned with formal aspects of this approach; in other work (Goodman, Tenenbaum, Feldman, & Griffiths, in press) we more carefully study a variant of this model as a psychological model of human concept learning. Explaining human cognition was one of the original motivations for the development of formal logic. George Boole, the father of digital logic, developed his symbolic language in order to explicate the Rational laws underlying thought: his principal work, An Investigation of the Laws of Thought (Boole, 1854), was written to “investigate the fundamental laws of those operations of the mind by which reasoning is performed,” and arrived at “some probable intimations concerning the nature and constitution of the human mind” (p. 1). Much of mathematical logic since Boole can be regarded as an attempt to capture the coherence of thought in a formal system. This is particularly apparent in the work by Frege (1892), Tarski (1956), and others on model-theoretic semantics for logic, which aimed to create formal systems both flexible and systematic enough to capture the complexities of mathematical thought. A central component in this program is compositionality. Consider Frege’s Principle: each syntactic operation of a formal language should have a corresponding semantic operation. This principle requires syntactic compositionality, that meaningful terms in a formal system are built up by combination operations, as well as compatibility between the syntax and semantics of the system. When Turing, Church, and others suggested that formal systems could be manipulated by mechanical computers, it was natural (at least in hindsight) to suggest that cognition operates in a similar way: meaning is manipulated in the mind by computation. Viewing the mind as a formal computational system in this way suggests that compositionality should also be found in the mind; that is, that mental representations may be combined into new representations, and the meaning of mental representations may be decomposed in terms of the meaning of their components.
Two important virtues for a theory of thought result (Fodor, 1975): productivity—the number of representations is unbounded because they may be boundlessly combined— and systematicity—the combination of two representations is meaningful to one who can understand each separately. Despite its importance to the computational theory of mind, compositionality has seldom been captured by modern Rational analyses. Yet there are a number of reasons to desire a compositional Rational Analysis. For instance, productivity of mental representations would provide an explanation of the otherwise puzzling ability of human thought to adapt to novel situations populated by new concepts—even those far beyond the ecological pressures of our evolutionary milieu (such as radiator repairs and the use of fiberglass bottom powerboats). We will show in this chapter that Bayesian statistical methods can be fruitfully combined with compositional representational systems by developing such a model in the well-studied setting of concept learning. This addresses a long running tension in the literature on human concepts: similarity-based statistical learning models have provided a good understanding of how simple concepts can be learned (Medin & Schaffer, 1978; Anderson, 1991; Kruschke, 1992;

  • A Rational Analysis of Rule-based Concept Learning
    2017
    Co-Authors: Noah D. Goodman, Thomas L. Griffiths, Jacob Feldman, Joshua B. Tenenbaum
    Abstract:

    A Rational Analysis of Rule-based Concept Learning. Noah D. Goodman (ndg@mit.edu), Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Thomas Griffiths (tom_griffiths@berkeley.edu), Department of Psychology, University of California, Berkeley; Jacob Feldman (jacob@ruccs.rutgers.edu), Department of Psychology, Center for Cognitive Science, Rutgers University; Joshua B. Tenenbaum (jbt@mit.edu), Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.

    Abstract: We propose a new model of human concept learning that provides a Rational Analysis for learning of feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a “concept language” of logical rules. We compare the model predictions to human generalization judgments in two well-known category learning experiments, and find good agreement for both average and individual participants’ generalizations. Keywords: concept learning; categorization; Bayesian induction; probabilistic grammar; rules.

    Introduction. Concepts are a topic of perennial interest to psychology, particularly concepts which identify kinds of things. Such concepts are mental representations which enable one to discriminate between objects that satisfy the concept and those which do not. Given their discriminative use, a natural hypothesis is that concepts are simply rules for classifying objects based on features. Indeed, the “classical” theory of concepts (see Smith and Medin, 1981) takes this viewpoint, suggesting that a concept can be expressed as a simple feature-based rule: a conjunction of features that are necessary and jointly sufficient for membership. Early models based on this approach failed to account for many aspects of human categorization behavior, especially the graded use of concepts (Mervis and Rosch, 1981). Attention consequently turned to models with a more statistical nature: similarity to prototypes or to exemplars (Medin and Schaffer, 1978; Kruschke, 1992; Love et al., 2004). The statistical nature of many of these models has made them amenable to a Rational Analysis (Anderson, 1990), which attempts to explain why people do what they do, complementing (often apparently ad-hoc) process-level accounts. Despite the success of similarity-based models, recently renewed interest has led to more sophisticated rule-based models. Among the reasons for this reconsideration are the inability of similarity-based models to provide a method for concept combination, common reports by participants that they “feel as if” they are using a rule, and the unrealistic memory demands of most similarity-based models. The RULEX model (Nosofsky et al., 1994), for instance, treats concepts as conjunctive rules plus exceptions, learned by a heuristic search process, and has some of the best fits to human experimental data—particularly for the judgments of individual participants. Parallel motivation for reexamining the role of logical structures in human concept representation comes from evidence that the difficulty of learning a new concept is well predicted by its logical complexity (Feldman, 2000). However, existing rule-based models are primarily heuristic—no Rational Analysis has been provided, and they have not been tied to statistical approaches to induction. A Rational Analysis for rule-based models might assume that concepts are (represented as) rules, and ask what degree of belief a Rational agent should accord to each rule, given some observed examples. We answer this question by formulating the hypothesis space of rules as words in a “concept language” generated by a context-free grammar. Considering the probability of productions in this grammar leads to a prior probability for words in the language, and the logical form of these words motivates an expression for the probability of observed examples given a rule. The methods of Bayesian Analysis then lead to the Rational Rules model of concept learning. This grammatical approach to induction has benefits for Bayesian Rational Analysis: it compactly specifies an infinite, and flexible, hypothesis space of structured rules and a prior that decreases with complexity. The Rational Rules model thus makes contributions to both rule-based concept modeling and Rational statistical learning models: to the former it provides a Rational Analysis, and to the latter it provides the grammar-based approach. Across a range of experimental tasks, this new model achieves comparable fits to the best rule-based models in the literature, but with fewer free parameters and arbitrary processing assumptions.

    An Analysis of Concepts. A general approach to the Rational Analysis of inductive learning problems has emerged in recent years (Anderson, 1990; Tenenbaum, 1999; Chater and Oaksford, 1999). Under this approach a space of hypotheses is posited, and beliefs are assigned using Bayesian statistics—a coherent framework that combines data and a priori knowledge to give posterior degrees of belief. Uses of this approach, for instance in causal induction (Griffiths and Tenenbaum, 2005) and word learning (Xu and Tenenbaum, 2005), have successfully predicted human generalization behavior in a range of tasks. In our case, we wish to establish a hypothesis space of rules, and analyze the behavior of a Rational agent trying to learn those rules from labeled examples. Thus the learning problem is to determine P(F | E, ℓ(E)), where F ranges over rules, E is the set of observed example objects (possibly with repeats), and ℓ(E) are the observed labels. (Throughout this section we consider a single labeled concept, thus ℓ(x) ∈ {0, 1} indicates whether x is an example or a non-example of the concept.) This quantity may be expressed
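    The posterior P(F | E, ℓ(E)) described above can be sketched in a few lines of Python. The snippet below is an illustration written for this summary rather than the Rational Rules implementation: the hypothesis space is a small enumerated set of conjunctive rules rather than a full grammar, the complexity-penalizing prior uses an invented production probability, and a simple noisy-label likelihood with an assumed outlier rate stands in for the paper's likelihood. It shows the shape of the computation: prior from rule complexity, likelihood from labeled examples, posterior by normalization.

    from itertools import combinations, product

    N_FEATURES = 3
    PROD_PROB = 0.3        # assumed probability of adding one more literal (prior)
    OUTLIER_RATE = 0.1     # assumed probability that any single label is noisy

    def make_hypotheses():
        """All conjunctions of literals over the features, with a prior that
        decreases with the number of literals (the rule's complexity)."""
        hypotheses = []
        for k in range(1, N_FEATURES + 1):
            for feats in combinations(range(N_FEATURES), k):
                for signs in product([True, False], repeat=k):
                    hypotheses.append((tuple(zip(feats, signs)), PROD_PROB ** k))
        return hypotheses

    def rule_applies(rule, obj):
        return all(bool(obj[f]) == sign for f, sign in rule)

    def posterior(hypotheses, examples):
        """examples: list of (object, label) pairs with label in {0, 1}."""
        scores = []
        for rule, prior in hypotheses:
            likelihood = 1.0
            for obj, label in examples:
                predicted = int(rule_applies(rule, obj))
                likelihood *= (1 - OUTLIER_RATE) if predicted == label else OUTLIER_RATE
            scores.append(prior * likelihood)
        total = sum(scores)
        return [(rule, s / total) for (rule, _), s in zip(hypotheses, scores)]

    examples = [((1, 1, 0), 1), ((1, 0, 0), 1), ((0, 1, 1), 0), ((0, 0, 1), 0)]
    top = sorted(posterior(make_hypotheses(), examples), key=lambda rp: -rp[1])[:3]
    for rule, prob in top:
        print(rule, round(prob, 3))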

  • A Rational Analysis of rule-based concept learning
    Cognitive Science, 2008
    Co-Authors: Noah D. Goodman, Jacob Feldman, Joshua B. Tenenbaum, Thomas L. Griffiths
    Abstract:

    This article proposes a new model of human concept learning that provides a Rational Analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a concept language of logical rules. This article compares the model predictions to human generalization judgments in several well-known category learning experiments, and finds good agreement for both average and individual participant generalizations. This article further investigates judgments for a broad set of 7-feature concepts—a more natural setting in several ways—and again finds that the model explains human performance.

Joshua B. Tenenbaum - One of the best experts on this subject based on the ideXlab platform.

  • Compositionality in Rational Analysis: grammar-based induction for concept learning
    2020
    Co-Authors: Noah D. Goodman, Thomas L. Griffiths, Joshua B. Tenenbaum, Jacob Feldman
    Abstract:

    Rational Analysis attempts to explain aspects of human cognition as an adaptive response to the environment (Marr, 1982; Anderson, 1990; Chater, Tenenbaum, & Yuille, 2006). The dominant approach to Rational Analysis today takes an ecologically reasonable specification of a problem facing an organism, given in statistical terms, then seeks an optimal solution, usually using Bayesian methods. This approach has proven very successful in cognitive science; it has predicted perceptual phenomena (Geisler & Kersten, 2002; Feldman, 2001), illuminated puzzling effects in reasoning (Chater & Oaksford, 1999; Griffiths & Tenenbaum, 2006), and, especially, explained how human learning can succeed despite sparse input and endemic uncertainty (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001). However, there were earlier notions of the “Rational Analysis” of cognition that emphasized very different ideas. One of the central ideas behind logical and computational approaches, which previously dominated notions of Rationality, is that meaning can be captured in the structure of representations, but that compositional semantics are needed for these representations to provide a coherent account of thought. In this chapter we attempt to reconcile the modern approach to Rational Analysis with some aspects of this older, logico-computational approach. We do this via a model—offered as an extended example—of human concept learning. In the current chapter we are primarily concerned with formal aspects of this approach; in other work (Goodman, Tenenbaum, Feldman, & Griffiths, in press) we more carefully study a variant of this model as a psychological model of human concept learning. Explaining human cognition was one of the original motivations for the development of formal logic. George Boole, the father of digital logic, developed his symbolic language in order to explicate the Rational laws underlying thought: his principal work, An Investigation of the Laws of Thought (Boole, 1854), was written to “investigate the fundamental laws of those operations of the mind by which reasoning is performed,” and arrived at “some probable intimations concerning the nature and constitution of the human mind” (p. 1). Much of mathematical logic since Boole can be regarded as an attempt to capture the coherence of thought in a formal system. This is particularly apparent in the work by Frege (1892), Tarski (1956), and others on model-theoretic semantics for logic, which aimed to create formal systems both flexible and systematic enough to capture the complexities of mathematical thought. A central component in this program is compositionality. Consider Frege’s Principle: each syntactic operation of a formal language should have a corresponding semantic operation. This principle requires syntactic compositionality, that meaningful terms in a formal system are built up by combination operations, as well as compatibility between the syntax and semantics of the system. When Turing, Church, and others suggested that formal systems could be manipulated by mechanical computers, it was natural (at least in hindsight) to suggest that cognition operates in a similar way: meaning is manipulated in the mind by computation. Viewing the mind as a formal computational system in this way suggests that compositionality should also be found in the mind; that is, that mental representations may be combined into new representations, and the meaning of mental representations may be decomposed in terms of the meaning of their components.
Two important virtues for a theory of thought result (Fodor, 1975): productivity—the number of representations is unbounded because they may be boundlessly combined— and systematicity—the combination of two representations is meaningful to one who can understand each separately. Despite its importance to the computational theory of mind, compositionality has seldom been captured by modern Rational analyses. Yet there are a number of reasons to desire a compositional Rational Analysis. For instance, productivity of mental representations would provide an explanation of the otherwise puzzling ability of human thought to adapt to novel situations populated by new concepts—even those far beyond the ecological pressures of our evolutionary milieu (such as radiator repairs and the use of fiberglass bottom powerboats). We will show in this chapter that Bayesian statistical methods can be fruitfully combined with compositional representational systems by developing such a model in the well-studied setting of concept learning. This addresses a long running tension in the literature on human concepts: similarity-based statistical learning models have provided a good understanding of how simple concepts can be learned (Medin & Schaffer, 1978; Anderson, 1991; Kruschke, 1992;

  • A Rational Analysis of Rule-based Concept Learning
    2017
    Co-Authors: Noah D. Goodman, Thomas L. Griffiths, Jacob Feldman, Joshua B. Tenenbaum
    Abstract:

    A Rational Analysis of Rule-based Concept Learning. Noah D. Goodman (ndg@mit.edu), Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Thomas Griffiths (tom_griffiths@berkeley.edu), Department of Psychology, University of California, Berkeley; Jacob Feldman (jacob@ruccs.rutgers.edu), Department of Psychology, Center for Cognitive Science, Rutgers University; Joshua B. Tenenbaum (jbt@mit.edu), Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.

    Abstract: We propose a new model of human concept learning that provides a Rational Analysis for learning of feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a “concept language” of logical rules. We compare the model predictions to human generalization judgments in two well-known category learning experiments, and find good agreement for both average and individual participants’ generalizations. Keywords: concept learning; categorization; Bayesian induction; probabilistic grammar; rules.

    Introduction. Concepts are a topic of perennial interest to psychology, particularly concepts which identify kinds of things. Such concepts are mental representations which enable one to discriminate between objects that satisfy the concept and those which do not. Given their discriminative use, a natural hypothesis is that concepts are simply rules for classifying objects based on features. Indeed, the “classical” theory of concepts (see Smith and Medin, 1981) takes this viewpoint, suggesting that a concept can be expressed as a simple feature-based rule: a conjunction of features that are necessary and jointly sufficient for membership. Early models based on this approach failed to account for many aspects of human categorization behavior, especially the graded use of concepts (Mervis and Rosch, 1981). Attention consequently turned to models with a more statistical nature: similarity to prototypes or to exemplars (Medin and Schaffer, 1978; Kruschke, 1992; Love et al., 2004). The statistical nature of many of these models has made them amenable to a Rational Analysis (Anderson, 1990), which attempts to explain why people do what they do, complementing (often apparently ad-hoc) process-level accounts. Despite the success of similarity-based models, recently renewed interest has led to more sophisticated rule-based models. Among the reasons for this reconsideration are the inability of similarity-based models to provide a method for concept combination, common reports by participants that they “feel as if” they are using a rule, and the unrealistic memory demands of most similarity-based models. The RULEX model (Nosofsky et al., 1994), for instance, treats concepts as conjunctive rules plus exceptions, learned by a heuristic search process, and has some of the best fits to human experimental data—particularly for the judgments of individual participants. Parallel motivation for reexamining the role of logical structures in human concept representation comes from evidence that the difficulty of learning a new concept is well predicted by its logical complexity (Feldman, 2000). However, existing rule-based models are primarily heuristic—no Rational Analysis has been provided, and they have not been tied to statistical approaches to induction. A Rational Analysis for rule-based models might assume that concepts are (represented as) rules, and ask what degree of belief a Rational agent should accord to each rule, given some observed examples. We answer this question by formulating the hypothesis space of rules as words in a “concept language” generated by a context-free grammar. Considering the probability of productions in this grammar leads to a prior probability for words in the language, and the logical form of these words motivates an expression for the probability of observed examples given a rule. The methods of Bayesian Analysis then lead to the Rational Rules model of concept learning. This grammatical approach to induction has benefits for Bayesian Rational Analysis: it compactly specifies an infinite, and flexible, hypothesis space of structured rules and a prior that decreases with complexity. The Rational Rules model thus makes contributions to both rule-based concept modeling and Rational statistical learning models: to the former it provides a Rational Analysis, and to the latter it provides the grammar-based approach. Across a range of experimental tasks, this new model achieves comparable fits to the best rule-based models in the literature, but with fewer free parameters and arbitrary processing assumptions.

    An Analysis of Concepts. A general approach to the Rational Analysis of inductive learning problems has emerged in recent years (Anderson, 1990; Tenenbaum, 1999; Chater and Oaksford, 1999). Under this approach a space of hypotheses is posited, and beliefs are assigned using Bayesian statistics—a coherent framework that combines data and a priori knowledge to give posterior degrees of belief. Uses of this approach, for instance in causal induction (Griffiths and Tenenbaum, 2005) and word learning (Xu and Tenenbaum, 2005), have successfully predicted human generalization behavior in a range of tasks. In our case, we wish to establish a hypothesis space of rules, and analyze the behavior of a Rational agent trying to learn those rules from labeled examples. Thus the learning problem is to determine P(F | E, ℓ(E)), where F ranges over rules, E is the set of observed example objects (possibly with repeats), and ℓ(E) are the observed labels. (Throughout this section we consider a single labeled concept, thus ℓ(x) ∈ {0, 1} indicates whether x is an example or a non-example of the concept.) This quantity may be expressed

  • A Rational Analysis of rule-based concept learning
    Cognitive Science, 2008
    Co-Authors: Noah D. Goodman, Jacob Feldman, Joshua B. Tenenbaum, Thomas L. Griffiths
    Abstract:

    This article proposes a new model of human concept learning that provides a Rational Analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a concept language of logical rules. This article compares the model predictions to human generalization judgments in several well-known category learning experiments, and finds good agreement for both average and individual participant generalizations. This article further investigates judgments for a broad set of 7-feature concepts—a more natural setting in several ways—and again finds that the model explains human performance.

Mike Oaksford - One of the best experts on this subject based on the ideXlab platform.

  • Rationality, Rational Analysis and human reasoning
    2020
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Book synopsis: This collection brings together a set of specially commissioned chapters from leading international researchers in the psychology of reasoning. Its purpose is to explore the historical, philosophical and theoretical implications of the development of this field. Taking the unusual approach of engaging not only with empirical data but also with the ideas and concepts underpinning the psychology of reasoning, this volume has important implications both for psychologists and other students of cognition, including philosophers. Sub-fields covered include mental logic, mental models, Rational Analysis, social judgement theory, game theory and evolutionary theory. There are also specific chapters dedicated to the history of syllogistic reasoning, the psychology of reasoning as it operates in scientific theory and practice, Brunswickian approaches to reasoning and task environments, and the implications of Popper's philosophy for models of behaviour testing. This cross-disciplinary dialogue and the range of material covered makes this an invaluable reference for students and researchers into the psychology and philosophy of reasoning.

  • Adaptive non-interventional heuristics for covariation detection in causal induction: model comparison and Rational Analysis
    Cognitive Science, 2007
    Co-Authors: Masasi Hattori, Mike Oaksford
    Abstract:

    In this article, 41 models of covariation detection from 2 × 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in covariation detection (McKenzie & Mikkelsen, 2007) and data selection (Hattori, 2002; Oaksford & Chater, 1994, 2003). The results were supportive of the new model. To investigate its explanatory adequacy, a Rational Analysis using two computer simulations was conducted. These simulations revealed the environmental conditions and the memory restrictions under which the new model best approximates the normative model of covariation detection in these tasks. They thus demonstrated the adaptive Rationality of the new model.
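    A small worked example helps to see the limiting case mentioned in the abstract. The Python sketch below computes the normative phi coefficient for a 2 x 2 contingency table (cells a, b, c, d) and the form phi approaches when the cause-absent/effect-absent cell d becomes very large, which is the extreme-rarity regime the new model is built on. The example counts are invented for illustration; the limiting expression a / sqrt((a+b)(a+c)) follows directly from the definition of phi as d grows without bound.

    import math

    def phi(a, b, c, d):
        """Phi coefficient for a 2x2 table: a = cause & effect, b = cause only,
        c = effect only, d = neither."""
        return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

    def rarity_limit(a, b, c):
        """Limit of phi as d -> infinity (extreme rarity of cause and effect)."""
        return a / math.sqrt((a + b) * (a + c))

    a, b, c = 8, 2, 3                      # made-up contingency counts
    for d in (10, 100, 10_000):
        print(f"d = {d:6d}: phi = {phi(a, b, c, d):.3f}")
    print(f"rarity limit      : {rarity_limit(a, b, c):.3f}")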

  • The Rational Analysis of human cognition
    2002
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Book synopsis: Reason and Nature investigates the normative dimension of reason and Rationality and how it can be situated within the natural world. Nine philosophers and two psychologists address three main themes. The first concerns the status of norms of Rationality and, in particular, how it is possible to show that norms we take to be objectively authoritative are so in fact. The second has to do with the precise form taken by the norms of Rationality. The third concerns the role of norms of Rationality in the psychological explanation of belief and action. It is widely assumed that we use the normative principles of Rationality as regulative principles governing psychological explanation. This seems to demand that there is a certain harmony between the norms of Rationality and the psychology of reasoning. What, then, should we make of the well-documented evidence suggesting that people consistently fail to reason well? And how can we extend the model to non-language-using creatures? As this collection testifies, current work in the theory of Rationality is subject to very diverse influences ranging from experimental and theoretical psychology, through philosophy of logic and language, to metaethics and the theory of practical reasoning. This work is pursued in various philosophical styles and with various orientations. Straight-down-the-line analytical, and largely a priori, enquiry contrasts with empirically constrained theorizing. A focus on human Rationality contrasts with a focus on Rationality in the wider natural world. As things stand, work in one style often proceeds in isolation from work in others. If progress is to be made on Rationality, theorists will need to range widely. Reason and Nature will provide a stimulus to that endeavour.

  • The Rational Analysis of mind and behavior
    Synthese, 2000
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Rational Analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that Rational Analysis has two important implications for philosophical debate concerning Rationality. First, Rational Analysis provides a model for the relationship between formal principles of Rationality (such as probability or decision theory) and everyday Rationality, in the sense of successful thought and action in daily life. Second, applying the program of Rational Analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irRationality.

  • Ten years of the Rational Analysis of cognition
    Trends in Cognitive Sciences, 1999
    Co-Authors: N. Chater, Mike Oaksford
    Abstract:

    Rational Analysis is an empirical program that attempts to explain the function and purpose of cognitive processes. This article looks back on a decade of research outlining the Rational Analysis methodology and how the approach relates to other work in cognitive science. We illustrate Rational Analysis by considering how it has been applied to memory and reasoning. From the perspective of traditional cognitive science, the cognitive system can appear to be a rather arbitrary assortment of mechanisms with equally arbitrary limitations. In contrast, Rational Analysis views cognition as intricately adapted to its environment and to the problems it faces.