Formal Theory

The Experts below are selected from a list of 193,635 Experts worldwide, ranked by the ideXlab platform.

Jurgen Schmidhuber - One of the best experts on this subject based on the ideXlab platform.

  • Maximizing Fun by Creating Data with Easily Reducible Subjective Complexity
    Intrinsically Motivated Learning in Natural and Artificial Systems, 2013
    Co-Authors: Jurgen Schmidhuber
    Abstract:

    The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal Theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial & novel & surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple Formal principle explains science and art and music and humor.
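
    The two-module loop sketched in this abstract can be illustrated in a few lines of Python. The toy below is only a sketch under assumed simplifications, not Schmidhuber's implementation: the encoder is a count-based Bernoulli predictor, the creator is an epsilon-greedy chooser over two actions, and the creator's sole reward is the encoder's compression progress on previously gathered data; all names, the toy environment, and the learning rates are assumptions made for the example.

        import math
        import random

        class ToyEncoder:
            """Count-based Bernoulli model per action, with Laplace smoothing."""
            def __init__(self):
                self.ones = {}   # action -> number of times observation 1 was seen
                self.n = {}      # action -> total number of observations seen

            def prob_one(self, action):
                return (self.ones.get(action, 0) + 1) / (self.n.get(action, 0) + 2)

            def code_length(self, action, history):
                """Nats needed to encode `history` under the current, frozen model."""
                p = self.prob_one(action)
                return sum(-math.log(p if obs == 1 else 1.0 - p) for obs in history)

            def update(self, action, obs):
                self.ones[action] = self.ones.get(action, 0) + (1 if obs == 1 else 0)
                self.n[action] = self.n.get(action, 0) + 1

        def environment(action):
            # Toy world: action 0 yields a learnable regularity, action 1 pure noise.
            return 1 if action == 0 else random.randint(0, 1)

        encoder = ToyEncoder()
        history = {0: [], 1: []}
        value = {0: 0.0, 1: 0.0}   # the creator's running estimate of "fun" per action
        for step in range(500):
            a = max(value, key=value.get) if random.random() > 0.2 else random.choice([0, 1])
            obs = environment(a)
            before = encoder.code_length(a, history[a])  # old data under the old model
            encoder.update(a, obs)                       # encoder learns from the new datum
            after = encoder.code_length(a, history[a])   # old data under the improved model
            fun = before - after                         # learning progress = intrinsic reward
            history[a].append(obs)
            value[a] += 0.05 * (fun - value[a])
        print(value)  # the learnable action earns "fun" only while it is still being
                      # learned; the noisy action yields no lasting compression progress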

  • A Formal Theory of Creativity to Model the Creation of Art
    2012
    Co-Authors: Jurgen Schmidhuber
    Abstract:

    According to the Formal Theory of Creativity (1990–2010), a creative agent—one that never stops generating non-trivial, novel, and surprising behaviours and data—must have two learning components: a general reward optimiser or reinforcement learner, and an adaptive encoder of the agent’s growing data history (the record of the agent’s interaction with its environment). The learning progress of the encoder is the intrinsic reward for the reward optimiser. That is, the latter is motivated to invent interesting spatio-temporal patterns that the encoder does not yet know but can easily learn to encode better with little computational effort. To maximise expected reward (in the absence of external reward), the reward optimiser will create more and more complex behaviours that yield temporarily surprising (but eventually boring) patterns that make the encoder quickly improve. I have argued that this simple principle explains science, art, music and humour. It is possible to rigorously Formalise it and implement it on learning machines, thus building artificial robotic scientists and artists equipped with curiosity and creativity. I summarise my work on this topic since 1990, and present a previously unpublished low-complexity artwork computable by a very short program discovered through active search for novel patterns according to the principles of the Theory.
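
    As a purely hypothetical aside on "computable by a very short program" (this is not the artwork presented in the paper), a few lines of Python are enough to generate a highly regular, low-algorithmic-complexity image and save it as a plain PGM file:

        # Hypothetical illustration only; NOT the artwork described in the paper.
        size = 256
        rows = []
        for y in range(size):
            row = []
            for x in range(size):
                # One short arithmetic rule produces concentric rings around the centre.
                r2 = (x - size // 2) ** 2 + (y - size // 2) ** 2
                row.append("255" if (r2 // 97) % 2 == 0 else "0")
            rows.append(" ".join(row))
        with open("pattern.pgm", "w") as f:  # plain-text greyscale image format
            f.write(f"P2\n{size} {size}\n255\n" + "\n".join(rows) + "\n")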

  • Formal Theory of Fun and Creativity
    European Conference on Machine Learning, 2010
    Co-Authors: Jurgen Schmidhuber
    Abstract:

    To build a creative agent that never stops generating non-trivial & novel & surprising data, we need two learning modules: (1) an adaptive predictor or compressor or model of the growing data history as the agent is interacting with its environment, and (2) a general reinforcement learner. The LEARNING PROGRESS of (1) is the FUN or intrinsic reward of (2). That is, (2) is motivated to invent interesting things that (1) does not yet know but can easily learn.

  • Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
    IEEE Transactions on Autonomous Mental Development, 2010
    Co-Authors: Jurgen Schmidhuber
    Abstract:

    The simple, but general Formal Theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less Formal ideas in aesthetics Theory and developmental psychology. It has been argued that the Theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation.
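
    In discrete time, the quantity at the heart of this abstract can be written as follows; the notation is an illustrative paraphrase rather than a quotation from the paper. The intrinsic reward at step t is proportional to the compression progress on the history h(≤ t):

        r_{\mathrm{int}}(t) \;\propto\; C\bigl(h(\le t),\, \theta_{t-1}\bigr) \;-\; C\bigl(h(\le t),\, \theta_{t}\bigr)

    Here C(h, θ) is the number of bits the compressor needs to encode the history h with parameters θ, and θ_t are the parameters after training on h(≤ t). The reinforcement learner maximizes the expected sum of future intrinsic rewards, so it prefers actions leading to data whose regularities the compressor can still learn, rather than data that is already fully predictable or irreducibly random.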

Margaret H Kearney - One of the best experts on this subject based on the ideXlab platform.

  • Enduring Love: A Grounded Formal Theory of Women's Experience of Domestic Violence
    Research in Nursing & Health, 2001
    Co-Authors: Margaret H Kearney
    Abstract:

    Using a grounded Formal Theory approach, 13 qualitative research reports were analyzed with the goal of synthesizing a middle-range Theory of women's responses to violent relationships. The combined sample numbered 282 ethnically and geographically diverse women ages 16-67. Within cultural contexts that normalized relationship violence while promoting idealized romance, these women dealt with the incongruity of violence in their relationships as a basic process of enduring love. In response to shifting definitions of their relationship situations, many women moved through four phases, which began with discounting early violence for the sake of their romantic commitment ("This is what I wanted"), progressed to immobilization and demoralization in the face of increasingly unpredictable violence that was endured by the careful monitoring of partner behavior and the stifling of self ("The more I do, the worse I am"), shifted to a perspective that redefined the situation as unacceptable ("I had enough"), and finally moved out of the relationship and toward a new life ("I was finding me"). Variations in the manifestation and duration of these phases were found to be linked to personal, sociopolitical, and cultural contexts.

  • Ready-to-Wear: Discovering Grounded Formal Theory
    Research in Nursing & Health, 1998
    Co-Authors: Margaret H Kearney
    Abstract:

    As qualitative methods have become popular and qualitative reports abundant, researchers have begun to discuss techniques for synthesizing findings about related phenomena from diverse samples. Grounded Formal Theory analysis is one such approach that can yield higher level, broadly applicable Theory from analysis of situation-specific substantive theories. Although grounded Formal theories may lack the cultural detail and contextual tailoring of smaller, more focused “designer” analyses, they have the potential to serve as “ready-to-wear” models that fit experiences of individuals in a variety of settings. © 1998 John Wiley & Sons, Inc. Res Nurs Health 21: 179–186, 1998

Andrew S. Gordon - One of the best experts on this subject based on the ideXlab platform.

  • A Formal Theory of Commonsense Psychology: How People Think People Think
    2017
    Co-Authors: Andrew S. Gordon, Jerry R. Hobbs
    Abstract:

    Commonsense psychology refers to the implicit theories that we all use to make sense of people's behavior in terms of their beliefs, goals, plans, and emotions. These are also the theories we employ when we anthropomorphize complex machines and computers as if they had humanlike mental lives. In order to successfully cooperate and communicate with people, these theories will need to be represented explicitly in future artificial intelligence systems. This book provides a large-scale logical Formalization of commonsense psychology in support of humanlike artificial intelligence. It uses Formal logic to encode the deep lexical semantics of the full breadth of psychological words and phrases, providing fourteen hundred axioms of first-order logic organized into twenty-nine commonsense psychology theories and sixteen background theories. This in-depth exploration of human commonsense reasoning for artificial intelligence researchers, linguists, and cognitive and social psychologists will serve as a foundation for the development of humanlike artificial intelligence.

  • Toward a Large-Scale Formal Theory of Commonsense Psychology for Metacognition
    2014
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    Robust intelligent systems will require a capacity for metacognitive reasoning, where intelligent systems monitor and reflect on their own reasoning processes. A large-scale study of human strategic reasoning indicates that rich representational models of commonsense psychology are available to enable human metacognition. In this paper, we argue that large-scale Formalizations of commonsense psychology enable metacognitive reasoning in intelligent systems. We describe our progress toward developing 30 integrated axiomatic theories of commonsense psychology, and discuss the central representational challenges that have arisen in this work to date.

  • Goals in a Formal Theory of Commonsense Psychology
    Formal Ontology in Information Systems, 2010
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    In the context of developing Formal theories of commonsense psychology, or how people think they think, we have developed a Formal Theory of goals. In it we explicate and axiomatize, among others, the goal-related notions of trying, success, failure, functionality, intactness, and importance.
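
    To give a flavour of the kind of axiom involved, a hypothetical axiom in this spirit might state that an agent succeeds at a goal only if the goal is the agent's, the agent tries to achieve it, and the goal's content really comes to hold; the predicate names succeed, goal, try, and Rexist below are illustrative placeholders, not quoted from the paper:

        \forall a\, \forall g\; \bigl(\mathit{succeed}(a, g) \rightarrow \mathit{goal}(g, a) \wedge \mathit{try}(a, g) \wedge \mathit{Rexist}(g)\bigr)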

  • Toward a Large-Scale Formal Theory of Commonsense Psychology for Metacognition
    National Conference on Artificial Intelligence, 2006
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    Robust intelligent systems will require a capacity for metacognitive reasoning, where intelligent systems monitor and reflect on their own reasoning processes. A large-scale study of human strategic reasoning indicates that rich representational models of commonsense psychology are available to enable human metacognition. In this paper, we argue that large-scale Formalizations of commonsense psychology enable metacognitive reasoning in intelligent systems. We describe our progress toward developing 30 integrated axiomatic theories of commonsense psychology, and discuss the central representational challenges that have arisen in this work to date.

Jerry R. Hobbs - One of the best experts on this subject based on the ideXlab platform.

  • A Formal Theory of Commonsense Psychology: How People Think People Think
    2017
    Co-Authors: Andrew S. Gordon, Jerry R. Hobbs
    Abstract:

    Commonsense psychology refers to the implicit theories that we all use to make sense of people's behavior in terms of their beliefs, goals, plans, and emotions. These are also the theories we employ when we anthropomorphize complex machines and computers as if they had humanlike mental lives. In order to successfully cooperate and communicate with people, these theories will need to be represented explicitly in future artificial intelligence systems. This book provides a large-scale logical Formalization of commonsense psychology in support of humanlike artificial intelligence. It uses Formal logic to encode the deep lexical semantics of the full breadth of psychological words and phrases, providing fourteen hundred axioms of first-order logic organized into twenty-nine commonsense psychology theories and sixteen background theories. This in-depth exploration of human commonsense reasoning for artificial intelligence researchers, linguists, and cognitive and social psychologists will serve as a foundation for the development of humanlike artificial intelligence.

  • Toward a Large-Scale Formal Theory of Commonsense Psychology for Metacognition
    2014
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    Robust intelligent systems will require a capacity for metacognitive reasoning, where intelligent systems monitor and reflect on their own reasoning processes. A large-scale study of human strategic reasoning indicates that rich representational models of commonsense psychology are available to enable human metacognition. In this paper, we argue that large-scale Formalizations of commonsense psychology enable metacognitive reasoning in intelligent systems. We describe our progress toward developing 30 integrated axiomatic theories of commonsense psychology, and discuss the central representational challenges that have arisen in this work to date.

  • Goals in a Formal Theory of Commonsense Psychology
    Formal Ontology in Information Systems, 2010
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    In the context of developing Formal theories of commonsense psychology, or how people think they think, we have developed a Formal Theory of goals. In it we explicate and axiomatize, among others, the goal-related notions of trying, success, failure, functionality, intactness, and importance.

  • Toward a Large-Scale Formal Theory of Commonsense Psychology for Metacognition
    National Conference on Artificial Intelligence, 2006
    Co-Authors: Jerry R. Hobbs, Andrew S. Gordon
    Abstract:

    Robust intelligent systems will require a capacity for metacognitive reasoning, where intelligent systems monitor and reflect on their own reasoning processes. A large-scale study of human strategic reasoning indicates that rich representational models of commonsense psychology are available to enable human metacognition. In this paper, we argue that large-scale Formalizations of commonsense psychology enable metacognitive reasoning in intelligent systems. We describe our progress toward developing 30 integrated axiomatic theories of commonsense psychology, and discuss the central representational challenges that have arisen in this work to date.

Bennett G Galef - One of the best experts on this subject based on the ideXlab platform.

  • Chapter 4. Strategies for Social Learning: Testing Predictions from Formal Theory
    Advances in The Study of Behavior, 2009
    Co-Authors: Bennett G Galef
    Abstract:

    Individual learning provides accurate information about the state of the environment, but is costly to acquire. Social learning, although potentially less reliable than individual learning, is also cheaper. This tradeoff between accuracy and cost has been used to construct Formal models that predict when animals should increase reliance on social learning and from whom they should learn. We used Norway rats' social learning about foods as an empirical system to test predictions from Formal Theory concerning (a) the conditions under which animals should increase reliance on social learning (“when strategies”) and (b) from whom they should learn (“who strategies”). We found empirical support for four of five predictions from “when strategies,” but marginal support for only one of seven predictions from “who strategies.” We discuss possible reasons why “when strategies” were more successful than “who strategies” in predicting rats' reliance on social learning when selecting foods.
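
    The accuracy-cost tradeoff behind the "when strategies" can be sketched with a deliberately simple decision rule; the symbols below are illustrative assumptions, not the models tested in the chapter. An animal should rely on social learning when

        p_s\, b - c_s \;>\; p_i\, b - c_i

    where b is the payoff of acting on correct information, p_s < p_i are the probabilities that socially versus individually acquired information is correct, and c_s < c_i are the respective acquisition costs. Rules of this kind predict more copying when individual sampling becomes costlier or riskier, and less copying when the environment changes quickly enough to make social information outdated.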

  • Strategies for Social Learning: Testing Predictions from Formal Theory
    2009
    Co-Authors: Bennett G Galef
    Abstract:

    Although the past two decades have seen a significant increase in both theoretical analyses and experimental studies of social learning in animals, there has been relatively little productive interchange between theoreticians and empirical investigators; Formal models have had relatively little impact on the experiments undertaken by empiricists, and experimental data have played little role in the construction of models. As Laland (2004, p. 12) has suggested, "There is a need for empirical research explicitly evaluating the strategies proposed by theoretical models." Animals can acquire adaptive information either directly, as a result of their own personal experience of the consequences of engaging in alternative behaviors, or indirectly, by using various aspects of the behavior of others to guide the development of their own behavioral repertoires. Personal sampling results in the acquisition of accurate, current information about the environment. However, the errors that are an inescapable part of individual, trial-and-error learning can be costly, and personal exploration of the environment not only requires time and energy but also increases exposure to predation and other environmental threats. Social learning has the potential to reduce the costs of individual learning. However, in environments that change over time, social learning is likely to be less reliable than individual learning because, in changing environments, social information can be outdated. Similarly, in environments that vary spatially, there is a risk that potential models are engaged in behavior more suited to environmental conditions other than those facing a potential social learner. Relative reliance on social and individual learning can thus be expected to reflect a tradeoff between the cost of acquiring information and its accuracy.