Solomonoff

The experts below are selected from a list of 1,131 experts worldwide, ranked by the ideXlab platform.

Marcus Hutter - One of the best experts on this subject based on the ideXlab platform.

  • A Gentle Introduction to Quantum Computing Algorithms with Applications to Universal Prediction.
    arXiv: Quantum Physics, 2020
    Co-Authors: Elliot Catt, Marcus Hutter
    Abstract:

    In this technical report we give an elementary introduction to quantum computing for non-physicists. We describe in detail some of the foundational quantum algorithms, including the Deutsch-Jozsa Algorithm, Shor's Algorithm, Grover Search, and the Quantum Counting Algorithm, and briefly the Harrow-Hassidim-Lloyd Algorithm. Additionally, we give an introduction to Solomonoff induction, a theoretically optimal method for prediction. We then attempt to use quantum computing to find better algorithms for approximating Solomonoff induction. This is done by using techniques from other quantum computing algorithms to achieve a speedup in computing the speed prior, which approximates Solomonoff's prior, a key part of Solomonoff induction. The major limiting factor is that the probabilities being computed are often so small that, without a sufficiently (often prohibitively) large number of trials, the error may be larger than the result. If a substantial speedup in the computation of an approximation of Solomonoff induction can be achieved through quantum computing, it can be applied to the field of intelligent agents as a key part of an approximation of the agent AIXI.
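
    For reference, the two priors involved can be written in their standard textbook forms (the paper's exact variants may differ). With U a universal prefix machine and \ell(p) the length of program p, Solomonoff's prior weights every program whose output begins with x, and the speed prior additionally discounts by running time:

        M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad S(x) \;\propto\; \sum_{p \,:\, U(p) = x*} \frac{2^{-\ell(p)}}{\mathrm{time}(p)}

    The time discount is what makes approximating S tractable enough to be a sensible target for a quantum speedup, whereas M itself is only approximable from below with no effective error bound.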

  • On the computability of Solomonoff induction and AIXI
    Theoretical Computer Science, 2018
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    How could we solve the machine learning and the artificial intelligence problem if we had infinite computation? Solomonoff induction and the reinforcement learning agent AIXI are proposed answers to this question. Both are known to be incomputable. We quantify this using the arithmetical hierarchy, and prove upper and, in most cases, corresponding lower bounds for incomputability. Moreover, we show that AIXI is not limit computable, thus it cannot be approximated using finite computation. However, there are limit-computable ε-optimal approximations to AIXI. We also derive computability bounds for knowledge-seeking agents, and give a limit-computable weakly asymptotically optimal reinforcement learning agent.
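
    For readers unfamiliar with the terminology (standard definitions, not from the paper): a function f is limit computable if some computable \varphi approximates it in the limit, with no computable bound on when the approximation settles,

        f \in \Delta^0_2 \iff \exists \text{ computable } \varphi \ \forall x : \lim_{k \to \infty} \varphi(x, k) = f(x),

    and the arithmetical hierarchy \Sigma^0_n / \Pi^0_n / \Delta^0_n grades sets by the alternating quantifier prefix needed to define them over a decidable relation. "Not limit computable" therefore rules out any anytime algorithm whose guesses eventually stabilize on the right answer.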

  • On the Computability of AIXI
    2016
    Co-Authors: Marcus Hutter
    Abstract:

    How could we solve the machine learning and the artificial intelligence problem if we had infinite computation? Solomonoff induction and the reinforcement learning agent AIXI are proposed answers to this question. Both are known to be incomputable. In this paper, we quantify this using the arithmetical hierarchy, and prove upper and corresponding lower bounds for incomputability. We show that AIXI is not limit computable, thus it cannot be approximated using finite computation. Our main result is a limit-computable ε-optimal version of AIXI with infinite horizon that maximizes expected rewards.
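
    For context (Hutter's standard definition, shown with a finite horizon m for readability; the paper's infinite-horizon variant differs in this detail): AIXI picks actions by an expectimax over a Solomonoff-style mixture of environments,

        a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_t + \cdots + r_m \big] \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},

    i.e. expected total reward under the universal prior, maximized over action sequences. The inner sum over programs q is what inherits Solomonoff induction's incomputability.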

  • ALT - Solomonoff Induction Violates Nicod's Criterion
    Lecture Notes in Computer Science, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    Nicod's criterion states that observing a black raven is evidence for the hypothesis H that all ravens are black. We show that Solomonoff induction does not satisfy Nicod's criterion: there are time steps in which observing black ravens decreases the belief in H. Moreover, while observing any computable infinite string compatible with H, the belief in H decreases infinitely often when using the unnormalized Solomonoff prior, but only finitely often when using the normalized Solomonoff prior. We argue that the fault is not with Solomonoff induction; instead we should reject Nicod's criterion.
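
    The two priors compared here are, in the standard formulation (the paper may differ in minor details), the plain prior M, which is only a semimeasure (M(x0) + M(x1) < M(x) in general, since some programs never output another bit), and Solomonoff's normalization of it,

        M_{\mathrm{norm}}(\epsilon) = 1, \qquad M_{\mathrm{norm}}(xb) = M_{\mathrm{norm}}(x) \, \frac{M(xb)}{M(x0) + M(x1)}, \quad b \in \{0, 1\},

    which redistributes the "lost" probability mass so that conditional beliefs sum to one.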

  • ALT - On the Computability of Solomonoff Induction and Knowledge-Seeking
    Lecture Notes in Computer Science, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    Solomonoff induction is held as a gold standard for learning, but it is known to be incomputable. We quantify its incomputability by placing various flavors of Solomonoff's prior M in the arithmetical hierarchy. We also derive computability bounds for knowledge-seeking agents, and give a limit-computable weakly asymptotically optimal reinforcement learning agent.
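
    One relevant background fact (well known from the algorithmic information theory literature, rather than specific to this paper): the plain prior M is lower semicomputable, i.e. there is a computable \phi(x, k), nondecreasing in k, with \lim_{k \to \infty} \phi(x, k) = M(x), obtained by running ever more programs for ever more steps. Normalization is generally understood to cost this property while keeping limit computability, which is one reason to distinguish "flavors" of M.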

Jan Leike - One of the best experts on this subject based on the ideXlab platform.

  • On the computability of Solomonoff induction and AIXI
    Theoretical Computer Science, 2018
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    How could we solve the machine learning and the artificial intelligence problem if we had infinite computation? Solomonoff induction and the reinforcement learning agent AIXI are proposed answers to this question. Both are known to be incomputable. We quantify this using the arithmetical hierarchy, and prove upper and, in most cases, corresponding lower bounds for incomputability. Moreover, we show that AIXI is not limit computable, thus it cannot be approximated using finite computation. However, there are limit-computable ε-optimal approximations to AIXI. We also derive computability bounds for knowledge-seeking agents, and give a limit-computable weakly asymptotically optimal reinforcement learning agent.

  • ALT - Solomonoff Induction Violates Nicod's Criterion
    Lecture Notes in Computer Science, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    Nicod's criterion states that observing a black raven is evidence for the hypothesis H that all ravens are black. We show that Solomonoff induction does not satisfy Nicod's criterion: there are time steps in which observing black ravens decreases the belief in H. Moreover, while observing any computable infinite string compatible with H, the belief in H decreases infinitely often when using the unnormalized Solomonoff prior, but only finitely often when using the normalized Solomonoff prior. We argue that the fault is not with Solomonoff induction; instead we should reject Nicod's criterion.

  • ALT - On the Computability of Solomonoff Induction and Knowledge-Seeking
    Lecture Notes in Computer Science, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    Solomonoff induction is held as a gold standard for learning, but it is known to be incomputable. We quantify its incomputability by placing various flavors of Solomonoff's prior M in the arithmetical hierarchy. We also derive computability bounds for knowledge-seeking agents, and give a limit-computable weakly asymptotically optimal reinforcement learning agent.

  • On the Computability of AIXI
    arXiv: Artificial Intelligence, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    How could we solve the machine learning and the artificial intelligence problem if we had infinite computation? Solomonoff induction and the reinforcement learning agent AIXI are proposed answers to this question. Both are known to be incomputable. In this paper, we quantify this using the arithmetical hierarchy, and prove upper and corresponding lower bounds for incomputability. We show that AIXI is not limit computable, thus it cannot be approximated using finite computation. Our main result is a limit-computable ε-optimal version of AIXI with infinite horizon that maximizes expected rewards.

  • Solomonoff Induction Violates Nicod's Criterion
    arXiv: Learning, 2015
    Co-Authors: Jan Leike, Marcus Hutter
    Abstract:

    Nicod's criterion states that observing a black raven is evidence for the hypothesis H that all ravens are black. We show that Solomonoff induction does not satisfy Nicod's criterion: there are time steps in which observing black ravens decreases the belief in H. Moreover, while observing any computable infinite string compatible with H, the belief in H decreases infinitely often when using the unnormalized Solomonoff prior, but only finitely often when using the normalized Solomonoff prior. We argue that the fault is not with Solomonoff induction; instead we should reject Nicod's criterion.

A. Della Cioppa - One of the best experts on this subject based on the ideXlab platform.

  • Parsimony Doesn't Mean Simplicity: Genetic Programming for Inductive Inference on Noisy Data
    European Conference on Genetic Programming, 2007
    Co-Authors: Ivanoe De Falco, A. Della Cioppa, D Maisto, Umberto Scafuri, E Tarantino
    Abstract:

    A Genetic Programming algorithm based on Solomonoff's probabilistic induction is designed and used to tackle an inductive inference task, i.e., symbolic regression. To this aim, some test functions are dressed with increasing levels of noise and the algorithm is employed to denoise the resulting data and recover the original functions. The algorithm is then compared against a classical parsimony-based GP. The results show the superiority of the Solomonoff-based approach.
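
    A minimal sketch of the kind of scoring rule such an approach suggests (an illustration under assumptions, not the authors' implementation; the token-count length proxy and the Gaussian noise model are choices made here): each candidate expression is weighted by a 2^{-length} Occam prior times its likelihood on the noisy data, instead of a hand-tuned parsimony penalty.

        import math

        def description_length(tokens):
            # Crude proxy for program length: one unit per token; a real
            # system would measure code length under the GP grammar.
            return len(tokens)

        def solomonoff_style_score(expr, tokens, xs, ys, sigma=0.1):
            # log of [2^{-length} prior] x [Gaussian likelihood of the data]
            log_prior = -description_length(tokens) * math.log(2)
            log_like = sum(-0.5 * ((expr(x) - y) / sigma) ** 2
                           for x, y in zip(xs, ys))
            return log_prior + log_like  # selection maximizes this

        # Usage: noisy samples of f(x) = x^2; of two equally accurate
        # candidates, the shorter one gets the higher score.
        xs = [i / 10 for i in range(-10, 11)]
        ys = [x * x + 0.05 for x in xs]
        short = (lambda x: x * x, ["*", "x", "x"])
        long_ = (lambda x: x * x + 0 * x, ["+", "*", "x", "x", "*", "0", "x"])
        print(solomonoff_style_score(*short, xs, ys))   # higher
        print(solomonoff_style_score(*long_, xs, ys))   # lower: same fit, longer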

  • EuroGP - Parsimony doesn't mean simplicity: genetic programming for inductive inference on noisy data
    Lecture Notes in Computer Science, 2007
    Co-Authors: Ivanoe De Falco, A. Della Cioppa, D Maisto, Umberto Scafuri, Ernesto Tarantino
    Abstract:

    A Genetic Programming algorithm based on Solomonoff's probabilistic induction is designed and used to tackle an inductive inference task, i.e., symbolic regression. To this aim, some test functions are dressed with increasing levels of noise and the algorithm is employed to denoise the resulting data and recover the original functions. The algorithm is then compared against a classical parsimony-based GP. The results show the superiority of the Solomonoff-based approach.

  • Genetic programming for inductive inference of chaotic series
    Lecture Notes in Computer Science, 2006
    Co-Authors: I. De Falco, Alessandra Passaro, A. Della Cioppa, Ernesto Tarantino
    Abstract:

    In the context of inductive inference, Solomonoff complexity plays a key role in correctly predicting the behavior of a given phenomenon. Unfortunately, Solomonoff complexity is not algorithmically computable. This paper deals with a Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, which consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given series of chaotic data. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The results show that the method is effective in obtaining the analytical expressions of the first two series, and in achieving a very good approximation and forecasting of the Mackey-Glass series.
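
    The three benchmark series can be generated as follows (a sketch using the parameter values most common in the literature; the paper's exact settings are not stated in the abstract):

        def logistic(n, r=4.0, x0=0.2):
            # Logistic map x_{k+1} = r * x_k * (1 - x_k); chaotic at r = 4.
            xs = [x0]
            for _ in range(n - 1):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        def henon(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
            # Henon map x_{k+1} = 1 - a*x_k^2 + y_k, y_{k+1} = b*x_k.
            xs, x, y = [], x0, y0
            for _ in range(n):
                xs.append(x)
                x, y = 1.0 - a * x * x + y, b * x
            return xs

        def mackey_glass(n, beta=0.2, gamma=0.1, p=10, tau=17, x0=1.2):
            # Mackey-Glass delay equation, coarse Euler step of 1 time unit:
            # dx/dt = beta * x(t - tau) / (1 + x(t - tau)^p) - gamma * x(t).
            hist = [x0] * (tau + 1)  # constant history before t = 0
            xs = []
            for _ in range(n):
                x, x_tau = hist[-1], hist[0]
                x_new = x + beta * x_tau / (1.0 + x_tau ** p) - gamma * x
                hist = hist[1:] + [x_new]
                xs.append(x_new)
            return xs

        print(logistic(4))      # [0.2, 0.64, 0.9216, 0.28901376]
        print(henon(3))
        print(mackey_glass(3))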

  • WILF - Genetic programming for inductive inference of chaotic series
    Fuzzy Logic and Applications, 2005
    Co-Authors: I. De Falco, Alessandra Passaro, A. Della Cioppa, Ernesto Tarantino
    Abstract:

    In the context of inductive inference, Solomonoff complexity plays a key role in correctly predicting the behavior of a given phenomenon. Unfortunately, Solomonoff complexity is not algorithmically computable. This paper deals with a Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, which consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given series of chaotic data. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The results show that the method is effective in obtaining the analytical expressions of the first two series, and in achieving a very good approximation and forecasting of the Mackey-Glass series.

  • Inductive Inference of Chaotic Series by Genetic Programming: A Solomonoff-Based Approach
    ACM Symposium on Applied Computing, 2005
    Co-Authors: I. De Falco, A. Della Cioppa, E Tarantino, Alessandra Passaro
    Abstract:

    A Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, is presented. It consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given chaotic data series. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The method is shown to be effective in obtaining the analytical expressions of the first two series, and in achieving very good results on the third.

Ernesto Tarantino - One of the best experts on this subject based on the ideXlab platform.

  • EuroGP - Parsimony doesn't mean simplicity: genetic programming for inductive inference on noisy data
    Lecture Notes in Computer Science, 2007
    Co-Authors: Ivanoe De Falco, A. Della Cioppa, D Maisto, Umberto Scafuri, Ernesto Tarantino
    Abstract:

    A Genetic Programming algorithm based on Solomonoff's probabilistic induction is designed and used to tackle an inductive inference task, i.e., symbolic regression. To this aim, some test functions are dressed with increasing levels of noise and the algorithm is employed to denoise the resulting data and recover the original functions. The algorithm is then compared against a classical parsimony-based GP. The results show the superiority of the Solomonoff-based approach.

  • Genetic programming for inductive inference of chaotic series
    Lecture Notes in Computer Science, 2006
    Co-Authors: I. De Falco, Alessandra Passaro, A. Della Cioppa, Ernesto Tarantino
    Abstract:

    In the context of inductive inference, Solomonoff complexity plays a key role in correctly predicting the behavior of a given phenomenon. Unfortunately, Solomonoff complexity is not algorithmically computable. This paper deals with a Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, which consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given series of chaotic data. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The results show that the method is effective in obtaining the analytical expressions of the first two series, and in achieving a very good approximation and forecasting of the Mackey-Glass series.

  • WILF - Genetic programming for inductive inference of chaotic series
    Fuzzy Logic and Applications, 2005
    Co-Authors: I. De Falco, Alessandra Passaro, A. Della Cioppa, Ernesto Tarantino
    Abstract:

    In the context of inductive inference, Solomonoff complexity plays a key role in correctly predicting the behavior of a given phenomenon. Unfortunately, Solomonoff complexity is not algorithmically computable. This paper deals with a Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, which consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given series of chaotic data. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The results show that the method is effective in obtaining the analytical expressions of the first two series, and in achieving a very good approximation and forecasting of the Mackey-Glass series.

  • SAC - Inductive inference of chaotic series by Genetic Programming: a Solomonoff-based approach
    Proceedings of the 2005 ACM symposium on Applied computing - SAC '05, 2005
    Co-Authors: I. De Falco, Ernesto Tarantino, A. Della Cioppa, Alessandra Passaro
    Abstract:

    A Genetic Programming approach to inductive inference of chaotic series, with reference to Solomonoff complexity, is presented. It consists in evolving a population of mathematical expressions in search of the 'optimal' one that generates a given chaotic data series. Validation is performed on the Logistic, Hénon, and Mackey-Glass series. The method is shown to be effective in obtaining the analytical expressions of the first two series, and in achieving very good results on the third.

Tor Lattimore - One of the best experts on this subject based on the ideXlab platform.

  • On Martin-Löf Convergence of Solomonoff's Mixture
    Theory and Applications of Models of Computation, 2013
    Co-Authors: Tor Lattimore, Marcus Hutter
    Abstract:

    We study the convergence of Solomonoff's universal mixture on individual Martin-Löf random sequences. A new result is presented extending the work of Hutter and Muchnik (2004) by showing that there does not exist a universal mixture that converges on all Martin-Löf random sequences.
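
    For background (standard definitions, not taken from the abstract): a binary sequence \omega is Martin-Löf random if it passes every effective statistical test; by the Levin-Schnorr theorem this is equivalent to its prefixes being incompressible,

        \exists c \ \forall n : \; K(\omega_{1:n}) \ge n - c,

    with K the prefix Kolmogorov complexity. "Convergence on \omega" here means, roughly, that the predictive probabilities M(b \mid \omega_{1:n}) approach the true conditional probabilities along \omega; the result above says no universal mixture achieves this on every Martin-Löf random sequence.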

  • No free lunch versus Occam's Razor in supervised learning
    Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 2013
    Co-Authors: Tor Lattimore, Marcus Hutter
    Abstract:

    The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured (compressible) problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing the misclassification rate.
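
    The universal bias argued for is, in its standard form (a background gloss, not the paper's exact statement), the prior that weights each hypothesis h by its Kolmogorov complexity,

        P(h) \propto 2^{-K(h)},

    so nearly all prior mass falls on simple (compressible) hypotheses. No-Free-Lunch averages, by contrast, implicitly place a uniform prior over all functions, almost all of which are incompressible and hence, on this view, uninteresting.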

  • No Free Lunch versus Occam's Razor in Supervised Learning
    arXiv: Learning, 2011
    Co-Authors: Tor Lattimore, Marcus Hutter
    Abstract:

    The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing misclassification rates.

  • Universal Prediction of Selected Bits
    arXiv: Learning, 2011
    Co-Authors: Tor Lattimore, Marcus Hutter, Vaibhav Gavane
    Abstract:

    Many learning tasks can be viewed as sequence prediction problems. For example, online classification can be converted to sequence prediction with the sequence being pairs of input/target data and where the goal is to correctly predict the target data given input data and previous input/target pairs. Solomonoff induction is known to solve the general sequence prediction problem, but only if the entire sequence is sampled from a computable distribution. In the case of classification and discriminative learning though, only the targets need be structured (given the inputs). We show that the normalised version of Solomonoff induction can still be used in this case, and more generally that it can detect any recursive sub-pattern (regularity) within an otherwise completely unstructured sequence. It is also shown that the unnormalised version can fail to predict very simple recursive sub-patterns.
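
    The classification-to-sequence-prediction conversion described in the abstract can be sketched as follows (a toy illustration with assumed encoding details; the dictionary-based predictor merely stands in for Solomonoff induction and does not approximate it):

        import random

        def to_sequence(pairs):
            # Flatten (input, target) pairs into x1, y1, x2, y2, ...;
            # only the y-positions need to be structured given the x's.
            seq = []
            for x, y in pairs:
                seq.extend([x, y])
            return seq

        def online_accuracy(pairs):
            # Predict each target from earlier pairs plus the current input.
            last_seen, correct = {}, 0
            for x, y in pairs:
                guess = last_seen.get(x, random.choice([0, 1]))
                correct += (guess == y)
                last_seen[x] = y
            return correct / len(pairs)

        # Inputs may be arbitrary (even incompressible); only the
        # input->target relation (here: parity) must be computable.
        data = [(random.getrandbits(4), 0) for _ in range(200)]
        data = [(x, x % 2) for x, _ in data]
        print(to_sequence(data[:2]))   # [x1, y1, x2, y2]
        print(online_accuracy(data))   # high, once inputs start repeating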

  • ALT - Universal prediction of selected bits
    Lecture Notes in Computer Science, 2011
    Co-Authors: Tor Lattimore, Marcus Hutter, Vaibhav Gavane
    Abstract:

    Many learning tasks can be viewed as sequence prediction problems. For example, online classification can be converted to sequence prediction with the sequence being pairs of input/target data and where the goal is to correctly predict the target data given input data and previous input/target pairs. Solomonoff induction is known to solve the general sequence prediction problem, but only if the entire sequence is sampled from a computable distribution. In the case of classification and discriminative learning though, only the targets need be structured (given the inputs). We show that the normalised version of Solomonoff induction can still be used in this case, and more generally that it can detect any recursive sub-pattern (regularity) within an otherwise completely unstructured sequence. It is also shown that the unnormalised version can fail to predict very simple recursive sub-patterns.