The Experts below are selected from a list of 276 Experts worldwide ranked by the ideXlab platform.
Andris Ambainis - One of the best experts on this subject based on the ideXlab platform.
-
Probabilistic Inductive Inference: a survey
Theoretical Computer Science, 2001. Co-Authors: Andris Ambainis. Abstract: This paper surveys developments in probabilistic Inductive Inference (learning) of recursive (computable) functions. We mainly focus on finite learning, since this simple paradigm has produced the most interesting (and most complex) results.
-
Inductive Inference with procrastination: back to definitions
Fundamenta Informaticae, 1999. Co-Authors: Andris Ambainis, Rūsiņš Freivalds, Carl Smith. Abstract: In this paper, we reconsider the definition of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define Inductive Inference types different from the previously known ones. We investigate properties of the new Inductive Inference types and compare them to other types.
-
Probabilistic Inductive Inference: a survey
arXiv: Learning, 1999. Co-Authors: Andris Ambainis. Abstract: Inductive Inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic Inductive Inference. We mainly focus on finite Inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
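The finite-learning paradigm that the survey focuses on can be illustrated with a small sketch. This is my own toy model, not a construction from the paper: the hypothesis space, its names, and the data stream are all hypothetical stand-ins for an enumeration of programs.

```python
# Toy illustration (not from the survey): in finite learning the machine may
# output only a single conjecture, ever, and is never allowed to revise it.
# The learner below reads graph points of the target function and commits
# only once the data has eliminated all but one hypothesis in a small,
# hypothetical hypothesis space.

HYPOTHESES = {
    "zero":     lambda x: 0,      # the constant-zero function
    "identity": lambda x: x,      # the identity function
    "square":   lambda x: x * x,  # the squaring function
}

def finite_learn(stream):
    """Return the name of the single surviving hypothesis, or None if the
    learner never saw enough data to commit to a conjecture."""
    candidates = dict(HYPOTHESES)
    for x, y in stream:
        # discard every hypothesis the new example refutes
        candidates = {n: h for n, h in candidates.items() if h(x) == y}
        if len(candidates) == 1:
            return next(iter(candidates))  # the one, final conjecture
    return None

print(finite_learn([(0, 0), (1, 1), (2, 4)]))  # prints: square
```

The one-shot commitment is what makes the paradigm so restrictive, and why adding probabilism to it yields the intricate hierarchies the survey describes.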
-
STACS - General Inductive Inference Types Based on Linearly-Ordered Sets
STACS 96, 1996. Co-Authors: Andris Ambainis, Rūsiņš Freivalds, Carl Smith. Abstract: In this paper, we reconsider the definitions of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate the possibility of using arbitrary linearly ordered sets to bound mindchanges in a similar way. It turns out that using certain ordered sets it is possible to define Inductive Inference types more general than the previously known ones. We investigate properties of the new Inductive Inference types and compare them to other types.
-
EuroCOLT - The power of procrastination in Inductive Inference: How it depends on used ordinal notations
Lecture Notes in Computer Science, 1995. Co-Authors: Andris Ambainis. Abstract: We consider Inductive Inference with procrastination. Usually it is defined using constructive ordinals. For constructive ordinals there exist many different systems of notations. In this paper we study how the power of Inductive Inference depends on the system of notations used.
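Bounding mind changes by an ordinal can be made concrete with a toy counter. The sketch below is my own illustration for ordinals below ω², not the papers' general construction over arbitrary constructive ordinals: a pair (a, b) stands for ω·a + b, and the freedom to pick a fresh b when a is decremented is exactly the "procrastination" — the learner postpones deciding how many further mind changes it will allow itself.

```python
# Hedged toy model (my own, not the papers' construction): a mind-change
# counter holding an ordinal below omega^2, written as a pair (a, b) for
# omega*a + b. Every mind change must strictly decrease the counter.

def decrement(counter, new_b):
    """Strictly decrease the ordinal omega*a + b; new_b is the value the
    learner procrastinated on choosing until now."""
    a, b = counter
    if b > 0:
        return (a, b - 1)
    if a > 0:
        return (a - 1, new_b)  # drop below a limit ordinal: pick any fresh b
    raise RuntimeError("no mind changes left")

c = (1, 0)           # start at the ordinal omega
c = decrement(c, 5)  # first mind change: commit to at most 5 more
for _ in range(5):
    c = decrement(c, 0)
# c is now (0, 0): the learner may make no further mind changes.
```

With the bound ω, the learner thus makes finitely many mind changes on every input, yet no fixed finite bound works in advance — the extra power that procrastination buys.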
Rolf Wiehagen
-
GOSLER Final Report - Error Detecting in Inductive Inference
Algorithmic Learning for Knowledge-Based Systems, 1995. Co-Authors: Rūsiņš Freivalds, Efim Kinber, Rolf Wiehagen. Abstract: Several well-known Inductive Inference strategies change the actual hypothesis only when they discover that it “provably misclassifies” an example seen so far. This notion is made mathematically precise and its general power is characterized. In spite of its strength it is shown that this approach is not of universal power. Consequently, hypotheses are considered which “unprovably misclassify” examples and the properties of this approach are studied. Among others it turns out that this type is of the same power as monotonic identification. Then it is shown that universal power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed. Finally, a universal method is presented enabling an Inductive Inference strategy to verify the incorrectness of any of its incorrect intermediate hypotheses.
-
How Inductive Inference strategies discover their errors
Information & Computation, 1995. Co-Authors: Rūsiņš Freivalds, Efim Kinber, Rolf Wiehagen. Abstract: Several well-known Inductive Inference strategies change the actual hypothesis only when they discover that it "provably misclassifies" an example seen so far. This notion is made mathematically precise, and its general power is characterized. In spite of its strength, it is shown that this approach is not of universal power. Consequently, hypotheses are considered which "unprovably misclassify" examples, and the properties of this approach are studied. Among others, it turns out that this type is of the same power as monotonic identification. Then it is shown that universal power can be achieved only when an unbounded number of alternations of these dual types of hypotheses is allowed. Finally, a universal method is presented, enabling an Inductive Inference strategy to verify the incorrectness of any of its incorrect intermediate hypotheses.
-
From Inductive Inference to algorithmic learning theory
New Generation Computing, 1994. Co-Authors: Rolf Wiehagen. Abstract: We present two phenomena which were discovered in pure recursion-theoretic Inductive Inference, namely inconsistent learning (learning strategies producing apparently “senseless” hypotheses can solve problems unsolvable by “reasonable” learning strategies) and learning from good examples (“much less” information can lead to much more learning power). Recently, it has been shown that these phenomena also hold in the world of polynomial-time algorithmic learning. Thus Inductive Inference can be understood and used as a source of potent ideas guiding both research and applications in algorithmic learning theory.
-
ALT - From Inductive Inference to Algorithmic Learning Theory
Lecture Notes in Computer Science, 1993. Co-Authors: Rolf Wiehagen. Abstract: We present two phenomena which were discovered in pure recursion-theoretic Inductive Inference, namely inconsistent learning (learning strategies producing apparently “senseless” hypotheses can solve problems unsolvable by “reasonable” learning strategies) and learning from good examples (“much less” information can lead to much more learning power). Recently, it has been shown that these phenomena also hold in the world of polynomial-time algorithmic learning. Thus Inductive Inference can be understood and used as a source of potent ideas guiding both research and applications in algorithmic learning theory.
-
A thesis in Inductive Inference
Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic, 1990. Co-Authors: Rolf Wiehagen. Abstract: Inductive Inference is the theory of identifying recursive functions from examples. In [26], [27], [30] the following thesis was stated: Any class of recursive functions which is identifiable at all can always be identified by an enumeratively working strategy. Moreover, the identification can always be realized with respect to a suitable nonstandard (i.e. non-Gödel) numbering. We review some of the results which have led us to state this thesis. New results are presented concerning monotonic identification and corroborating the thesis. Some of the consequences of the thesis are discussed involving the development of the theory of Inductive Inference during the last decade. Problems of further investigation as well as further applications of non-Gödel numberings in Inductive Inference are summarized.
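The "enumeratively working strategy" at the heart of the thesis admits a compact sketch. The hypothesis space below is a hypothetical three-element stand-in of my own; the thesis itself concerns enumerations of all programs in a numbering.

```python
# Toy sketch of identification by enumeration (the hypothesis space is my
# own stand-in, not an actual numbering of all programs). After each example
# the strategy outputs the least index whose hypothesis is consistent with
# every example seen so far; on a target inside the space, the sequence of
# guesses converges, i.e. the function is identified in the limit.

HYPOTHESES = [
    lambda x: 0,       # index 0: the constant-zero function
    lambda x: x,       # index 1: the identity function
    lambda x: x * x,   # index 2: the squaring function
]

def enumerative_guesses(graph_points):
    """Yield the strategy's conjecture (an index) after each example."""
    seen = []
    for point in graph_points:
        seen.append(point)
        for i, h in enumerate(HYPOTHESES):
            if all(h(a) == b for a, b in seen):
                yield i
                break
        else:
            yield None  # nothing in the toy space fits the data

guesses = list(enumerative_guesses([(0, 0), (1, 1), (2, 4), (3, 9)]))
# guesses == [0, 1, 2, 2]: two mind changes, then convergence to index 2.
```

Note the strategy needs consistency with the data to be decidable, which is where the choice of numbering — Gödel versus suitable non-Gödel — becomes essential in the general theory.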
Frank Stephan
-
STACS - Inductive Inference and Reverse Mathematics
2020. Co-Authors: Rupert Hölzl, Sanjay Jain, Frank Stephan. Abstract: The present work investigates Inductive Inference from the perspective of reverse mathematics. Reverse mathematics is a framework which relates the proof strength of theorems and axioms throughout many areas of mathematics in an interdisciplinary way. The present work looks at basic notions of learnability including Angluin's tell-tale condition and its variants for learning in the limit and for conservative learning. Furthermore, the more general criterion of partial learning is investigated. These notions are studied in the reverse mathematics context for uniformly and weakly represented families of languages. The results are stated in terms of axioms referring to domination and induction strength.
-
Inductive Inference and reverse mathematics
Annals of Pure and Applied Logic, 2016. Co-Authors: Rupert Hölzl, Sanjay Jain, Frank Stephan. Abstract: The present work investigates Inductive Inference from the perspective of reverse mathematics. Reverse mathematics is a framework that allows gauging the proof strength of theorems and axioms in many areas of mathematics. The present work applies its methods to basic notions of algorithmic learning theory such as Angluin's tell-tale criterion and its variants for learning in the limit and for conservative learning, as well as to the more general scenario of partial learning. These notions are studied in the reverse mathematics context for uniformly and weakly represented families of languages. The results are stated in terms of axioms referring to induction strength and to domination of weakly represented families of functions.
-
Inductive Inference and reverse mathematics
Symposium on Theoretical Aspects of Computer Science, 2015. Co-Authors: Rupert Hölzl, Sanjay Jain, Frank Stephan. Abstract: The present work investigates Inductive Inference from the perspective of reverse mathematics. Reverse mathematics is a framework which relates the proof strength of theorems and axioms throughout many areas of mathematics in an interdisciplinary way. The present work looks at basic notions of learnability including Angluin's tell-tale condition and its variants for learning in the limit and for conservative learning. Furthermore, the more general criterion of partial learning is investigated. These notions are studied in the reverse mathematics context for uniformly and weakly represented families of languages. The results are stated in terms of axioms referring to domination and induction strength.
-
Mitotic classes in Inductive Inference
SIAM Journal on Computing, 2008. Co-Authors: Sanjay Jain, Frank Stephan. Abstract: For the natural notion of splitting classes into two disjoint subclasses via a recursive classifier working on texts, the question of how these splittings can look in the case of learnable classes is addressed. Here the strength of the classes is compared using the strong and weak reducibility from intrinsic complexity. It is shown that, for explanatorily learnable classes, the complete classes are also mitotic with respect to weak and strong reducibility, respectively. But there is a weakly complete class that cannot be split into two classes which are of the same complexity with respect to strong reducibility. It is shown that, for complete classes for behaviorally correct learning, one-half of each splitting is complete for this learning notion as well. Furthermore, it is shown that explanatorily learnable and recursively enumerable classes always have a splitting into two incomparable classes; this gives an Inductive Inference counterpart of the Sacks splitting theorem from recursion theory.
M. A. Fulk
-
Robust separations in Inductive Inference
Journal of Symbolic Logic, 2020. Co-Authors: M. A. Fulk. Abstract: Results in recursion-theoretic Inductive Inference have been criticized as depending on unrealistic self-referential examples. J. M. Bārzdiņš proposed a way of ruling out such examples, and conjectured that one of the earliest results of Inductive Inference theory would fall if his method were used. In this paper we refute Bārzdiņš' conjecture. We propose a new line of research examining robust separations; these are defined using a strengthening of Bārzdiņš' original idea. The preliminary results of the new line of research are presented, and the most important open problem is stated as a conjecture. Finally, we discuss the extension of this work from function learning to formal language learning.
-
FOCS - Robust separations in Inductive Inference
Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science, 1990. Co-Authors: M. A. Fulk. Abstract: Results in recursion-theoretic Inductive Inference have been criticized as depending on unrealistic self-referential examples. J. M. Bārzdiņš (1974) proposed a way of ruling out such examples and conjectured that one of the earliest results of Inductive Inference theory would fall if his method were used. The author refutes Bārzdiņš' conjecture and proposes a new line of research examining robust separations, which are defined using a strengthening of Bārzdiņš' original idea. Preliminary results are presented, and the most important open problem is stated as a conjecture. The extension of this work from function learning to formal language learning is discussed.
Raymond J. Dolan
-
Anatomical Segregation of Component Processes in an Inductive Inference Task
Journal of Cognitive Neuroscience, 2000. Co-Authors: Vinod Goel, Raymond J. Dolan. Abstract: Inductive Inference underlies much of human cognition. The essential component of induction is hypothesis selection based on some criterion of relevance. The purpose of this study was to determine the neural substrate of Inductive Inference, particularly hypothesis selection, using fMRI. Ten volunteers were shown stimuli consisting of novel animals under two task conditions, and asked to judge whether all the animals in the set were the same type of animal. In one condition, subjects were given a rule that specified the criteria for “same type of animal.” In the other condition, subjects had to infer the rule without instruction. The two conditions were further factored into easy and difficult components. Rule Inference was specifically associated with bilateral hippocampal activation while the task by difficulty interaction was associated with activation in right lateral orbital prefrontal cortex. We interpret the former in terms of semantic encoding of novel stimuli, and the latter in terms of hypothesis selection. Thus, we show an anatomical dissociation between task implementation and task difficulty that may correspond to a critical psychological distinction in the processes necessary for Inductive Inference.