Kolmogorov Complexity

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 5160 experts worldwide ranked by the ideXlab platform

Wolfgang Merkle - One of the best experts on this subject based on the ideXlab platform.

  • 06051 Abstracts Collection -- Kolmogorov Complexity and Applications
    2006
    Co-Authors: Marcus Hutter, Wolfgang Merkle, Paul M B Vitanyi
    Abstract:

    From 29.01.06 to 03.02.06, the Dagstuhl Seminar 06051 ``Kolmogorov Complexity and Applications'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

  • Kolmogorov complexity and the recursion theorem
    Transactions of the American Mathematical Society, 2011
    Co-Authors: Bjorn Kjoshanssen, Wolfgang Merkle, Frank Stephan
    Abstract:

    Several classes of diagonally nonrecursive (DNR) functions are characterized in terms of Kolmogorov complexity. In particular, a set of natural numbers A can wtt-compute a DNR function iff there is a nontrivial recursive lower bound on the Kolmogorov complexity of the initial segments of A. Furthermore, A can Turing compute a DNR function iff there is a nontrivial A-recursive lower bound on the Kolmogorov complexity of the initial segments of A. A is PA-complete, that is, A can compute a {0, 1}-valued DNR function, iff A can compute a function F such that F(n) is a string of length n of maximal C-complexity among the strings of length n. A ≥_T K iff A can compute a function F such that F(n) is a string of length n of maximal H-complexity among the strings of length n. Further characterizations for these classes are given. The existence of a DNR function in a Turing degree is equivalent to the failure of the Recursion Theorem for this degree; thus the provided results characterize, in terms of Kolmogorov complexity, those Turing degrees that no longer permit the use of the Recursion Theorem.
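
    The wtt-characterization stated above can be written out in standard notation; here "nontrivial recursive lower bound" is read, as in the STACS version of this work, as a recursive, nondecreasing, unbounded function, with C plain Kolmogorov complexity and A↾n the length-n prefix of A's characteristic sequence:

    ```latex
    % wtt-characterization from the abstract, in standard notation;
    % C is plain Kolmogorov complexity, A \upharpoonright n the length-n prefix of A.
    A \text{ wtt-computes a DNR function}
    \iff
    \exists f \text{ recursive, nondecreasing, unbounded}\;
    \forall n:\; C(A \upharpoonright n) \ge f(n)
    ```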

  • Reconciling data compression and Kolmogorov complexity
    International Colloquium on Automata Languages and Programming, 2007
    Co-Authors: Laurent Bienvenu, Wolfgang Merkle
    Abstract:

    While data compression and Kolmogorov complexity are both about effective coding of words, the two settings differ in the following respect. A compression algorithm, or compressor for short, has to map a word to a unique code for this word in one shot, whereas with the standard notions of Kolmogorov complexity a word has many different codes and the minimum code for a given word cannot be found effectively. This gap is bridged by introducing decidable Turing machines and a corresponding notion of Kolmogorov complexity, where compressors and suitably normalized decidable machines are essentially the same concept. Kolmogorov complexity defined via decidable machines yields characterizations, in terms of the initial-segment complexity of sequences, of the concepts of Martin-Löf randomness, Schnorr randomness, Kurtz randomness, and computable dimension. These results can also be reformulated in terms of time-bounded Kolmogorov complexity. Other applications of decidable machines are presented, such as a simplified proof of the Miller-Yu theorem (characterizing Martin-Löf randomness by the plain complexity of the initial segments) and a new characterization of computably traceable sequences via a natural lowness notion for decidable machines.
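
    As the abstract notes, a compressor maps each word to a unique code in one shot; for any fixed decompressor, the code length is then an effective upper bound (up to an additive constant) on the word's Kolmogorov complexity. A minimal sketch of this upper-bound view, using zlib as an off-the-shelf stand-in for the compressor rather than the paper's decidable machines:

    ```python
    import os
    import zlib

    def compressed_len(data: bytes) -> int:
        """Length of the zlib code for `data`: an effective upper bound,
        up to an additive constant, on its Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    highly_regular = b"ab" * 5000       # 10,000 bytes of obvious structure
    random_looking = os.urandom(10000)  # 10,000 incompressible-looking bytes

    # The regular word admits a short code; the random-looking one does not
    # (its zlib code is roughly as long as the word itself).
    print(compressed_len(highly_regular))
    print(compressed_len(random_looking))
    ```

    The converse direction fails effectively: the minimum code of a word under a universal machine cannot be computed, which is exactly the gap the decidable-machine approach is meant to bridge.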

  • STACS - Kolmogorov complexity and the recursion theorem
    STACS 2006, 2006
    Co-Authors: Bjørn Kjos-hanssen, Wolfgang Merkle, Frank Stephan
    Abstract:

    We introduce the concepts of complex and autocomplex sets, where a set A is complex if there is a recursive, nondecreasing and unbounded lower bound on the Kolmogorov complexity of the prefixes (of the characteristic sequence) of A, and autocomplex is defined likewise with recursive replaced by A-recursive. We observe that exactly the autocomplex sets allow one to compute words of given Kolmogorov complexity, and demonstrate that a set computes a diagonally nonrecursive (DNR) function if and only if the set is autocomplex. The class of sets that compute DNR functions is intensively studied in recursion theory and is known to coincide with the class of sets that compute fixed-point free functions. Consequently, the Recursion Theorem fails relative to a set if and only if the set is autocomplex; that is, we have a characterization of a fundamental concept of theoretical computer science in terms of Kolmogorov complexity. Moreover, we obtain that recursively enumerable sets are autocomplex if and only if they are complete, which yields an alternate proof of the well-known completeness criterion for recursively enumerable sets in terms of computing DNR functions. All results on autocomplex sets mentioned above extend to complex sets if the oracle computations are restricted to truth-table or weak truth-table computations; for example, a set is complex if and only if it wtt-computes a DNR function. Moreover, we obtain a set that is complex but does not compute a Martin-Löf random set, which gives a partial answer to the open problem of whether all sets of positive constructive Hausdorff dimension compute Martin-Löf random sets. Furthermore, the following questions are addressed: given n, how difficult is it to find a word of length n that (a) has at least prefix-free Kolmogorov complexity n, (b) has at least plain Kolmogorov complexity n, or (c) has the maximum possible prefix-free Kolmogorov complexity among all words of length n?
    All these questions are investigated with respect to the oracles needed to carry out this task, and it is shown that (a) is easier than (b) and (b) is easier than (c). In particular, we argue that for plain Kolmogorov complexity exactly the PA-complete sets compute incompressible words, while the class of sets that compute words of maximum complexity depends on the choice of the universal Turing machine, whereas for prefix-free Kolmogorov complexity exactly the complete sets allow one to compute words of maximum complexity.

S. Laplante - One of the best experts on this subject based on the ideXlab platform.

  • Kolmogorov complexity and combinatorial methods in communication complexity
    Theoretical Computer Science, 2011
    Co-Authors: Marc Kaplan, S. Laplante
    Abstract:

    We introduce a method based on Kolmogorov complexity to prove lower bounds on communication complexity. The intuition behind our technique is close to information-theoretic methods. We use Kolmogorov complexity for three different things: first, to give a general lower bound in terms of Kolmogorov mutual information; second, to prove an alternative to Yao's min-max principle based on Kolmogorov complexity; and finally, to identify hard inputs. We show that our method implies the rectangle and corruption bounds, known to be closely related to the subdistribution bound. We apply our method to the hidden matching problem, a relation introduced to prove an exponential gap between quantum and classical communication. We then show that our method generalizes the VC dimension and shatter coefficient lower bounds. Finally, we compare one-way communication and simultaneous communication in the case of distributional communication complexity and improve the previously known result.

  • CiE - Lower bounds using Kolmogorov complexity
    Logical Approaches to Computational Barriers, 2006
    Co-Authors: S. Laplante
    Abstract:

    In this paper, we survey a few recent applications of Kolmogorov complexity to lower bounds in several models of computation. We consider the KI complexity of Boolean functions, which gives the complexity of finding a bit where inputs differ, for pairs of inputs that map to different function values. This measure and variants thereof were shown to imply lower bounds for quantum and randomized decision tree complexity (or query complexity) [LM04]. We give a similar result for deterministic decision trees as well. It was later shown in [LLS05] that KI complexity gives lower bounds for circuit depth. We review those results here, emphasizing simple proofs using Kolmogorov complexity instead of the strongest possible lower bounds. We also present a Kolmogorov complexity alternative to Yao's min-max principle [LL04]. As an example, this is applied to randomized one-way communication complexity.

  • Resource-bounded Kolmogorov complexity revisited
    SIAM Journal on Computing, 2002
    Co-Authors: Harry Buhrman, Lance Fortnow, S. Laplante
    Abstract:

    We take a fresh look at CD complexity, where CD^t(x) is the size of the smallest program that distinguishes x from all other strings in time t(|x|). We also look at CND complexity, a new nondeterministic variant of CD complexity, and time-bounded Kolmogorov complexity, denoted by C complexity. We show several results relating time-bounded C, CD, and CND complexity and their applications to a variety of questions in computational complexity theory, including the following: showing how to approximate the size of a set using CD complexity without using the random string, as needed in Sipser's earlier proof of a similar result (we also give a new, simpler proof of Sipser's result); improving these bounds for almost all strings, using extractors; a proof of the Valiant-Vazirani lemma directly from Sipser's earlier CD lemma; a relativized lower bound for CND complexity; exact characterizations of equivalences between C, CD, and CND complexity; and showing that satisfying assignments of a satisfiable Boolean formula can be enumerated in time polynomial in the size of the output if and only if a unique assignment can be found quickly, which answers an open question of Papadimitriou. We also give a new Kolmogorov complexity-based proof that BPP ⊆ Σ₂^p, and new Kolmogorov complexity-based constructions of the following relativized worlds: there exists an infinite set in P with no sparse infinite NP subsets; EXP = NEXP, but there exists a NEXP machine whose accepting paths cannot be found in exponential time; and satisfying assignments cannot be found with nonadaptive queries to SAT.

  • Quantum Kolmogorov complexity
    Journal of Computer and System Sciences, 2001
    Co-Authors: A. Berthiaume, S. Laplante
    Abstract:

    In this paper we give a definition for quantum Kolmogorov complexity. In the classical setting, the Kolmogorov complexity of a string is the length of the shortest program that can produce this string as its output. It is a measure of the amount of innate randomness (or information) contained in the string. We define the quantum Kolmogorov complexity of a qubit string as the length of the shortest quantum input to a universal quantum Turing machine that produces the initial qubit string with high fidelity. The definition of P. Vitanyi (2001, IEEE Trans. Inform. Theory 47, 2464-2479) measures the amount of classical information, whereas we consider the amount of quantum information in a qubit string. We argue that our definition is a natural and accurate representation of the amount of quantum information contained in a quantum state. Recently, P. Gacs (2001, J. Phys. A: Mathematical and General 34, 6859-6880) also proposed two measures of quantum algorithmic entropy which are based on the existence of a universal semidensity matrix. The latter definitions are related to Vitanyi's and the one presented in this article, respectively.

  • Quantum Kolmogorov complexity
    Proceedings 15th Annual IEEE Conference on Computational Complexity, 2000
    Co-Authors: A. Berthiaume, S. Laplante
    Abstract:

    In this paper we give a definition for quantum Kolmogorov complexity. In the classical setting, the Kolmogorov complexity of a string is the length of the shortest program that can produce this string as its output. It is a measure of the amount of innate randomness (or information) contained in the string. We define the quantum Kolmogorov complexity of a qubit string as the length of the shortest quantum input to a universal quantum Turing machine that produces the initial qubit string with high fidelity. The definition of P. Vitanyi (2000) measures the amount of classical information, whereas we consider the amount of quantum information in a qubit string. We argue that our definition is natural and is an accurate representation of the amount of quantum information contained in a quantum state.
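
    The classical definition recalled in this abstract supports the standard counting (pigeonhole) argument for incompressible strings, a textbook fact rather than a result of this paper: there are 2^n binary strings of length n but only 2^n - 1 binary programs of length below n, so at every length some string has no description shorter than itself. In code:

    ```python
    def count_strings(n: int) -> int:
        """Number of binary strings of length exactly n."""
        return 2 ** n

    def count_shorter_programs(n: int) -> int:
        """Number of binary programs of length strictly below n:
        2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1."""
        return sum(2 ** k for k in range(n))

    # Pigeonhole: strictly fewer short programs than length-n strings,
    # so some string of length n has Kolmogorov complexity at least n.
    for n in (1, 8, 64):
        assert count_shorter_programs(n) == count_strings(n) - 1
    ```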

Vlatko Vedral - One of the best experts on this subject based on the ideXlab platform.

Caroline Rogers - One of the best experts on this subject based on the ideXlab platform.

Alexander Shen - One of the best experts on this subject based on the ideXlab platform.

  • Logical operations and Kolmogorov complexity
    Theoretical Computer Science, 2020
    Co-Authors: Alexander Shen, N.k. Vereshchagin
    Abstract:

    Conditional Kolmogorov complexity K(x|y) can be understood as the complexity of the problem “Y→X”, where X is the problem “construct x” and Y is the problem “construct y”. Other logical operations (∧, ∨, ↔) can be interpreted in a similar way, extending Kolmogorov's interpretation of intuitionistic logic and Kleene realizability. This leads to interesting problems in algorithmic information theory, some of which are discussed here.
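
    Two standard facts of algorithmic information theory (not specific results of the paper above) are consistent with this reading of K(x|y) as the complexity of the problem “Y→X”:

    ```latex
    % Solving "construct x" outright in particular solves "given y, construct x":
    K(x \mid y) \le K(x) + O(1)
    % Kolmogorov-Levin chain rule (symmetry of information),
    % up to logarithmic additive terms:
    K(x, y) = K(y) + K(x \mid y) + O(\log K(x, y))
    ```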

  • Automatic Kolmogorov complexity and normality revisited
    Fundamentals of Computation Theory, 2017
    Co-Authors: Alexander Shen
    Abstract:

    It is well known that normality (all factors of a given length appear in an infinite sequence with the same frequency) can be described as incompressibility via finite automata. Still, the statement and the proof of this result as given by Becher and Heiber (2013) in terms of “lossless finite-state compressors” do not follow the standard scheme of the Kolmogorov complexity definition (an automaton is used for compression, not decompression). We modify this approach to make it more similar to the traditional Kolmogorov complexity theory (and simpler) by explicitly defining the notion of automatic Kolmogorov complexity and using its simple properties. Other known notions (Shallit and Wang [15], Calude et al. [8]) of description complexity related to finite automata are discussed (see the last section).
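
    The frequency definition of normality quoted above (every length-k factor of a binary sequence appears with limiting frequency 2^-k) can be checked empirically on a finite prefix. A small sketch illustrating the definition only, not the automatic-complexity construction of the paper:

    ```python
    from collections import Counter
    from fractions import Fraction

    def block_frequencies(seq: str, k: int) -> dict:
        """Frequencies of all (overlapping) length-k factors of seq.
        In a normal binary sequence every length-k factor has
        limiting frequency 2**-k."""
        blocks = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        total = len(blocks)
        return {b: Fraction(n, total) for b, n in Counter(blocks).items()}

    # A periodic sequence is far from normal: only 2 of the 4 possible
    # length-2 factors over {0,1} ever occur.
    freqs = block_frequencies("01" * 1000, 2)
    print(sorted(freqs))  # only '01' and '10' appear, never '00' or '11'
    ```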

  • Automatic Kolmogorov complexity and normality revisited
    arXiv: Information Theory, 2017
    Co-Authors: Alexander Shen
    Abstract:

    It is well known that normality (all factors of a given length appear in an infinite sequence with the same frequency) can be described as incompressibility via finite automata. Still, the statement and proof of this result as given by Becher and Heiber in terms of "lossless finite-state compressors" do not follow the standard scheme of the Kolmogorov complexity definition (the automaton is used for compression, not decompression). We modify this approach to make it more similar to the traditional Kolmogorov complexity theory (and simpler) by explicitly defining the notion of automatic Kolmogorov complexity and using its simple properties. Other known notions (Shallit-Wang, Calude-Salomaa-Roblot) of description complexity related to finite automata are discussed (see the last section). As a byproduct, we obtain simple proofs of classical results about normality (equivalence of the definitions with aligned occurrences and with all occurrences, Wall's theorem saying that a normal number remains normal when multiplied by a rational number, and Agafonov's result saying that normality is preserved by automatic selection rules).

  • Around Kolmogorov complexity: basic notions and results
    arXiv: Information Theory, 2015
    Co-Authors: Alexander Shen
    Abstract:

    Algorithmic information theory studies description complexity and randomness and is now a well-known field of theoretical computer science and mathematical logic. There are several textbooks and monographs devoted to this theory (Calude, Information and Randomness: An Algorithmic Perspective, 2002; Downey and Hirschfeldt, Algorithmic Randomness and Complexity, 2010; Li and Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 2008; Nies, Computability and Randomness, 2009; Vereshchagin et al., Kolmogorov Complexity and Algorithmic Randomness, in Russian, 2013) where one can find a detailed exposition of many difficult results as well as historical references. However, it seems that a short survey of its basic notions and main results relating these notions to each other is missing. This chapter attempts to fill this gap and covers the basic notions of algorithmic information theory: Kolmogorov complexity (plain, conditional, prefix), Solomonoff's universal a priori probability, notions of randomness (Martin-Löf randomness, Mises-Church randomness), and effective Hausdorff dimension. We prove their basic properties (symmetry of information, the connection between a priori probability and prefix complexity, the criterion of randomness in terms of complexity, and the complexity characterization of effective dimension) and show some applications (the incompressibility method in computational complexity theory, incompleteness theorems). The chapter is based on the lecture notes of a course at Uppsala University given by the author (Shen, Algorithmic Information Theory and Kolmogorov Complexity, Technical Report, 2000).
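
    The "criterion of randomness in terms of complexity" mentioned in this survey is the Levin-Schnorr theorem; in standard notation, with K the prefix complexity and ω₁…ω_n the length-n prefix of the sequence ω:

    ```latex
    % Levin-Schnorr criterion (standard statement, with K the prefix complexity):
    \omega \text{ is Martin-L\"of random}
    \iff
    \exists c \;\forall n:\; K(\omega_1 \dots \omega_n) \ge n - c
    ```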