Usual Topology


The Experts below are selected from a list of 7281 Experts worldwide ranked by ideXlab platform

S R S Varadhan - One of the best experts on this subject based on the ideXlab platform.

  • brownian occupation measures compactness and large deviations
    Weierstrass Institute for Applied Analysis and Stochastics: Preprint 2193, 2015
    Co-Authors: Chiranjib Mukherjee, S R S Varadhan
    Abstract:

    In proving large deviation estimates, the lower bound for open sets and the upper bound for compact sets are essentially local estimates. On the other hand, the upper bound for closed sets is global, and compactness of the space or an exponential tightness estimate is needed to establish it. In dealing with the occupation measure $L_t(A)=\frac{1}{t}\int_0^t \mathbf{1}_A(W_s)\,ds$ of the $d$-dimensional Brownian motion, which is not positive recurrent, there is no possibility of exponential tightness. The space of probability distributions $\mathcal{M}_1(\mathbb{R}^d)$ can be compactified by replacing the Usual Topology of weak convergence by the vague topology, where the space is treated as the dual of continuous functions with compact support. This is essentially the one-point compactification of $\mathbb{R}^d$ by adding a point at $\infty$, which results in the compactification of $\mathcal{M}_1(\mathbb{R}^d)$ by allowing some mass to escape to the point at $\infty$. If one were to use only test functions that are continuous and vanish at $\infty$, then the compactification results in the space of sub-probability distributions $\mathcal{M}_{\le 1}(\mathbb{R}^d)$ by ignoring the mass at $\infty$. The main drawback of this compactification is that it ignores the underlying translation invariance. More explicitly, we may be interested in the space of equivalence classes of orbits $\widetilde{\mathcal{M}}_1=\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ under the action of the translation group $\mathbb{R}^d$ on $\mathcal{M}_1(\mathbb{R}^d)$. There are problems for which it is natural to compactify this space of orbits. We will provide such a compactification, prove a large deviation principle there and give an application to a relevant problem.
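    A minimal numerical sketch (not from the paper) of the occupation measure $L_t(A)$ for a ball $A$, using a simple Euler discretization of Brownian motion; the step size, horizon and seed are illustrative assumptions:

```python
import math
import random

def occupation_fraction(t=20.0, dt=0.01, d=3, radius=1.0, seed=0):
    """Monte Carlo estimate of L_t(A) = (1/t) * integral_0^t 1_A(W_s) ds
    for d-dimensional Brownian motion, with A the ball of the given radius.
    Euler discretization: W_{k+1} = W_k + sqrt(dt) * N(0, I_d)."""
    rng = random.Random(seed)
    w = [0.0] * d
    steps = int(t / dt)
    inside = 0
    for _ in range(steps):
        for i in range(d):
            w[i] += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if sum(x * x for x in w) <= radius * radius:
            inside += 1
    return inside / steps

# Fraction of time the path spends in the unit ball, for a short and a
# longer horizon.
frac_short = occupation_fraction(t=20.0)
frac_long = occupation_fraction(t=200.0)
```

    Since Brownian motion in dimension $d \ge 3$ is transient, the fraction of time spent in any fixed compact set tends to decay as $t$ grows; this is the mass escaping to the point at $\infty$ that the vague-topology compactification keeps track of.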

  • brownian occupation measures compactness and large deviations
    arXiv: Probability, 2014
    Co-Authors: Chiranjib Mukherjee, S R S Varadhan
    Abstract:

    In proving large deviation estimates, the lower bound for open sets and the upper bound for compact sets are essentially local estimates. On the other hand, the upper bound for closed sets is global, and compactness of the space or an exponential tightness estimate is needed to establish it. In dealing with the occupation measure $L_t(A)=\frac{1}{t}\int_0^t \mathbf{1}_A(W_s)\,ds$ of the $d$-dimensional Brownian motion, which is not positive recurrent, there is no possibility of exponential tightness. The space of probability distributions $\mathcal{M}_1(\mathbb{R}^d)$ can be compactified by replacing the Usual Topology of weak convergence by the vague topology, where the space is treated as the dual of continuous functions with compact support. This is essentially the one-point compactification of $\mathbb{R}^d$ by adding a point at $\infty$, which results in the compactification of $\mathcal{M}_1(\mathbb{R}^d)$ by allowing some mass to escape to the point at $\infty$. If one were to use only test functions that are continuous and vanish at $\infty$, then the compactification results in the space of sub-probability distributions $\mathcal{M}_{\le 1}(\mathbb{R}^d)$ by ignoring the mass at $\infty$. The main drawback of this compactification is that it ignores the underlying translation invariance. More explicitly, we may be interested in the space of equivalence classes of orbits $\widetilde{\mathcal{M}}_1=\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ under the action of the translation group $\mathbb{R}^d$ on $\mathcal{M}_1(\mathbb{R}^d)$. There are problems for which it is natural to compactify this space of orbits. We will provide such a compactification, prove a large deviation principle there and give an application to a relevant problem.

Jonni Virtema - One of the best experts on this subject based on the ideXlab platform.

  • Descriptive complexity of real computation and probabilistic independence logic
    Proceedings of the 35th Annual ACM IEEE Symposium on Logic in Computer Science, 2020
    Co-Authors: Miika Hannula, Juha Kontinen, Jan Van Den Bussche, Jonni Virtema
    Abstract:

    We introduce a novel variant of BSS machines called Separate Branching BSS machines (S-BSS for short) and develop a Fagin-type logical characterisation of the languages decidable in non-deterministic polynomial time by S-BSS machines. We show that NP on S-BSS machines is strictly included in NP on BSS machines and that every NP language on S-BSS machines is a countable union of closed sets in the Usual Topology of R^n. Moreover, we establish that on Boolean inputs, NP on S-BSS machines without real constants characterises a natural fragment of the complexity class ∃R (a class of problems polynomial-time reducible to the true existential theory of the reals) and hence lies between NP and PSPACE. Finally, we apply our results to determine the data complexity of probabilistic independence logic.

  • descriptive complexity of real computation and probabilistic independence logic
    Logic in Computer Science, 2020
    Co-Authors: Miika Hannula, Juha Kontinen, Jan Van Den Bussche, Jonni Virtema
    Abstract:

    We introduce a novel variant of BSS machines called Separate Branching BSS machines (S-BSS for short) and develop a Fagin-type logical characterisation of the languages decidable in nondeterministic polynomial time by S-BSS machines. We show that NP on S-BSS machines is strictly included in NP on BSS machines and that every NP language on S-BSS machines is a countable disjoint union of closed sets in the Usual Topology of R^n. Moreover, we establish that on Boolean inputs, NP on S-BSS machines without real constants characterises a natural fragment of the complexity class ∃R (a class of problems polynomial-time reducible to the true existential theory of the reals) and hence lies between NP and PSPACE. Finally, we apply our results to determine the data complexity of probabilistic independence logic.
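    As an elementary illustration of the closed-set claim (not the paper's construction): a machine whose acceptance condition is a non-strict polynomial inequality accepts a closed subset of R^n, since that set is the preimage of the closed ray (-inf, 0] under a continuous map. A hypothetical acceptance test:

```python
def accepts(x):
    """Hypothetical acceptance test in the style of a real-number branching
    machine: accept iff p(x) <= 0 for the polynomial p(x0, x1) = x0^2 + x1^2 - 1.
    The accepted set {x in R^2 : x0^2 + x1^2 <= 1} is closed in the usual
    topology of R^2, being the preimage of (-inf, 0] under a continuous p."""
    x0, x1 = x
    return x0 * x0 + x1 * x1 - 1.0 <= 0.0

# Boundary points are accepted (which is exactly closedness); exterior
# points are rejected.
on_boundary = accepts((1.0, 0.0))
outside = accepts((1.1, 0.0))
```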

Chiranjib Mukherjee - One of the best experts on this subject based on the ideXlab platform.

  • brownian occupation measures compactness and large deviations
    Weierstrass Institute for Applied Analysis and Stochastics: Preprint 2193, 2015
    Co-Authors: Chiranjib Mukherjee, S R S Varadhan
    Abstract:

    In proving large deviation estimates, the lower bound for open sets and the upper bound for compact sets are essentially local estimates. On the other hand, the upper bound for closed sets is global, and compactness of the space or an exponential tightness estimate is needed to establish it. In dealing with the occupation measure $L_t(A)=\frac{1}{t}\int_0^t \mathbf{1}_A(W_s)\,ds$ of the $d$-dimensional Brownian motion, which is not positive recurrent, there is no possibility of exponential tightness. The space of probability distributions $\mathcal{M}_1(\mathbb{R}^d)$ can be compactified by replacing the Usual Topology of weak convergence by the vague topology, where the space is treated as the dual of continuous functions with compact support. This is essentially the one-point compactification of $\mathbb{R}^d$ by adding a point at $\infty$, which results in the compactification of $\mathcal{M}_1(\mathbb{R}^d)$ by allowing some mass to escape to the point at $\infty$. If one were to use only test functions that are continuous and vanish at $\infty$, then the compactification results in the space of sub-probability distributions $\mathcal{M}_{\le 1}(\mathbb{R}^d)$ by ignoring the mass at $\infty$. The main drawback of this compactification is that it ignores the underlying translation invariance. More explicitly, we may be interested in the space of equivalence classes of orbits $\widetilde{\mathcal{M}}_1=\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ under the action of the translation group $\mathbb{R}^d$ on $\mathcal{M}_1(\mathbb{R}^d)$. There are problems for which it is natural to compactify this space of orbits. We will provide such a compactification, prove a large deviation principle there and give an application to a relevant problem.

  • brownian occupation measures compactness and large deviations
    arXiv: Probability, 2014
    Co-Authors: Chiranjib Mukherjee, S R S Varadhan
    Abstract:

    In proving large deviation estimates, the lower bound for open sets and the upper bound for compact sets are essentially local estimates. On the other hand, the upper bound for closed sets is global, and compactness of the space or an exponential tightness estimate is needed to establish it. In dealing with the occupation measure $L_t(A)=\frac{1}{t}\int_0^t \mathbf{1}_A(W_s)\,ds$ of the $d$-dimensional Brownian motion, which is not positive recurrent, there is no possibility of exponential tightness. The space of probability distributions $\mathcal{M}_1(\mathbb{R}^d)$ can be compactified by replacing the Usual Topology of weak convergence by the vague topology, where the space is treated as the dual of continuous functions with compact support. This is essentially the one-point compactification of $\mathbb{R}^d$ by adding a point at $\infty$, which results in the compactification of $\mathcal{M}_1(\mathbb{R}^d)$ by allowing some mass to escape to the point at $\infty$. If one were to use only test functions that are continuous and vanish at $\infty$, then the compactification results in the space of sub-probability distributions $\mathcal{M}_{\le 1}(\mathbb{R}^d)$ by ignoring the mass at $\infty$. The main drawback of this compactification is that it ignores the underlying translation invariance. More explicitly, we may be interested in the space of equivalence classes of orbits $\widetilde{\mathcal{M}}_1=\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ under the action of the translation group $\mathbb{R}^d$ on $\mathcal{M}_1(\mathbb{R}^d)$. There are problems for which it is natural to compactify this space of orbits. We will provide such a compactification, prove a large deviation principle there and give an application to a relevant problem.

Hansen H.h. - One of the best experts on this subject based on the ideXlab platform.

  • Well-definedness and observational equivalence for inductive-coinductive programs
    Oxford University Press (OUP), 2019
    Co-Authors: Basold Henning, Hansen H.h.
    Abstract:

    We define notions of well-definedness and observational equivalence for programs of mixed inductive and coinductive types. These notions are defined by means of test formulas which combine structural congruence for inductive types and modal logic for coinductive types. Tests also correspond to certain evaluation contexts. We define a program to be well-defined if it is strongly normalizing under all tests, and two programs to be observationally equivalent if they satisfy the same tests. We show that observational equivalence is sufficiently coarse to ensure that least and greatest fixed-point types are initial algebras and final coalgebras, respectively. This yields inductive and coinductive proof principles for reasoning about program behaviour. On the other hand, we argue that observational equivalence does not identify too many terms, by showing that tests induce a Topology that, on streams, coincides with the Usual Topology induced by the prefix metric. As one would expect, observational equivalence is, in general, undecidable, but in order to develop some practically useful heuristics we provide coinductive techniques for establishing observational normalization and observational equivalence, along with up-to techniques for enhancing these methods.

Helle Hvid Hansen - One of the best experts on this subject based on the ideXlab platform.

  • Well-definedness and observational equivalence for inductive-coinductive programs
    Journal of Logic and Computation, 2019
    Co-Authors: Henning Basold, Helle Hvid Hansen
    Abstract:

    We define notions of well-definedness and observational equivalence for programs of mixed inductive and coinductive types. These notions are defined by means of test formulas which combine structural congruence for inductive types and modal logic for coinductive types. Tests also correspond to certain evaluation contexts. We define a program to be well-defined if it is strongly normalizing under all tests, and two programs to be observationally equivalent if they satisfy the same tests. We show that observational equivalence is sufficiently coarse to ensure that least and greatest fixed-point types are initial algebras and final coalgebras, respectively. This yields inductive and coinductive proof principles for reasoning about program behaviour. On the other hand, we argue that observational equivalence does not identify too many terms, by showing that tests induce a Topology that, on streams, coincides with the Usual Topology induced by the prefix metric. As one would expect, observational equivalence is, in general, undecidable, but in order to develop some practically useful heuristics we provide coinductive techniques for establishing observational normalization and observational equivalence, along with up-to techniques for enhancing these methods.
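    A minimal sketch (not from the paper) of the prefix metric on streams that the abstract mentions: d(s, t) = 2^{-n}, where n is the first position at which the streams differ. The finite depth cutoff and the example stream generators are illustrative assumptions:

```python
from itertools import islice

def prefix_distance(s, t, depth=64):
    """Prefix metric on streams: d(s, t) = 2^{-n}, where n is the first index
    at which the streams disagree. Only the first `depth` items are observed,
    so streams agreeing up to that depth are reported as distance 0 (an
    observational approximation of true equality)."""
    a = islice(s, depth)
    b = islice(t, depth)
    for n, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return 2.0 ** (-n)
    return 0.0  # indistinguishable up to the observed depth

def ones():
    # the constant stream 1, 1, 1, ...
    while True:
        yield 1

def ones_then_zero():
    # agrees with ones() at positions 0 and 1, differs at position 2
    yield 1
    yield 1
    yield 0
    while True:
        yield 1
```

    Two streams are close in this metric exactly when they pass the same finite-depth tests, which is the sense in which the testing topology on streams coincides with the prefix-metric topology. For example, `prefix_distance(ones(), ones_then_zero())` is 2^{-2} = 0.25.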