Fisher Information


The Experts below are selected from a list of 46161 Experts worldwide ranked by ideXlab platform

Xiaoguang Wang - One of the best experts on this subject based on the ideXlab platform.

  • quantum Fisher Information in noninertial frames
    Physical Review A, 2014
    Co-Authors: Yao Yao, Xiaoguang Wang, Xing Xiao, C P Sun
    Abstract:

    We investigate the performance of quantum Fisher Information under the Unruh-Hawking effect, where one of the observers (e.g., Rob) is uniformly accelerated with respect to the other partners. In the context of relativistic quantum Information theory, we demonstrate that quantum Fisher Information, as an important measure of the Information content of quantum states, has a richer and subtler physical structure than entanglement or Bell nonlocality. In this work, we mainly focus on parameterized (and arbitrary) pure two-qubit states, in which a weight parameter $\theta$ and a phase parameter $\phi$ are naturally introduced. Intriguingly, we prove that $\mathcal{F}_\theta$ remains unchanged for both scalar and Dirac fields. Meanwhile, we observe that $\mathcal{F}_\phi$ decreases as the acceleration $r$ increases but remains finite in the limit of infinite acceleration. More importantly, our results show that the symmetry of $\mathcal{F}_\phi$ (with respect to $\theta=\pi/4$) is broken by the Unruh effect in both cases.
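As a concrete anchor for the inertial (zero-acceleration) limit, the phase QFI of the weight/phase-parameterized state can be checked numerically. The sketch below is ours, not the paper's code: it applies the standard pure-state formula $\mathcal{F}_\phi = 4(\langle\partial_\phi\psi|\partial_\phi\psi\rangle - |\langle\psi|\partial_\phi\psi\rangle|^2)$ to $|\psi\rangle = \cos\theta\,|00\rangle + e^{i\phi}\sin\theta\,|11\rangle$, recovering $\mathcal{F}_\phi = \sin^2 2\theta$, symmetric about $\theta = \pi/4$ (the symmetry the Unruh effect breaks).

```python
import numpy as np

# Hypothetical sketch (our parameterization, standard pure-state QFI formula):
# |psi> = cos(theta)|00> + e^{i phi} sin(theta)|11>,
# F_phi = 4 (<dpsi|dpsi> - |<psi|dpsi>|^2).
def qfi_phase(theta, phi):
    psi = np.zeros(4, dtype=complex)
    psi[0] = np.cos(theta)                            # |00> amplitude
    psi[3] = np.exp(1j * phi) * np.sin(theta)         # |11> amplitude
    dpsi = np.zeros(4, dtype=complex)
    dpsi[3] = 1j * np.exp(1j * phi) * np.sin(theta)   # d|psi>/dphi
    overlap = np.vdot(psi, dpsi)
    return (4 * (np.vdot(dpsi, dpsi) - abs(overlap) ** 2)).real

# In the inertial limit F_phi = sin^2(2 theta), maximal at theta = pi/4
# and symmetric about it.
print(qfi_phase(np.pi / 4, 0.3))  # close to 1.0
```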

  • quantum Fisher Information for density matrices with arbitrary ranks
    arXiv: Quantum Physics, 2013
    Co-Authors: Jing Liu, Wei Zhong, Xiaoxing Jing, Xiaoguang Wang
    Abstract:

    We provide a new expression for the quantum Fisher Information (QFI) of a general system. With this expression, the QFI of a non-full-rank density matrix is determined solely by its support, which is convenient for an infinite-dimensional density matrix with finite support. A matrix representation of the QFI is also given.
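The support-restricted character of the QFI can be illustrated with the standard eigendecomposition formula (our implementation, not the paper's new expression): $F = 2\sum_{i,j:\,\lambda_i+\lambda_j>0} |\langle i|\partial_\theta\rho|j\rangle|^2/(\lambda_i+\lambda_j)$, where only eigenvalue pairs on the support contribute.

```python
import numpy as np

# Sketch of the standard support-restricted QFI formula:
# F = 2 * sum_{i,j: l_i + l_j > 0} |<i|drho|j>|^2 / (l_i + l_j).
def qfi(rho, drho, tol=1e-12):
    lam, vecs = np.linalg.eigh(rho)
    m = vecs.conj().T @ drho @ vecs      # drho in the eigenbasis of rho
    f = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:                  # only the support contributes
                f += 2 * abs(m[i, j]) ** 2 / s
    return f

# Example: a pure qubit phase state; the formula reduces to the
# pure-state value F = 1 regardless of the zero eigenvalue.
phi = 0.7
psi = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
dpsi = np.array([0, 1j * np.exp(1j * phi)]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
drho = np.outer(dpsi, psi.conj()) + np.outer(psi, dpsi.conj())
print(qfi(rho, drho))  # close to 1.0
```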

  • Fisher Information under decoherence in bloch representation
    Physical Review A, 2013
    Co-Authors: Wei Zhong, Xiaoguang Wang, Zhe Sun, Franco Nori
    Abstract:

    The dynamics of two variants of quantum Fisher Information under decoherence is investigated from a geometrical point of view. We first derive explicit formulas for these two quantities for a single qubit in terms of the Bloch vector. We then obtain analytical results for them under three decoherence channels, each expressed as an affine transformation matrix. Using the hierarchy-equation method, we numerically study the dynamics of both quantities in a dissipative model and compare the numerical results with the analytical ones obtained under the rotating-wave approximation. We further express the two quantities in terms of the Bloch vector for a qudit by expanding the density matrix and Hermitian operators in a common set of generators of the Lie algebra $\text{su}(d)$. By calculating the dynamical quantum Fisher Information, we find that collisional dephasing significantly diminishes the precision of the phase parameter in Ramsey interferometry.
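For the single-qubit case, the Bloch-vector form of the QFI is compact enough to sketch directly (a minimal illustration using the known single-qubit formula; the example state and channel are ours): for $\rho = (I + \mathbf{r}\cdot\boldsymbol{\sigma})/2$, $F = |\partial_\theta\mathbf{r}|^2 + (\mathbf{r}\cdot\partial_\theta\mathbf{r})^2/(1-|\mathbf{r}|^2)$ for mixed states, reducing to $|\partial_\theta\mathbf{r}|^2$ on the sphere.

```python
import numpy as np

# Minimal sketch of the Bloch-vector form of the single-qubit QFI:
#   F = |dr|^2 + (r.dr)^2 / (1 - |r|^2)   for mixed states (|r| < 1),
#   F = |dr|^2                            for pure states  (|r| = 1).
def qfi_bloch(r, dr, tol=1e-12):
    r, dr = np.asarray(r, float), np.asarray(dr, float)
    r2 = r @ r
    if 1.0 - r2 < tol:          # pure state on the Bloch sphere
        return dr @ dr
    return dr @ dr + (r @ dr) ** 2 / (1.0 - r2)

# Phase rotation of a partially mixed state (e.g. after depolarizing noise):
# r(phi) = p (cos phi, sin phi, 0), so r.dr = 0 and F = p^2.
p, phi = 0.8, 0.3
r  = p * np.array([np.cos(phi), np.sin(phi), 0.0])
dr = p * np.array([-np.sin(phi), np.cos(phi), 0.0])
print(qfi_bloch(r, dr))  # approximately 0.64 (= p**2)
```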

  • quantum Fisher Information of the greenberg horne zeilinger state in decoherence channels
    Physical Review A, 2011
    Co-Authors: Yixiao Huang, Xiaoguang Wang, C P Sun
    Abstract:

    Quantum Fisher Information of a parameter characterizes the sensitivity of the state with respect to changes of the parameter. In this article, we study the quantum Fisher Information of a state with respect to SU(2) rotations under three decoherence channels: the amplitude-damping, phase-damping, and depolarizing channels. The initial state is chosen to be a Greenberger-Horne-Zeilinger state of which the phase sensitivity can achieve the Heisenberg limit. By using the Kraus operator representation, the quantum Fisher Information is obtained analytically. We observe the decay and sudden change of the quantum Fisher Information in all three channels.
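The noiseless baseline quoted here, namely that the GHZ state saturates the Heisenberg limit, is easy to verify numerically (our check, not the paper's code): for a pure probe under the collective rotation $e^{-i\phi J_z}$, $F = 4\,\mathrm{Var}(J_z)$, which equals $N^2$ for the $N$-qubit GHZ state.

```python
import numpy as np

# Sketch: pure-state QFI under exp(-i phi Jz) is F = 4 Var(Jz).
# For the N-qubit GHZ state this gives the Heisenberg limit F = N^2.
def qfi_pure_jz(psi, n):
    # Jz eigenvalue of each computational basis state |b_1 ... b_n>
    jz = np.array([sum(1 - 2 * int(b) for b in format(k, f"0{n}b")) / 2
                   for k in range(2 ** n)])
    p = np.abs(psi) ** 2        # Jz is diagonal in this basis
    mean = p @ jz
    return 4 * (p @ jz ** 2 - mean ** 2)

n = 4
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|0...0> + |1...1>)/sqrt(2)
print(qfi_pure_jz(ghz, n))  # approximately 16 = n**2, the Heisenberg limit
```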

  • Fisher Information and spin squeezing in the lipkin meshkov glick model
    Physical Review A, 2009
    Co-Authors: Xiaoguang Wang
    Abstract:

    Fisher Information, which lies at the heart of parameter estimation theory, was recently found to have a close relation with multipartite entanglement [L. Pezz\'e and A. Smerzi, Phys. Rev. Lett. 102, 100401 (2009)]. We use Fisher Information to distinguish and characterize the behavior of the ground state of the Lipkin-Meshkov-Glick model, which displays a second-order quantum phase transition between the broken and symmetric phases. We find that the parameter sensitivity of the system attains the Heisenberg limit in the broken phase, while it stays around the shot-noise limit in the symmetric phase. Based on parameter estimation, Fisher Information thus provides a useful probe of the quantum phase transition.

Ryo Karakida - One of the best experts on this subject based on the ideXlab platform.

  • Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
    arXiv: Machine Learning, 2020
    Co-Authors: Ryo Karakida, Kazuki Osawa
    Abstract:

    Natural Gradient Descent (NGD) helps to accelerate the convergence of gradient-descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher Information converge sufficiently fast in practice. Nevertheless, it remains unclear from a theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher Information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In function space, the training dynamics with approximate Fisher Information are identical to those with exact Fisher Information, and they converge quickly. This fast convergence holds for layer-wise approximations: for instance, the block-diagonal approximation, in which each block corresponds to a layer, as well as the block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these approximations yield an isotropic gradient in function space, which plays a fundamental role in achieving the same convergence properties in training. Thus, the present study provides a novel and unified theoretical foundation for understanding NGD methods in deep learning.
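The exact-versus-approximate contrast can be seen in miniature on a model where the Fisher matrix is available in closed form (a toy illustration of ours, far simpler than the paper's infinite-width analysis): on a linear least-squares model, exact NGD is a Newton step and converges at once, while a crude diagonal approximation of the Fisher still reaches the same global minimum, just over more iterations.

```python
import numpy as np

# Toy contrast between exact NGD and a diagonal Fisher approximation on a
# noiseless linear least-squares problem (our example, not the paper's setup).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def grad(w):
    return X.T @ (X @ w - y) / len(y)   # gradient of 0.5 * mean squared error

F_exact = X.T @ X / len(y)              # Fisher = Gauss-Newton matrix here
F_diag = np.diag(np.diag(F_exact))      # crude diagonal approximation

# One exact-NGD step solves the quadratic problem outright.
w_exact = np.zeros(3) - np.linalg.solve(F_exact, grad(np.zeros(3)))

# Diagonal-Fisher NGD needs many damped steps but finds the same minimum.
w_approx = np.zeros(3)
for _ in range(200):
    w_approx -= 0.5 * np.linalg.solve(F_diag, grad(w_approx))

print(np.round(w_exact, 6), np.round(w_approx, 4))
```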

  • Fisher Information and natural gradient learning in random deep networks
    International Conference on Artificial Intelligence and Statistics, 2019
    Co-Authors: Shun-ichi Amari, Ryo Karakida, Masafumi Oizumi
    Abstract:

    The parameter space of a deep neural network is a Riemannian manifold, where the metric is defined by the Fisher Information matrix. The natural gradient method uses the steepest-descent direction on this Riemannian manifold, but it requires inversion of the Fisher Information matrix, which is practically difficult in large networks. The present paper uses a statistical neurodynamical method to reveal the properties of the Fisher Information matrix in a network with random connections. We prove that the Fisher Information matrix is unit-wise block diagonal up to small off-block-diagonal terms. We further prove that the Fisher Information matrix of a single unit has a simple reduced form: the sum of a diagonal matrix and a rank-2 matrix of weight-bias correlations. We obtain the inverse of the Fisher Information matrix explicitly, and thus an explicit form of the approximate natural gradient that does not rely on matrix inversion.
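Why a "diagonal plus rank-2" structure admits an explicit inverse can be sketched with the Woodbury identity (our notation, not the paper's exact derivation): writing the block as $A = D + uv^\top + vu^\top = D + UCU^\top$ with $U = [u, v]$ and $C = \begin{pmatrix}0&1\\1&0\end{pmatrix}$, the inverse costs only a $2\times 2$ solve.

```python
import numpy as np

# Inverse of A = D + u v^T + v u^T (diagonal plus rank-2) via Woodbury:
# A^{-1} = D^{-1} - D^{-1} U (C^{-1} + U^T D^{-1} U)^{-1} U^T D^{-1},
# with U = [u, v] and C = [[0, 1], [1, 0]] (note C^{-1} = C).
def inv_diag_plus_rank2(d, u, v):
    Dinv = 1.0 / d
    U = np.column_stack([u, v])
    C = np.array([[0.0, 1.0], [1.0, 0.0]])
    K = np.linalg.inv(np.linalg.inv(C) + (U.T * Dinv) @ U)  # 2x2 solve only
    return np.diag(Dinv) - (Dinv[:, None] * U) @ K @ (U.T * Dinv)

rng = np.random.default_rng(1)
d = rng.uniform(1.0, 2.0, size=5)      # positive diagonal part
u, v = rng.normal(size=5), rng.normal(size=5)
A = np.diag(d) + np.outer(u, v) + np.outer(v, u)
err = np.max(np.abs(inv_diag_plus_rank2(d, u, v) @ A - np.eye(5)))
print(err < 1e-10)  # Woodbury inverse matches the direct inverse
```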

Kazuki Osawa - One of the best experts on this subject based on the ideXlab platform.

  • Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
    arXiv: Machine Learning, 2020
    Co-Authors: Ryo Karakida, Kazuki Osawa
    Abstract:

    Natural Gradient Descent (NGD) helps to accelerate the convergence of gradient-descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher Information converge sufficiently fast in practice. Nevertheless, it remains unclear from a theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher Information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In function space, the training dynamics with approximate Fisher Information are identical to those with exact Fisher Information, and they converge quickly. This fast convergence holds for layer-wise approximations: for instance, the block-diagonal approximation, in which each block corresponds to a layer, as well as the block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these approximations yield an isotropic gradient in function space, which plays a fundamental role in achieving the same convergence properties in training. Thus, the present study provides a novel and unified theoretical foundation for understanding NGD methods in deep learning.

Haidong Yuan - One of the best experts on this subject based on the ideXlab platform.

  • maximal quantum Fisher Information matrix
    New Journal of Physics, 2017
    Co-Authors: Yu Chen, Haidong Yuan
    Abstract:

    We study the existence of the maximal quantum Fisher Information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher Information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher Information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit.
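For pure states, the multi-parameter QFI matrix that such bounds are stated against has a simple closed form (a standard formula, sketched here by us; it is not the paper's maximal-QFI construction): $F_{jk} = 4\,\mathrm{Re}\left(\langle\partial_j\psi|\partial_k\psi\rangle - \langle\partial_j\psi|\psi\rangle\langle\psi|\partial_k\psi\rangle\right)$.

```python
import numpy as np

# Standard multi-parameter QFI matrix for a pure state:
# F_jk = 4 Re( <d_j psi|d_k psi> - <d_j psi|psi><psi|d_k psi> ).
def qfi_matrix_pure(psi, dpsis):
    n = len(dpsis)
    F = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            t1 = np.vdot(dpsis[j], dpsis[k])
            t2 = np.vdot(dpsis[j], psi) * np.vdot(psi, dpsis[k])
            F[j, k] = 4 * (t1 - t2).real
    return F

# Two-parameter qubit example: |psi> = (cos a, e^{i b} sin a).
a, b = 0.4, 1.1
psi = np.array([np.cos(a), np.exp(1j * b) * np.sin(a)])
dpsi_a = np.array([-np.sin(a), np.exp(1j * b) * np.cos(a)])
dpsi_b = np.array([0.0, 1j * np.exp(1j * b) * np.sin(a)])
F = qfi_matrix_pure(psi, [dpsi_a, dpsi_b])
print(F)  # here F_aa = 4, F_bb = sin^2(2a), F_ab = 0
```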

  • maximal quantum Fisher Information matrix
    arXiv: Quantum Physics, 2017
    Co-Authors: Yu Chen, Haidong Yuan
    Abstract:

    We study the existence of the maximal quantum Fisher Information matrix in multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher Information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher Information matrix by deriving various tradeoff relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit.

Shun-ichi Amari - One of the best experts on this subject based on the ideXlab platform.

  • Fisher Information and natural gradient learning in random deep networks
    International Conference on Artificial Intelligence and Statistics, 2019
    Co-Authors: Shun-ichi Amari, Ryo Karakida, Masafumi Oizumi
    Abstract:

    The parameter space of a deep neural network is a Riemannian manifold, where the metric is defined by the Fisher Information matrix. The natural gradient method uses the steepest-descent direction on this Riemannian manifold, but it requires inversion of the Fisher Information matrix, which is practically difficult in large networks. The present paper uses a statistical neurodynamical method to reveal the properties of the Fisher Information matrix in a network with random connections. We prove that the Fisher Information matrix is unit-wise block diagonal up to small off-block-diagonal terms. We further prove that the Fisher Information matrix of a single unit has a simple reduced form: the sum of a diagonal matrix and a rank-2 matrix of weight-bias correlations. We obtain the inverse of the Fisher Information matrix explicitly, and thus an explicit form of the approximate natural gradient that does not rely on matrix inversion.

  • Fisher Information for spike based population decoding
    Physical Review Letters, 2006
    Co-Authors: Taro Toyoizumi, Kazuyuki Aihara, Shun-ichi Amari
    Abstract:

    Information from the environment is encoded in the noisy activity of a population of neurons, and reading this neural code is a fundamental problem in neuroscience. In particular, attention has been paid to the role of precise spike timing, in addition to spike-count Information [1,2], and to the role of correlations between neurons [3–5] in Information coding. However, a large amount of data is required to evaluate the amount of Information in a real population of neurons [1,2]. Moreover, it is difficult with this approach to understand the role of neuronal parameters, such as recurrent connectivity, in Information coding. In this study, we calculated the Fisher Information of a network of spiking neurons, which limits the accuracy of any unbiased estimate of a stimulus [6,7]. In contrast to the literature, where Fisher Information is evaluated based on firing rates (rate-based Fisher Information) [3–8], we evaluated the Fisher Information when the individual spike timings of all the neurons are available (spike-based Fisher Information). Interestingly, under the assumption of independent noise, the spike-based Information has a simple analytical form. We estimated the amount of Information contained in the precise timing of spikes by comparing the Information in the two cases, and studied the role of synaptic connectivity in stimulus estimation. We also calculated the spike-based Information for a spatiotemporal input, where rate decoding fails, and derived the optimal recurrent connectivity for spike-based Information representation. Stochastic firing neuron model. From the noisy spik
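The rate-based baseline against which the spike-based quantity is compared has a textbook form for independent Poisson neurons (a sketch of ours with hypothetical Gaussian tuning curves, simpler than the paper's spike-based expression): $F(\theta) = \sum_i f_i'(\theta)^2 / f_i(\theta)$, and $1/\sqrt{F}$ is the Cramer-Rao bound on the standard deviation of any unbiased estimator.

```python
import numpy as np

# Rate-based Fisher Information of an independent Poisson population with
# Gaussian tuning curves (hypothetical parameters, for illustration only):
# F(theta) = sum_i f_i'(theta)^2 / f_i(theta).
def fisher_poisson(theta, centers, fmax=20.0, width=0.5):
    f = fmax * np.exp(-(theta - centers) ** 2 / (2 * width ** 2))  # rates
    df = f * (centers - theta) / width ** 2                        # d f / d theta
    return np.sum(df ** 2 / f)

centers = np.linspace(-2, 2, 21)    # preferred stimuli of 21 neurons
F = fisher_poisson(0.1, centers)
print(F, 1 / np.sqrt(F))            # Cramer-Rao bound on estimator std dev
```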