Efficient Representation

L M Duan - One of the best experts on this subject based on the ideXlab platform.

  • Efficient representation of quantum many-body states with deep neural networks
    Nature Communications, 2017
    Co-Authors: Xun Gao, L M Duan
    Abstract:

    Part of the challenge of quantum many-body problems comes from the difficulty of representing large-scale quantum states, which in general requires an exponentially large number of parameters. Neural networks provide a powerful tool for representing quantum many-body states. An important open question is what characterizes the representational power of deep versus shallow neural networks, which is of fundamental interest given the popularity of deep learning methods. Here, we prove that, assuming a widely believed computational complexity conjecture, a deep neural network can efficiently represent most physical states, including the ground states of many-body Hamiltonians and states generated by quantum dynamics, while a shallow network representation with a restricted Boltzmann machine cannot efficiently represent some of those states.

    One of the challenges in studies of quantum many-body physics is finding an efficient way to record large system wavefunctions. Here the authors analyze the capabilities of recently proposed neural network representations for storing physically accessible quantum states.
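
    The shallow ansatz named in the abstract, the restricted Boltzmann machine (RBM), is concrete enough to sketch. The following is a minimal illustration (not code from the paper; the parameter shapes and toy values are assumptions) of how an RBM stores a wavefunction over 2^n spin configurations using only n + m + n*m parameters, with the m hidden units summed out analytically:

    import numpy as np

    def rbm_amplitude(s, a, b, W):
        """Unnormalized RBM amplitude for a spin configuration s in {-1, +1}^n:
        psi(s) = exp(a . s) * prod_j 2*cosh(b_j + sum_i W_ij * s_i),
        i.e., the hidden units have been traced out analytically."""
        theta = b + W.T @ s                      # effective field on each hidden unit
        return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

    # Toy instance: 4 visible spins, 8 hidden units, random complex parameters.
    rng = np.random.default_rng(0)
    n, m = 4, 8
    a = rng.normal(size=n) + 1j * rng.normal(size=n)
    b = rng.normal(size=m) + 1j * rng.normal(size=m)
    W = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    print(rbm_amplitude(np.array([1, -1, 1, 1]), a, b, W))

    The distinction the abstract draws is between this single hidden layer and deeper stacks of hidden layers: for some physical states only the deeper networks can keep the parameter count polynomial.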

Borut Žalik - One of the best experts on this subject based on the ideXlab platform.

  • Efficient representation of geometric tree models with level of detail using compressed 3D chain code
    IEEE Transactions on Visualization and Computer Graphics, 2020
    Co-Authors: Damjan Strnad, Stefan Kohek, Andrej Nerat, Borut Žalik
    Abstract:

    In this paper, we present a method for space-efficient representation of geometric tree models, which are provided as skeletons with radii attached to individual branch segments. The proposed approach uses a new differential 3D chain code to encode orientation changes between consecutive branch segments, which allows optimizing chain code generation for increased compressibility while maintaining control over the model reconstruction error. The presented method is the first to encode the complete branching geometry, including the branch radii, and it provides level-of-detail construction directly from the chain code. We demonstrate that, by using interpolative encoding of the resulting tree descriptors and radii sequences, the storage requirements for the geometric description of a mixed all-aged forest can be reduced to less than 15 percent of its raw size while preserving the structural fidelity of the tree models.
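
    The paper's exact encoder is not reproduced here, but the core idea of a differential 3D chain code can be sketched. In this illustration the conventions are assumptions: a classic 26-direction alphabet, plain index differences, and no compressibility optimization or radii encoding. Each branch segment's direction is quantized to the nearest alphabet symbol, and only the changes between consecutive symbols are stored:

    import numpy as np

    # The 26 grid directions of a classic 3D chain code (all nonzero offsets
    # in {-1, 0, 1}^3), normalized to unit vectors.
    DIRS = np.array([(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1)
                     for z in (-1, 0, 1) if (x, y, z) != (0, 0, 0)], float)
    DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

    def chain_code(points):
        """Absolute chain code: index of the best-aligned direction per segment."""
        segs = np.diff(np.asarray(points, float), axis=0)
        segs /= np.linalg.norm(segs, axis=1, keepdims=True)
        return np.argmax(segs @ DIRS.T, axis=1)

    def differential(code):
        """Differential chain code: first symbol absolute, then index changes.
        Nearly straight branch runs become runs of zeros, which is what makes
        the sequence highly compressible by an entropy coder."""
        code = np.asarray(code)
        return np.concatenate([code[:1], np.diff(code)])

    skeleton = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 1, 0), (4, 2, 1)]
    print(differential(chain_code(skeleton)))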

Dario Prandi - One of the best experts on this subject based on the ideXlab platform.

  • Visual illusions via neural dynamics: Wilson-Cowan-type models and the efficient representation principle
    Journal of Neurophysiology, 2020
    Co-Authors: Marcelo Bertalmío, Luca Calatroni, Valentina Franceschi, Benedetta Franceschiello, Alexander Gomez-Villa, Dario Prandi
    Abstract:

    We show that the Wilson-Cowan equations can reproduce a number of brightness and orientation-dependent illusions. We then formally prove that there cannot be an energy functional that the Wilson-Cowan equations minimize, but that a slight modification makes them variational and consistent with the efficient representation principle, yielding a better reproduction of visual illusions.

  • Visual illusions via neural dynamics: Wilson-Cowan-type models and the efficient representation principle
    arXiv: Neurons and Cognition, 2019
    Co-Authors: Marcelo Bertalmío, Alexander Gomez-Villa, Luca Calatroni, Valentina Franceschi, Benedetta Franceschiello, Dario Prandi
    Abstract:

    In this work we aim to reproduce supra-threshold perception phenomena, specifically visual illusions, with Wilson-Cowan-type models of neuronal dynamics. We find that this is indeed possible, but that the ability to replicate visual illusions is related to how well the neural activity equations comply with the efficient representation principle. Our first contribution is to show that the Wilson-Cowan equations can reproduce a number of brightness and orientation-dependent illusions, and that the latter type of illusion requires, as expected, that the neuronal dynamics equations explicitly account for orientation. We then formally prove that there cannot be an energy functional that the Wilson-Cowan equations are minimizing, but that a slight modification makes them variational and yields a model that is consistent with the efficient representation principle. Finally, we show that this new model provides a better reproduction of visual illusions than the original Wilson-Cowan formulation.
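
    The Wilson-Cowan dynamics referred to above can be illustrated with a minimal single-population, one-dimensional rate model; this is a generic sketch with assumed kernel and parameter values, not the authors' orientation-dependent model:

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def wilson_cowan(h, w, beta=1.0, dt=0.01, steps=2000):
        """Forward-Euler integration of a Wilson-Cowan-type rate equation
        da/dt = -beta * a + w (*) sigmoid(a) + h, where (*) is convolution,
        h is the external input (a 1-D stimulus) and w a lateral kernel."""
        a = np.zeros_like(h)
        for _ in range(steps):
            lateral = np.convolve(sigmoid(a), w, mode="same")
            a += dt * (-beta * a + lateral + h)
        return a

    # Step stimulus with an assumed center-surround (difference-of-Gaussians)
    # lateral kernel: narrow excitation minus broader inhibition.
    x = np.linspace(-1.0, 1.0, 200)
    h = (x > 0).astype(float)
    g = lambda s: np.exp(-np.arange(-10, 11) ** 2 / (2 * s ** 2))
    w = g(2.0) / g(2.0).sum() - g(6.0) / g(6.0).sum()
    activity = wilson_cowan(h, w)   # over- and undershoots appear near the edge

    With such a kernel the steady state over- and undershoots near the luminance step, qualitatively the kind of brightness induction effect discussed in the abstract.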

Damjan Strnad - One of the best experts on this subject based on the ideXlab platform.

  • Efficient representation of geometric tree models with level of detail using compressed 3D chain code
    IEEE Transactions on Visualization and Computer Graphics, 2020
    Co-Authors: Damjan Strnad, Stefan Kohek, Andrej Nerat, Borut Žalik
    Abstract:

    In this paper, we present a method for space-efficient representation of geometric tree models, which are provided as skeletons with radii attached to individual branch segments. The proposed approach uses a new differential 3D chain code to encode orientation changes between consecutive branch segments, which allows optimizing chain code generation for increased compressibility while maintaining control over the model reconstruction error. The presented method is the first to encode the complete branching geometry, including the branch radii, and it provides level-of-detail construction directly from the chain code. We demonstrate that, by using interpolative encoding of the resulting tree descriptors and radii sequences, the storage requirements for the geometric description of a mixed all-aged forest can be reduced to less than 15 percent of its raw size while preserving the structural fidelity of the tree models.

Wojciech Samek - One of the best experts on this subject based on the ideXlab platform.

  • Compact and computationally efficient representation of deep neural networks
    IEEE Transactions on Neural Networks and Learning Systems, 2020
    Co-Authors: Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
    Abstract:

    At the core of any inference procedure in deep neural networks are dot product operations, which are the components that require the most computational resources. For instance, deep neural networks such as VGG-16 require up to 15 giga-operations to perform the dot products in a single forward pass, which results in significant energy consumption and thus limits their use in resource-limited environments, e.g., on embedded devices or smartphones. One common approach to reducing the complexity of inference is to prune and quantize the weight matrices of the neural network. Usually, this results in matrices whose entropy values are low, as measured relative to the empirical probability mass distribution of their elements. To exploit such matrices efficiently, one usually relies on, inter alia, sparse matrix representations. However, most common matrix storage formats make strong statistical assumptions about the distribution of the elements and therefore cannot efficiently represent the entire set of matrices that exhibit low-entropy statistics (and thus the entire set of compressed neural network weight matrices). In this paper, we address this issue and present new efficient representations for matrices with low-entropy statistics. Like sparse matrix data structures, these formats exploit the statistical properties of the data in order to reduce size and execution complexity. Moreover, we show that the proposed data structures can not only be regarded as a generalization of sparse formats, but are also more energy- and time-efficient under practically relevant assumptions. Finally, we test the storage requirements and execution performance of the proposed formats on compressed neural networks and compare them to dense and sparse representations. We experimentally show that we are able to attain up to 42x compression ratios, 5x speedups, and 90x energy savings when we losslessly convert state-of-the-art networks, such as AlexNet, VGG-16, ResNet152, and DenseNet, into the new data structures and benchmark their respective dot product operations.
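
    The low-entropy formats proposed in the paper are not reproduced here; as the baseline they generalize, the sketch below shows the classic compressed sparse row (CSR) format, whose dot product cost scales with the number of stored entries rather than with the dense matrix size. The paper's formats extend this idea to matrices whose entries are merely repetitive (low entropy) rather than strictly zero:

    import numpy as np

    def to_csr(M):
        """Compressed sparse row: keep only nonzero values, their column
        indices, and per-row extents."""
        values, cols, rowptr = [], [], [0]
        for row in M:
            nz = np.nonzero(row)[0]
            values.extend(row[nz])
            cols.extend(nz)
            rowptr.append(len(values))
        return np.array(values), np.array(cols, dtype=int), np.array(rowptr)

    def csr_matvec(values, cols, rowptr, x):
        """Dot product whose work is proportional to the stored entries only."""
        y = np.zeros(len(rowptr) - 1)
        for i in range(len(y)):
            lo, hi = rowptr[i], rowptr[i + 1]
            y[i] = values[lo:hi] @ x[cols[lo:hi]]
        return y

    # A pruned/quantized-looking toy matrix: mostly zeros, few distinct values.
    M = np.array([[0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, -1.0, 0.0]])
    x = np.ones(4)
    print(csr_matvec(*to_csr(M), x), M @ x)   # the two results match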

  • Compact and computationally efficient representation of deep neural networks
    arXiv: Learning, 2018
    Co-Authors: Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
    Abstract:

    At the core of any inference procedure in deep neural networks are dot product operations, which are the components that require the most computational resources. A common approach to reducing the cost of inference is to reduce its memory complexity by lowering the entropy of the weight matrices of the neural network, e.g., by pruning and quantizing their elements. However, the quantized weight matrices are then usually represented either by a dense or a sparse matrix storage format, whose associated dot product complexity is not bounded by the entropy of the matrix. This means that the associated inference complexity ultimately depends on the implicit statistical assumptions these matrix representations make about the weight distribution, which can be suboptimal in many cases. In this paper we address this issue and present new efficient representations for matrices with low-entropy statistics. These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, which implies that they are guaranteed to become more efficient as the entropy of the matrix is reduced. In our experiments we show that performing the dot product under these new matrix formats can indeed be more energy- and time-efficient under practically relevant assumptions. For instance, we are able to attain up to 42x compression ratios, 5x speedups, and 90x energy savings when we losslessly convert the weight matrices of state-of-the-art networks such as AlexNet, VGG-16, ResNet152, and DenseNet into the new matrix formats and benchmark their respective dot product operations.
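
    The entropy bound described above can be made concrete with a toy measurement (illustrative values, not the paper's benchmark): compare the empirical entropy of a pruned and quantized weight matrix against the 32 bits per entry that a dense float32 format spends:

    import numpy as np

    def empirical_entropy(M):
        """Shannon entropy in bits per entry of the empirical distribution
        of the matrix's values; pruning and quantization drive it down."""
        _, counts = np.unique(M, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256)).astype(np.float32)
    W[np.abs(W) < np.quantile(np.abs(W), 0.9)] = 0.0   # prune 90% of weights
    W = np.round(W * 4) / 4                            # quantize to a coarse grid

    H = empirical_entropy(W)
    print(f"{H:.2f} bits/entry vs 32 bits/entry dense")
    print(f"entropy-coded lower bound ~{H * W.size / 8 / 1024:.0f} KiB "
          f"vs {W.nbytes / 1024:.0f} KiB dense")

    An entropy coder can approach H bits per entry, so any format whose size is bounded by the entropy is guaranteed to shrink as pruning and quantization become more aggressive, which is the guarantee the abstract claims for the proposed formats.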