Lossy Compression Scheme

The experts below are selected from a list of 3,159 experts worldwide, ranked by the ideXlab platform.

Soler Maxime - One of the best experts on this subject based on the ideXlab platform.

  • Large data reduction and structure comparison with topological data analysis (Réduction et comparaison de structures d'intérêt dans des jeux de données massifs par analyse topologique)
    HAL CCSD, 2019
    Co-Authors: Soler Maxime
    Abstract:

    In this thesis, we propose several methods based on topological data analysis to address the growing difficulty of analyzing scientific data. For scalar data defined on geometrical domains, extracting meaningful knowledge from static data, then from time-varying data, then from ensembles of time-varying data proves increasingly challenging. Our approaches to the reduction and analysis of such data rest on the idea of defining structures of interest in scalar fields as topological features. To address data volume growth, we first propose a new lossy compression scheme that offers strong topological guarantees, allowing topological features to be preserved throughout compression; the approach yields high compression factors in practice, and extensions provide additional control over the geometrical error. We then target time-varying data with a new method for tracking topological features over time, based on topological metrics. We extend these metrics to overcome robustness and performance limitations, and propose an efficient way to compute them, gaining orders-of-magnitude speedups over state-of-the-art approaches. Finally, we apply and adapt our methods to ensemble data from reservoir simulation, modeling viscous fingering in porous media: we show how to capture viscous fingers with topological features, adapt topological metrics to quantify discrepancies between simulation runs and a ground truth, evaluate the proposed metrics with feedback from experts, and implement an in-situ ranking framework for rating the fidelity of simulation runs.
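To make "structures of interest as topological features" concrete, here is a minimal sketch (our own illustration, not the thesis algorithm) that computes the 0-dimensional sublevel-set persistence pairs of a 1D scalar field with a union-find sweep. A compressor with topological guarantees, in the spirit of the abstract, may only perturb values by less than the persistence of the pairs it promises to preserve:

```python
import numpy as np

def persistence_pairs_1d(values):
    """0-dimensional sublevel-set persistence of a 1D scalar field.
    Vertices are swept by increasing value; when two components merge,
    the one born later dies (elder rule)."""
    n = len(values)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for idx in np.argsort(values, kind="stable"):
        parent[idx], birth[idx] = idx, values[idx]
        for nb in (idx - 1, idx + 1):       # merge with already-swept neighbours
            if 0 <= nb < n and nb in parent:
                a, b = find(idx), find(nb)
                if a == b:
                    continue
                if birth[a] > birth[b]:
                    a, b = b, a             # a is now the elder component
                if birth[b] < values[idx]:  # skip zero-persistence pairs
                    pairs.append((float(birth[b]), float(values[idx])))
                parent[b] = a
    pairs.append((float(np.min(values)), float("inf")))  # global minimum never dies
    return pairs

f = np.array([0.0, 3.0, 1.0, 4.0, 1.5, 5.0])
print(persistence_pairs_1d(f))  # -> [(1.0, 3.0), (1.5, 4.0), (0.0, inf)]
```

Pairs whose persistence (death minus birth) falls below the compression error bound correspond to noise that may be simplified away; features above the bound survive, which is the kind of guarantee the abstract describes.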

Wenjun Zeng - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Lossy Compression for Pixel-Value Encrypted Images
    2012 Data Compression Conference, 2012
    Co-Authors: Xiangui Kang, Xianyu Xu, Anjie Peng, Wenjun Zeng
    Abstract:

    Compression of encrypted data has drawn much attention in recent years because of security concerns in service-oriented environments such as cloud computing. We propose a scalable lossy compression scheme for images whose pixel values are encrypted with a standard stream cipher. The encrypted data are compressed simply by transmitting a uniformly subsampled portion of the encrypted data, together with some bit-planes of another uniformly subsampled portion. At the receiver side, a decoder performs content-adaptive interpolation based on the decrypted partial information; the received bit-plane information serves as side information that reflects image edges, making the reconstruction more precise. The more bit-planes are transmitted, the higher the quality of the decompressed image. Experimental results show that the proposed scheme performs much better than the existing lossy compression scheme for pixel-value encrypted images, and comparably to state-of-the-art lossy compression for pixel-permutation-based encrypted images. In addition, the proposed scheme needs no computationally intensive iteration and no additional public orthogonal matrix at the decoder side, and it works well for both smooth and texture-rich images.
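As a rough illustration of this pipeline, the sketch below models pixel-value encryption as a bitwise stream cipher (XOR with a keystream), keeps one uniformly subsampled half of the encrypted pixels plus a few most significant bit-planes of the other half, and reconstructs the discarded low bits by neighbour interpolation. The function names and the plain two-neighbour predictor are our own stand-ins, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def compress_encrypted(enc_img, kept_planes=3):
    """Keep even columns of the encrypted image verbatim, plus only the
    `kept_planes` most significant bit-planes of the odd columns."""
    msb_mask = np.uint8(256 - (1 << (8 - kept_planes)))
    return enc_img[:, 0::2], enc_img[:, 1::2] & msb_mask, msb_mask

def reconstruct(full_part, msb_part, msb_mask, keystream):
    """Decrypt both transmitted parts, then predict the discarded low
    bits of the odd columns from their decrypted even neighbours."""
    dec_full = full_part ^ keystream[:, 0::2]
    dec_msb = msb_part ^ (keystream[:, 1::2] & msb_mask)   # XOR acts per bit
    left = dec_full
    right = np.roll(dec_full, -1, axis=1)                  # wraps at the border
    pred = ((left.astype(np.uint16) + right) // 2).astype(np.uint8)
    recon_odd = dec_msb | (pred & ~msb_mask)  # side info fixes the MSBs
    out = np.empty((dec_full.shape[0], 2 * dec_full.shape[1]), np.uint8)
    out[:, 0::2], out[:, 1::2] = dec_full, recon_odd
    return out

# demo on a synthetic horizontally smooth image
img = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))
key = rng.integers(0, 256, img.shape, dtype=np.uint8)      # keystream
full, side, mask = compress_encrypted(img ^ key)
rec = reconstruct(full, side, mask, key)
print("mean abs error:", np.abs(rec.astype(int) - img.astype(int)).mean())
```

Transmitting more bit-planes (a larger `kept_planes`) shrinks the reconstruction error at the cost of rate, which is the scalability knob the abstract describes.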

Xiangui Kang - One of the best experts on this subject based on the ideXlab platform.

  • Performing scalable lossy compression on pixel encrypted images
    Eurasip Journal on Image and Video Processing, 2013
    Co-Authors: Xiangui Kang, Anjie Peng, Xianyu Xu
    Abstract:

    Compression of encrypted data has drawn much attention in recent years because of security concerns in service-oriented environments such as cloud computing. We propose a scalable lossy compression scheme for images whose pixel values are encrypted with a standard stream cipher. The encrypted data are compressed simply by transmitting a uniformly subsampled portion of the encrypted data and some bit-planes of another uniformly subsampled portion. At the receiver side, a decoder performs content-adaptive interpolation based on the decrypted partial information, where the received bit-plane information serves as side information reflecting image edges, making the reconstruction more precise. The more bit-planes are transmitted, the higher the quality of the decompressed image. Experimental results show that the proposed scheme performs much better than the existing lossy compression scheme for pixel-value encrypted images and comparably to state-of-the-art lossy compression for pixel-permutation-based encrypted images. In addition, the proposed scheme needs no computationally intensive iteration and no additional public orthogonal matrix at the decoder side, and it works well for both smooth and texture-rich images.
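The journal version emphasizes that the interpolation is content-adaptive. A minimal, hypothetical version of such a predictor (our simplification, not the paper's exact rule) interpolates along the direction of smaller local variation, so edges are interpolated along rather than across:

```python
import numpy as np

def adaptive_predict(img, i, j):
    """Predict pixel (i, j) from its four neighbours, interpolating along
    the direction of smaller variation (assumes an interior pixel)."""
    up, down = int(img[i - 1, j]), int(img[i + 1, j])
    left, right = int(img[i, j - 1]), int(img[i, j + 1])
    if abs(up - down) <= abs(left - right):   # vertical direction is smoother
        return (up + down) // 2
    return (left + right) // 2
```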

Bin Yang - One of the best experts on this subject based on the ideXlab platform.

  • A new lossy compression scheme for encrypted gray-scale images
    2014 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2014
    Co-Authors: Ran Hu, Xiaolong Li, Bin Yang
    Abstract:

    Compression of encrypted data has attracted considerable research interest in recent years owing to distributed processing and cloud computing. In this work, we propose a novel lossy compression scheme for encrypted gray-scale images. The original image is first divided into non-overlapping blocks, then encrypted by modulo-256 addition and block permutation. In the compression phase, spatial correlation and quantization are exploited to reduce the amount of transmitted data. At the decoder side, context-adaptive interpolation with an image-dependent threshold makes the image reconstruction precise. Experimental results show that the proposed scheme achieves better performance than previous work.
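For reference, the encryption step the abstract describes (modulo-256 addition followed by a permutation of non-overlapping blocks) can be sketched as follows; the 8x8 block size and the helper names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def encrypt(img, key_img, perm, b=8):
    """Modulo-256 addition of a pseudo-random image, then a permutation
    of the non-overlapping b x b blocks (hypothetical sketch)."""
    enc = ((img.astype(np.uint16) + key_img) % 256).astype(np.uint8)
    h, w = enc.shape
    blocks = enc.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b, b)
    blocks = blocks[perm]                     # shuffle the block order
    return (blocks.reshape(h // b, w // b, b, b)
                  .swapaxes(1, 2).reshape(h, w))

img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
key_img = rng.integers(0, 256, img.shape, dtype=np.uint16)
perm = rng.permutation((64 // 8) ** 2)
enc = encrypt(img, key_img, perm)
```

Decryption inverts the permutation and subtracts the key image modulo 256; the compressor operates on `enc` without knowing the key, which is what makes compression of encrypted data nontrivial.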

Tsachy Weissman - One of the best experts on this subject based on the ideXlab platform.

  • Rate-distortion in near-linear time
    2008 IEEE International Symposium on Information Theory, 2008
    Co-Authors: Ankit Gupta, Sergio Verdu, Tsachy Weissman
    Abstract:

    We present two results on the computational complexity of lossy compression. The first shows that for a memoryless source P_X with rate-distortion function R(D), the rate-distortion pair (R(D) + γ, D + ε) can be achieved with constant decoding time per symbol and encoding time per symbol proportional to C₁(γ) · ε^(−C₂(γ)). The second establishes that for any given R, there exists a universal lossy compression scheme with O(n g(n)) encoding complexity and O(n) decoding complexity that asymptotically achieves the point (R, D(R)) for any ergodic source with distortion-rate function D(·), where g(n) is an arbitrary non-decreasing unbounded function. A computationally feasible implementation of the first scheme outperforms many of the best previously proposed schemes for binary sources at blocklengths on the order of 1000.
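For context, the rate-distortion function R(D) referenced here has a closed form in the binary case the authors benchmark against: for a Bernoulli(p) source under Hamming distortion (a standard information-theory fact, not a result of this paper),

```latex
R(D) =
\begin{cases}
h_2(p) - h_2(D), & 0 \le D < \min(p,\, 1-p), \\
0, & \text{otherwise},
\end{cases}
\qquad
h_2(x) = -x \log_2 x - (1-x)\log_2(1-x).
```

The first result above says this curve can be approached within (γ, ε) at a per-symbol encoding cost that grows only polynomially in 1/ε for fixed γ.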