virtual machine image

The experts below are selected from a list of 4,272 experts worldwide, ranked by the ideXlab platform.

Tao Huang - One of the best experts on this subject based on the ideXlab platform.

  • Clustering-based acceleration for virtual machine image deduplication in the cloud environment
    Journal of Systems and Software, 2016
    Co-Authors: Wenbo Zhang, Zhenyu Zhang, Tao Wang, Tao Huang
    Abstract:

    Highlights: use a clustering-based classification to reduce the fingerprint search space; take the image content layout into consideration during image deduplication; propose periodical triggering and small-group merging to facilitate VM deduplication; evaluate the effectiveness, efficiency and robustness of the proposed method. More and more virtual machine (VM) images are continuously created in datacenters. Duplicated data segments may exist across such VM images, which wastes storage resources, so VM image deduplication is a common daily activity in datacenters; our previous work, Crab, is such a product and runs regularly in our datacenter. Because VM images are large and numerous, it is inefficient and impractical to load massive numbers of VM image fingerprints into memory for fast comparison to recognize duplicated segments. To address this issue, this paper proposes a clustering-based acceleration method. It uses an improved k-means clustering to find images that are likely to contain duplicated segments; with this candidate-selection phase, only a limited set of candidate VM image fingerprints is loaded into memory. We empirically evaluate the effectiveness, robustness, and complexity of the proposed system. Experimental results show that, compared with existing deduplication methods, it significantly reduces performance interference to the hosting virtual machines with an acceptable increase in disk space usage.
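
    The candidate-selection idea can be illustrated with a minimal, hedged sketch (this is not the paper's Crab implementation; the chunking scheme, feature summary, and function names below are assumptions): each image is summarized by a small histogram over its chunk fingerprints, the summaries are clustered with plain k-means, and only the fingerprints of images in the target's cluster are loaded for duplicate-segment comparison.

    import hashlib
    import random

    def chunk_fingerprints(image_bytes, chunk_size=4096):
        """Fixed-size chunking; return the set of SHA-1 fingerprints of one image."""
        return {
            hashlib.sha1(image_bytes[i:i + chunk_size]).hexdigest()
            for i in range(0, len(image_bytes), chunk_size)
        }

    def feature_vector(fingerprints, dims=64):
        """Summarize an image as a normalized histogram over fingerprint prefixes,
        so clustering never needs the full fingerprint sets in memory."""
        vec = [0] * dims
        for fp in fingerprints:
            vec[int(fp[:4], 16) % dims] += 1
        total = sum(vec) or 1
        return [v / total for v in vec]

    def kmeans(vectors, k, iters=20):
        """Plain k-means over the per-image feature vectors."""
        centers = random.sample(vectors, k)
        assign = [0] * len(vectors)
        for _ in range(iters):
            for i, v in enumerate(vectors):
                assign[i] = min(range(k),
                                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            for c in range(k):
                members = [vectors[i] for i, a in enumerate(assign) if a == c]
                if members:
                    centers[c] = [sum(col) / len(members) for col in zip(*members)]
        return assign

    def candidate_images(images, target_idx, k=4):
        """Return indices of images in the same cluster as the target; only their
        fingerprints would be loaded into memory for the actual deduplication pass."""
        vectors = [feature_vector(chunk_fingerprints(img)) for img in images]
        assign = kmeans(vectors, min(k, len(vectors)))
        return [i for i, c in enumerate(assign) if c == assign[target_idx] and i != target_idx]

    Only the per-image summaries stay resident; the full fingerprint sets are loaded lazily, and only for the selected candidates.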

  • A lightweight virtual machine image deduplication backup approach in cloud environment
    Computer Software and Applications Conference, 2014
    Co-Authors: Wenbo Zhang, Jun Wei, Tao Huang
    Abstract:

    As most clouds are based on virtualization technology, more and more virtual machine images are created within data centers. Driven by disaster-recovery needs, the storage space used for backups can easily sprawl to the TB or PB level as the number of images grows. Unfortunately, different images share a large number of identical data segments, and these duplicated segments waste storage resources. Although much prior work focuses on deduplication storage and achieves good results in removing duplicate copies, it is not well suited to virtual machine image deduplication in a cloud environment, because the heavy resource usage of deduplication operations can cause serious performance interference to the hosting virtual machines. This paper proposes a local deduplication method that speeds up virtual machine image deduplication and reduces its operation time. The method is based on an improved k-means clustering algorithm, which classifies the metadata of backup images to reduce the search space of index lookups and improve lookup performance. Experiments show that our approach is robust and effective: it significantly reduces performance interference to the hosting virtual machines with an acceptable increase in disk space usage.
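
    A cluster-partitioned fingerprint index makes the index-lookup saving concrete. The sketch below is illustrative only (the ClusteredIndex and backup_image names and the store object are assumptions, not the paper's code): each backup image is classified into a cluster, and lookups scan only that cluster's partition instead of a global index.

    from collections import defaultdict

    class ClusteredIndex:
        """Fingerprint index split by cluster id so a lookup only scans the
        partition of the cluster the backup image was classified into."""

        def __init__(self):
            self.partitions = defaultdict(dict)   # cluster_id -> {fingerprint: location}

        def insert(self, cluster_id, fingerprint, location):
            self.partitions[cluster_id][fingerprint] = location

        def lookup(self, cluster_id, fingerprint):
            # Search space is one partition instead of the whole index.
            return self.partitions[cluster_id].get(fingerprint)

    def backup_image(index, cluster_id, chunks, store):
        """Store only chunks whose fingerprints are not already in the partition;
        return the recipe (list of locations) that reconstructs the image."""
        recipe = []
        for fp, data in chunks:
            loc = index.lookup(cluster_id, fp)
            if loc is None:
                loc = store.put(data)              # hypothetical backing object store
                index.insert(cluster_id, fp, loc)
            recipe.append(loc)
        return recipe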

  • VM image update notification mechanism based on pub/sub paradigm in cloud
    Asia-Pacific Symposium on Internetware, 2013
    Co-Authors: Wenbo Zhang, Jun Wei, Tao Huang
    Abstract:

    A virtual machine image encapsulates the whole software stack, including the operating system, middleware, user applications and other software products. A failure in any layer of the software stack is treated as an image failure. However, a virtual machine image with latent failures can be converted into a template and spread widely through template replication; this paper refers to this phenomenon as "image failure propagation". Patching is a widely adopted way to resolve software failures, but virtual machine image patches are difficult to deliver to end users in a cloud computing environment because of its openness and multi-tenancy. This paper describes the image failure propagation model for the first time and proposes a notification mechanism based on the pub/sub computing paradigm to address the patch delivery problem.
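
    The pub/sub idea can be sketched in a few lines (illustrative only; the broker API, topic layout, and patch fields are assumptions, not the paper's mechanism): the provider publishes a patch notice on the topic of a template image, and every tenant whose image derives from that template is subscribed and gets notified.

    from collections import defaultdict

    class PatchBroker:
        """Publish/subscribe broker keyed by template image id, so a patch
        announced for a template reaches every tenant whose image derives from it."""

        def __init__(self):
            self.subscribers = defaultdict(list)   # template_id -> [callback]

        def subscribe(self, template_id, callback):
            self.subscribers[template_id].append(callback)

        def publish_patch(self, template_id, patch_info):
            for notify in self.subscribers[template_id]:
                notify(template_id, patch_info)

    # Usage: a tenant whose image was cloned from "web-template-v3" (hypothetical id)
    # learns about a middleware fix as soon as it is published.
    broker = PatchBroker()
    broker.subscribe("web-template-v3",
                     lambda tid, patch: print(f"apply {patch['id']} to images derived from {tid}"))
    broker.publish_patch("web-template-v3", {"id": "middleware-fix-001", "layer": "middleware"})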

Le Nhan Tam - One of the best experts on this subject based on the ideXlab platform.

  • Model-Driven Software Engineering for Virtual Machine Image Provisioning in Cloud Computing
    HAL CCSD, 2013
    Co-Authors: Le Nhan Tam
    Abstract:

    The Cloud Computing Infrastructure-as-a-Service (IaaS) layer provides a service for on-demand deployment of virtual machine images (VMIs). This service gives cloud users a flexible platform to develop, deploy, and test their applications. Deploying a VMI typically involves booting the image and installing and configuring its software packages. In the traditional approach, when a cloud user requests a new platform, the cloud provider selects an appropriate template image, containing pre-installed software packages, to clone and deploy on the cloud nodes. If the template does not fit the requirements, it is customized, or a new image is created from scratch to fit the request. In the context of cloud service management, the traditional approach faces difficult issues: handling the complexity of interdependencies between software packages, and scaling and maintaining the deployed image at runtime. Cloud providers would like to automate this process to improve the performance of VMI provisioning and to give cloud users more flexibility in selecting or creating appropriate images, while maximizing the benefits for providers in terms of time, resources and operational cost. This thesis proposes a Model-Driven approach to manage the interdependencies of software packages, to model and automate the VMI deployment process, and to support VMI reconfiguration at runtime. We particularly address the following challenges: (1) modeling the variability of virtual machine image configurations; (2) reducing the amount of data transferred through the network; (3) optimizing the power consumption of virtual machines; (4) ease of use for cloud users; (5) automating the deployment of VMIs; (6) supporting the scaling and reconfiguration of VMIs at runtime; (7) handling complex deployment topologies of VMIs. In our approach, we use Model-Driven Engineering techniques to model abstract representations of VMI configurations and of the deployment and reconfiguration processes of virtual machine images. We treat VMIs as a product line and use feature models to represent their configurations, and we define the deployment and reconfiguration processes and their factors (e.g. virtual machine images, software packages, platform, deployment topology) as models. The Model-Driven approach relies on these high-level abstractions of VMI configuration and deployment to make the management of virtual images in the provisioning process more flexible and easier than in traditional approaches.
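
    The product-line view of VMIs can be made concrete with a tiny, hedged feature-model sketch (the feature names, constraints, and package mapping below are invented for illustration and are not the thesis's metamodel): a user's feature selection is validated against the model and then turned into the package list a provisioning engine would install.

    MANDATORY = {"os"}
    OPTIONAL = {"web_server", "database", "monitoring"}
    REQUIRES = {"monitoring": {"web_server"}}          # cross-tree constraint
    PACKAGES = {
        "os": ["ubuntu-minimal"],
        "web_server": ["apache2"],
        "database": ["mysql-server"],
        "monitoring": ["nagios-agent"],
    }

    def validate(selection):
        """Check mandatory features and requires-constraints of the feature model."""
        if not MANDATORY <= selection:
            raise ValueError(f"missing mandatory features: {MANDATORY - selection}")
        unknown = selection - MANDATORY - OPTIONAL
        if unknown:
            raise ValueError(f"unknown features: {unknown}")
        for feature, deps in REQUIRES.items():
            if feature in selection and not deps <= selection:
                raise ValueError(f"{feature} requires {deps - selection}")

    def deployment_plan(selection):
        """Derive the package list for the base image plus post-boot installation,
        i.e. the model-level description a provisioning engine would execute."""
        validate(selection)
        return [pkg for feature in sorted(selection) for pkg in PACKAGES[feature]]

    # Example: a user asks for a monitored web platform.
    print(deployment_plan({"os", "web_server", "monitoring"}))
    # -> ['nagios-agent', 'ubuntu-minimal', 'apache2']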

  • Model-driven software engineering for the provisioning of virtual machine images in cloud computing (L'ingénierie de logiciel dirigée par les modèles pour l'approvisionnement des images de machines virtuelles dans le cloud computing)
    2013
    Co-Authors: Le Nhan Tam, Jezequel Jean-marc
    Abstract:

    The Cloud Computing Infrastructure-as-a-Service (IaaS) layer provides a service for on-demand deployment of virtual machine images (VMIs). This service gives cloud users a flexible platform to develop, deploy, and test their applications. Deploying a VMI typically involves booting the image and installing and configuring its software packages. In the traditional approach, when a cloud user requests a new platform, the cloud provider selects an appropriate template image, containing pre-installed software packages, to clone and deploy on the cloud nodes. If the template does not fit the requirements, it is customized, or a new image is created from scratch to fit the request. In the context of cloud service management, the traditional approach faces difficult issues: handling the complexity of interdependencies between software packages, and scaling and maintaining the deployed image at runtime. Cloud providers would like to automate this process to improve the performance of VMI provisioning and to give cloud users more flexibility in selecting or creating appropriate images, while maximizing the benefits for providers in terms of time, resources and operational cost. This thesis proposes a Model-Driven approach to manage the interdependencies of software packages, to model and automate the VMI deployment process, and to support VMI reconfiguration at runtime. We particularly address the following challenges: (1) modeling the variability of virtual machine image configurations; (2) reducing the amount of data transferred through the network; (3) optimizing the power consumption of virtual machines; (4) ease of use for cloud users; (5) automating the deployment of VMIs; (6) supporting the scaling and reconfiguration of VMIs at runtime; (7) handling complex deployment topologies of VMIs. In our approach, we use Model-Driven Engineering techniques to model abstract representations of VMI configurations and of the deployment and reconfiguration processes of virtual machine images. We treat VMIs as a product line and use feature models to represent their configurations, and we define the deployment and reconfiguration processes and their factors (e.g. virtual machine images, software packages, platform, deployment topology) as models. The Model-Driven approach relies on these high-level abstractions of VMI configuration and deployment to make the management of virtual images in the provisioning process more flexible and easier than in traditional approaches.

Wenbo Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Clustering-based acceleration for virtual machine image deduplication in the cloud environment
    Journal of Systems and Software, 2016
    Co-Authors: Wenbo Zhang, Zhenyu Zhang, Tao Wang, Tao Huang
    Abstract:

    Highlights: use a clustering-based classification to reduce the fingerprint search space; take the image content layout into consideration during image deduplication; propose periodical triggering and small-group merging to facilitate VM deduplication; evaluate the effectiveness, efficiency and robustness of the proposed method. More and more virtual machine (VM) images are continuously created in datacenters. Duplicated data segments may exist across such VM images, which wastes storage resources, so VM image deduplication is a common daily activity in datacenters; our previous work, Crab, is such a product and runs regularly in our datacenter. Because VM images are large and numerous, it is inefficient and impractical to load massive numbers of VM image fingerprints into memory for fast comparison to recognize duplicated segments. To address this issue, this paper proposes a clustering-based acceleration method. It uses an improved k-means clustering to find images that are likely to contain duplicated segments; with this candidate-selection phase, only a limited set of candidate VM image fingerprints is loaded into memory. We empirically evaluate the effectiveness, robustness, and complexity of the proposed system. Experimental results show that, compared with existing deduplication methods, it significantly reduces performance interference to the hosting virtual machines with an acceptable increase in disk space usage.

  • A lightweight virtual machine image deduplication backup approach in cloud environment
    Computer Software and Applications Conference, 2014
    Co-Authors: Wenbo Zhang, Jun Wei, Tao Huang
    Abstract:

    As most clouds are based on virtualization technology, more and more virtual machine images are created within data centers. Driven by disaster-recovery needs, the storage space used for backups can easily sprawl to the TB or PB level as the number of images grows. Unfortunately, different images share a large number of identical data segments, and these duplicated segments waste storage resources. Although much prior work focuses on deduplication storage and achieves good results in removing duplicate copies, it is not well suited to virtual machine image deduplication in a cloud environment, because the heavy resource usage of deduplication operations can cause serious performance interference to the hosting virtual machines. This paper proposes a local deduplication method that speeds up virtual machine image deduplication and reduces its operation time. The method is based on an improved k-means clustering algorithm, which classifies the metadata of backup images to reduce the search space of index lookups and improve lookup performance. Experiments show that our approach is robust and effective: it significantly reduces performance interference to the hosting virtual machines with an acceptable increase in disk space usage.

  • VM image update notification mechanism based on pub/sub paradigm in cloud
    Asia-Pacific Symposium on Internetware, 2013
    Co-Authors: Wenbo Zhang, Jun Wei, Tao Huang
    Abstract:

    A virtual machine image encapsulates the whole software stack, including the operating system, middleware, user applications and other software products. A failure in any layer of the software stack is treated as an image failure. However, a virtual machine image with latent failures can be converted into a template and spread widely through template replication; this paper refers to this phenomenon as "image failure propagation". Patching is a widely adopted way to resolve software failures, but virtual machine image patches are difficult to deliver to end users in a cloud computing environment because of its openness and multi-tenancy. This paper describes the image failure propagation model for the first time and proposes a notification mechanism based on the pub/sub computing paradigm to address the patch delivery problem.

Che Renhai - One of the best experts on this subject based on the ideXlab platform.

  • Hardware/software codesign for performance and lifetime enhancement in NAND-flash-based embedded storage systems
    The Hong Kong Polytechnic University, 2016
    Co-Authors: Che Renhai
    Abstract:

    Embedded systems (e.g. smartphones) have become an integral part of people's daily lives. In the last several years, most efforts to improve embedded systems have focused on enhancing network performance and CPU speed. Meanwhile, the performance of NAND-flash-based storage in embedded systems has stagnated and has become one of their major performance bottlenecks. Even worse, the performance and lifetime of NAND flash memory are substantially degraded with the advent of the multi-level-cell and triple-level-cell flash memory used to increase capacity. This thesis addresses these issues from several angles, including the integration of emerging hardware and cross-layer software management, to optimize performance and lifetime. First, we employ self-healing NAND flash memory to improve lifetime and performance. Researchers have recently discovered that heating can make worn-out NAND flash cells reusable and greatly extend their lifetime. However, the heating process consumes a substantial amount of power, and existing NAND flash management techniques require fundamental changes. In particular, all existing wear-leveling techniques are based on the principle of evenly distributing writes and erases; for self-healing NAND flash, this may cause cells to wear out within a short period of time, and frequently healing them may quickly drain the battery of mobile devices, which we call the concentrated heating problem. We propose a novel wear-leveling scheme called DHeating (Dispersed Heating) to address this problem. In DHeating, rather than evenly distributing writes and erases over time, write and erase operations are directed at a small number of flash memory cells at a time, so that these cells wear out and are healed much earlier than other cells; this avoids the rapid energy depletion caused by concentrated heating. In addition, the heating process takes several seconds and becomes a new performance bottleneck. To address this, we propose a lazy heating repair scheme, which eases the long heating delay by postponing heating operations and using system idle time for repair. Furthermore, flash memory reliability degrades as cells approach their expected wear-out point. We propose an early heating strategy that trades some of the extended lifetime provided by self-healing for reliability, by starting the healing process earlier than the expected wear-out time. We evaluate the scheme on an embedded platform; the experimental results show that it effectively prolongs the interval between consecutive heating operations, alleviates the long heating delay, and enhances reliability for self-healing flash memory. Second, we jointly optimize NAND flash memory's lifetime and performance by integrating NVMs. Novel non-volatile memories (NVMs), such as PCM (Phase Change Memory) and STT-RAM (Spin-Transfer Torque Random Access Memory), provide fast read/write operations. We propose a unified NVM/flash architecture to improve I/O performance, together with a transparent management scheme, vFlash (virtualized Flash). Within vFlash, inter-app and intra-app techniques optimize application performance by exploiting the historical locality and I/O access patterns of applications. Since vFlash sits at the bottom of the I/O stack, application-level information is lost there, so we also propose a cross-layer technique to pass application information from the application layer down to the vFlash layer. The scheme is evaluated on an Android platform, and the experimental results show that it effectively improves the I/O performance of mobile devices. Third, we study performance and lifetime enhancement in the mobile virtualization environment. Mobile virtualization introduces extra layers in the software stack, which degrades performance; in particular, each I/O operation has to pass through several software layers to reach the NAND-flash-based storage system. This thesis targets I/O optimization for mobile virtualization, since I/O is one of the major performance bottlenecks of mobile devices. Among all I/O operations, a large percentage updates metadata, and frequent metadata updates not only degrade overall I/O performance but also severely reduce flash memory lifetime. We propose a novel I/O optimization technique that identifies the metadata of a guest file system that is stored in a VM (virtual machine) image file and frequently updated, and places this metadata in a small additional NVM, which is faster and more durable, to greatly improve flash memory performance and lifetime. To the best of our knowledge, this is the first work to distinguish file system metadata from regular data in a guest-OS VM image file under mobile virtualization. The proposed scheme is evaluated on a real embedded hardware platform; the experimental results show that the proposed techniques improve write performance by 45.21% on mobile devices with virtualization. (Ph.D. thesis, Department of Computing, The Hong Kong Polytechnic University, 2016.)
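
    The dispersed-heating idea can be illustrated with a small simulation sketch (the single-active-block policy and the numbers here are simplifications and assumptions, not the thesis's exact DHeating algorithm): erases are concentrated on one block at a time, so heating events are spread out over the device's lifetime instead of all arriving when the whole device wears out together.

    class DispersedHeatingAllocator:
        """Direct erases at one active block at a time so blocks wear out and are
        healed one after another, instead of the whole device reaching the wear
        limit together (the 'concentrated heating' problem of even wear-leveling)."""

        def __init__(self, num_blocks, erase_limit):
            self.erase_counts = [0] * num_blocks
            self.erase_limit = erase_limit
            self.active = 0
            self.total_erases = 0
            self.heal_events = []          # erase sequence numbers at which heating ran

        def erase(self):
            self.total_erases += 1
            block = self.active
            self.erase_counts[block] += 1
            if self.erase_counts[block] >= self.erase_limit:
                # Heating repairs the worn block; move to the next block so the
                # next heating event is a full erase_limit of erases away.
                self.heal_events.append(self.total_erases)
                self.erase_counts[block] = 0
                self.active = (self.active + 1) % len(self.erase_counts)
            return block

    # With a 100-erase limit, heating events land 100 erases apart rather than
    # clustering at end of life.
    alloc = DispersedHeatingAllocator(num_blocks=64, erase_limit=100)
    for _ in range(1000):
        alloc.erase()
    print(alloc.heal_events)               # [100, 200, ..., 1000]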

Chunqiang Tang - One of the best experts on this subject based on the ideXlab platform.

  • FVD: A High-Performance Virtual Machine Image Format for Cloud. This is the longer version of the USENIX ’11 paper with the same title, available at https://researcher.ibm.com/researcher/view_project.php?id=1852
    2013
    Co-Authors: Chunqiang Tang
    Abstract:

    Fast Virtual Disk (FVD) is a new virtual machine (VM) image format and a corresponding block device driver developed for QEMU, which performs I/O emulation for multiple hypervisors, including KVM, Xen-HVM, and VirtualBox. FVD is a holistic solution for both cloud and non-cloud environments. Its feature set includes flexible configurability, storage thin provisioning without a host file system, compact images, internal snapshots, encryption, copy-on-write, copy-on-read, and adaptive prefetching. The last two features enable instant VM creation and instant VM migration, even if the VM image is stored on direct-attached storage. As its name indicates, FVD is fast: experiments show that the throughput of FVD is 249% higher than that of QCOW2 when using the PostMark benchmark to create files.
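
    The copy-on-write / copy-on-read combination that enables instant VM creation can be sketched as follows (a minimal illustration of the idea, assuming an in-memory block map and a base_read callable; this is not FVD's on-disk format or its QEMU driver code): untouched blocks are fetched from the base image on first read and cached locally, while writes always go to the local image and never touch the base.

    class CowCorDisk:
        """Local sparse image backed by a (possibly remote) base image.
        Reads of untouched blocks fetch from the base and are cached locally
        (copy-on-read); writes always go to the local image (copy-on-write)."""

        def __init__(self, base_read, block_size=4096):
            self.base_read = base_read          # callable: block_index -> bytes
            self.block_size = block_size
            self.local = {}                     # block_index -> locally present data

        def read(self, index):
            if index not in self.local:
                data = self.base_read(index)    # fetch over the network / from shared storage
                self.local[index] = data        # copy-on-read: localize for next time
            return self.local[index]

        def write(self, index, data):
            self.local[index] = data            # copy-on-write: the base stays untouched

    # Usage: a VM can start before the whole image is copied; blocks are pulled
    # on demand, and a background prefetcher can call read() on cold blocks.
    base = lambda i: bytes(4096)                # stand-in for the remote template image
    disk = CowCorDisk(base)
    disk.write(0, b"boot sector".ljust(4096, b"\0"))
    _ = disk.read(7)                            # first access fetches and localizes block 7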

  • FVD: A High-Performance Virtual Machine Image Format for Cloud
    USENIX Annual Technical Conference, 2011
    Co-Authors: Chunqiang Tang
    Abstract:

    Fast Virtual Disk (FVD) is a new virtual machine (VM) image format and a corresponding block device driver developed for QEMU, which performs I/O emulation for multiple hypervisors, including KVM, Xen-HVM, and VirtualBox. FVD is a holistic solution for both cloud and non-cloud environments. Its feature set includes flexible configurability, storage thin provisioning without a host file system, compact images, internal snapshots, encryption, copy-on-write, copy-on-read, and adaptive prefetching. The last two features enable instant VM creation and instant VM migration, even if the VM image is stored on direct-attached storage. As its name indicates, FVD is fast: experiments show that the throughput of FVD is 249% higher than that of QCOW2 when using the PostMark benchmark to create files.
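
    The adaptive-prefetching side can be sketched in a similarly hedged way (the back-off policy, threshold, and callables below are assumptions, not FVD's implementation): a background loop localizes blocks via copy-on-read but pauses whenever the guest has issued I/O recently, so prefetching does not compete with foreground traffic.

    import time

    def adaptive_prefetch(read_block, total_blocks, last_guest_io, idle_gap=0.01):
        """Localize every block in the background via copy-on-read, but pause while
        the guest issued I/O within the last idle_gap seconds."""
        for index in range(total_blocks):
            while time.monotonic() - last_guest_io() < idle_gap:
                time.sleep(idle_gap)       # guest is busy: yield the disk and network
            read_block(index)              # triggers copy-on-read if the block is absent

    # Usage with stand-ins (hypothetical): prefetch 1024 blocks; last_guest_io
    # would normally report the timestamp of the guest's most recent request.
    blocks = {}
    read_block = lambda i: blocks.setdefault(i, bytes(4096))
    adaptive_prefetch(read_block, total_blocks=1024, last_guest_io=lambda: 0.0)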