Virtual Machine Technology

The Experts below are selected from a list of 171 Experts worldwide, ranked by the ideXlab platform.

Rajkumar Buyya - One of the best experts on this subject based on the ideXlab platform.

  • SLA-based Virtual Machine management for heterogeneous workloads in a cloud datacenter
    Journal of Network and Computer Applications, 2014
    Co-Authors: Saurabh Kumar Garg, Srinivasa K. Gopalaiyengar, Adel Nadjaran Toosi, Rajkumar Buyya
    Abstract:

    Efficient provisioning of resources is a challenging problem in cloud computing environments due to their dynamic nature and the need to support heterogeneous applications. Even though VM (Virtual Machine) Technology allows several workloads to run concurrently on a shared infrastructure, it still does not guarantee application performance. Thus, cloud datacenter providers currently either do not offer any performance guarantee or prefer static VM allocation over dynamic allocation, which leads to inefficient utilization of resources. Moreover, workloads may have different QoS (Quality of Service) requirements due to the execution of different types of applications, such as HPC and web, which makes resource provisioning much harder. Earlier work concentrates either on a single type of SLA (Service Level Agreement) or on the resource usage patterns of specific applications, such as web applications, leading to inefficient utilization of datacenter resources. In this paper, we tackle the resource allocation problem within a datacenter that runs different types of application workloads, particularly non-interactive and transactional applications. We propose an admission control and scheduling mechanism that not only maximizes resource utilization and profit but also ensures that the QoS requirements of users are met as specified in SLAs. In our experimental study, we found that awareness of the different types of SLAs, the applicable penalties, and the mix of workloads is important for better resource provisioning and utilization of datacenters. The proposed mechanism provides substantial improvement over static server consolidation and reduces SLA violations.
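
    As an illustration of the kind of decision such an admission controller has to make, the sketch below weighs expected revenue against the expected SLA penalty before placing a request on a shared host. It is only a minimal sketch under assumed names and a simplified penalty model, not the mechanism evaluated in the paper.

        # Illustrative sketch of SLA-aware admission control; the Request/Host
        # classes and the penalty model are assumptions, not the paper's code.
        from dataclasses import dataclass

        @dataclass
        class Request:
            cpu_demand: float        # fraction of a host's CPU the workload needs
            revenue: float           # payment received if the SLA is met
            penalty: float           # payment refunded if the SLA is violated
            violation_risk: float    # estimated probability of an SLA violation

        @dataclass
        class Host:
            capacity: float = 1.0
            allocated: float = 0.0

            def can_fit(self, req: Request) -> bool:
                return self.allocated + req.cpu_demand <= self.capacity

        def admit(req: Request, hosts: list) -> bool:
            """Admit a request only if some host can fit it and the expected
            profit (revenue minus expected SLA penalty) is positive."""
            expected_profit = req.revenue - req.violation_risk * req.penalty
            if expected_profit <= 0:
                return False
            for host in hosts:
                if host.can_fit(req):
                    host.allocated += req.cpu_demand
                    return True
            return False

        hosts = [Host(), Host()]
        print(admit(Request(cpu_demand=0.4, revenue=10.0, penalty=6.0, violation_risk=0.2), hosts))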

  • Cloud Resource Provisioning to Extend the Capacity of Local Resources in the Presence of Failures
    2012 IEEE 14th International Conference on High Performance Computing and Communication & 2012 IEEE 9th International Conference on Embedded Software and Systems, 2012
    Co-Authors: Bahman Javadi, Parimala Thulasiraman, Rajkumar Buyya
    Abstract:

    In this paper, we investigate Cloud computing resource provisioning to extend the computing capacity of local clusters in the presence of failures. We consider three steps in resource provisioning: resource brokering, dispatch sequences, and scheduling. The proposed brokering strategy is based on the stochastic analysis of routing in distributed parallel queues and takes into account the response time of the Cloud provider and the local cluster, while considering the computing cost of both sides. Moreover, we propose dispatching with probabilistic and deterministic sequences to redirect requests to the resource providers. We also incorporate checkpointing in some well-known scheduling algorithms to provide a fault-tolerant environment. We propose two cost-aware and failure-aware provisioning policies that can be utilized by an organization that operates a cluster managed by Virtual Machine Technology and seeks to use resources from a public Cloud provider. Simulation results demonstrate that the proposed policies improve the response time of users' requests by a factor of 4.10 under moderate load, at a limited cost on a public Cloud.
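
    The dispatch-sequence idea above can be pictured with a small sketch: route each incoming request to the local cluster or to the public Cloud with probabilities derived from estimated response times and costs. The weighting below is an illustrative assumption, not the stochastic brokering strategy analysed in the paper.

        # Illustrative probabilistic dispatcher; the weighting of response time
        # and cost is an assumption, not the brokering strategy from the paper.
        import random

        def dispatch_probability(local_resp, cloud_resp, local_cost, cloud_cost, cost_weight=0.5):
            """Return the probability of sending a request to the local cluster,
            favouring the side with the lower weighted response time and cost."""
            local_score = (1 - cost_weight) * local_resp + cost_weight * local_cost
            cloud_score = (1 - cost_weight) * cloud_resp + cost_weight * cloud_cost
            # Lower score means more attractive, so it gets the larger share.
            return cloud_score / (local_score + cloud_score)

        def dispatch(requests, p_local):
            routed = {"local": [], "cloud": []}
            for req in requests:
                target = "local" if random.random() < p_local else "cloud"
                routed[target].append(req)
            return routed

        p_local = dispatch_probability(local_resp=8.0, cloud_resp=5.0, local_cost=0.0, cloud_cost=2.0)
        print(dispatch(range(10), p_local))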

  • A cost-benefit analysis of using cloud computing to extend the capacity of clusters
    Cluster Computing, 2010
    Co-Authors: Marcos Dias De Assuncao, Alexandre Di Costanzo, Rajkumar Buyya
    Abstract:

    In this paper, we investigate the benefits that organisations can reap by using "Cloud Computing" providers to augment the computing capacity of their local infrastructure. We evaluate the cost of seven scheduling strategies used by an organisation that operates a cluster managed by Virtual Machine Technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for Virtual Machines are submitted to the organisation's cluster, but additional Virtual Machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users' requests. Naive scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate seven scheduling strategies that consider the use of resources from the "Cloud", to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests' response times.
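
    The basic cloud-bursting behaviour described above can be sketched as follows: serve a Virtual Machine request locally when the cluster has capacity, otherwise instantiate VMs at the IaaS provider and account for the extra cost. The classes and the flat hourly price are illustrative assumptions rather than any of the seven evaluated strategies.

        # Simplified cloud-bursting scheduler; the classes and pricing are
        # illustrative assumptions rather than the evaluated strategies.
        class Cluster:
            def __init__(self, slots):
                self.free_slots = slots

            def try_allocate(self, vms_needed):
                if self.free_slots >= vms_needed:
                    self.free_slots -= vms_needed
                    return True
                return False

        class IaaSProvider:
            def __init__(self, hourly_price):
                self.hourly_price = hourly_price
                self.spend = 0.0

            def allocate(self, vms_needed, hours):
                self.spend += vms_needed * hours * self.hourly_price

        def schedule(request, cluster, provider):
            """Serve the request locally if possible, otherwise burst to the Cloud."""
            if cluster.try_allocate(request["vms"]):
                return "local"
            provider.allocate(request["vms"], request["hours"])
            return "cloud"

        cluster, provider = Cluster(slots=4), IaaSProvider(hourly_price=0.10)
        for req in [{"vms": 3, "hours": 2}, {"vms": 2, "hours": 1}]:
            print(schedule(req, cluster, provider), f"spend so far: ${provider.spend:.2f}")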

  • Evaluating the cost-benefit of using cloud computing to extend the capacity of clusters
    High Performance Distributed Computing, 2009
    Co-Authors: Marcos Dias De Assuncao, Alexandre Di Costanzo, Rajkumar Buyya
    Abstract:

    In this paper, we investigate the benefits that organisations can reap by using "Cloud Computing" providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by Virtual Machine Technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for Virtual Machines are submitted to the organisation's cluster, but additional Virtual Machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users' requests. Naive scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate six scheduling strategies that consider the use of resources from the "Cloud", to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests' response times.
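
    Since the strategies trade response time against money spent on the Cloud, one simple way to compare them is the response-time improvement obtained per unit of cost. The metric below is an illustrative assumption; the paper's own evaluation may define it differently.

        # Illustrative cost-benefit metric for comparing scheduling strategies;
        # the exact metric used in the paper may differ.
        def performance_cost(baseline_resp, strategy_resp, cloud_spend):
            """Response-time improvement (in time units) obtained per unit of
            money spent on Cloud resources; higher is better."""
            improvement = baseline_resp - strategy_resp
            return improvement / cloud_spend if cloud_spend > 0 else float("inf")

        strategies = {
            "naive":        {"resp": 90.0, "spend": 40.0},
            "conservative": {"resp": 110.0, "spend": 12.0},
        }
        baseline_resp = 150.0  # response time with the local cluster only
        for name, s in strategies.items():
            print(name, round(performance_cost(baseline_resp, s["resp"], s["spend"]), 2))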

Michael Schmitt - One of the best experts on this subject based on the ideXlab platform.

  • Tele-Lab IT Security: an architecture for interactive lessons for security education
    Technical Symposium on Computer Science Education, 2004
    Co-Authors: Ji Hu, Christoph Meinel, Michael Schmitt
    Abstract:

    IT security education is an important activity in computer science education. The broad range of existing security threats makes it necessary to teach students the principles of IT security as well as to let them gain hands-on experience. In order to enable students to practice IT security anytime and anywhere, a novel tutoring system is being developed at the University of Trier, Germany, which allows them to become familiar with security technologies and tools via the Internet. Based on Virtual Machine Technology, users are able to perform exercises on a Linux system instead of in a restricted simulation environment. This paper describes the user interface of Tele-Lab IT Security, its system architecture, and its functional components.
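
    The abstract only outlines the architecture, so the following is a generic sketch (not Tele-Lab's implementation) of how a disposable Linux exercise VM could be handed out per session using QEMU's snapshot mode, which discards all changes when the VM shuts down. The image path, memory size, and forwarded SSH port are assumptions.

        # Generic sketch of spawning a disposable Linux exercise VM with QEMU;
        # the image path, memory size, and SSH port are illustrative assumptions.
        import subprocess

        def start_exercise_vm(image="/srv/telelab/exercise-linux.qcow2", ssh_port=2222):
            """Boot a throwaway VM: -snapshot discards all disk writes on exit,
            so every student session starts from the same clean image."""
            cmd = [
                "qemu-system-x86_64",
                "-m", "512",                    # modest RAM for a shell-based exercise
                "-snapshot",                    # do not persist changes to the base image
                "-drive", f"file={image},format=qcow2",
                "-nic", f"user,hostfwd=tcp::{ssh_port}-:22",  # expose the guest's SSH port
                "-nographic",
            ]
            return subprocess.Popen(cmd)

        if __name__ == "__main__":
            vm = start_exercise_vm()
            print(f"exercise VM started (pid {vm.pid}); connect with: ssh -p 2222 student@localhost")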

Sophia Antipolis - One of the best experts on this subject based on the ideXlab platform.

  • Effective and Efficient Malware Detection at the End Host
    USENIX Security Symposium, 2009
    Co-Authors: Clemens Kolbitsch, Xiaoyong Zhou, Paolo Milani Comparetti, Christopher Kruegel, Engin Kirda, Xiaofeng Wang, Sophia Antipolis
    Abstract:

    Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam e-mails and denial-of-service attacks have malware as their underlying cause. That is, computers that are compromised with malware are often networked together to form botnets, and many attacks are launched using these malicious, attacker-controlled networks. With the increasing significance of malware in Internet attacks, much research has concentrated on developing techniques to collect, study, and mitigate malicious code. Without doubt, it is important to collect and study malware found on the Internet. However, it is even more important to develop mitigation and detection techniques based on the insights gained from the analysis work. Unfortunately, current host-based detection approaches (i.e., anti-virus software) suffer from ineffective detection models. These models concentrate on the features of a specific malware instance, and are often easily evadable by obfuscation or polymorphism. Also, detectors that check for the presence of a sequence of system calls exhibited by a malware instance are often evadable by system call reordering. In order to address the shortcomings of ineffective models, several dynamic detection approaches have been proposed that aim to identify the behavior exhibited by a malware family. Although promising, these approaches are unfortunately too slow to be used as real-time detectors on the end host, and they often require cumbersome Virtual Machine Technology. In this paper, we propose a novel malware detection approach that is both effective and efficient, and thus, can be used to replace or complement traditional anti-virus software at the end host. Our approach first analyzes a malware program in a controlled environment to build a model that characterizes its behavior. Such models describe the information flows between the system calls essential to the malware's mission, and therefore, cannot be easily evaded by simple obfuscation or polymorphic techniques. Then, we extract the program slices responsible for such information flows. For detection, we execute these slices to match our models against the runtime behavior of an unknown program. Our experiments show that our approach can effectively detect running malicious code on an end user's host with a small overhead.
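
    To make the matching step concrete, the sketch below checks whether a trace of observed system calls satisfies the data flows required by a behavior model. It is a heavily simplified illustration under assumed names; the actual approach builds much richer models and replays extracted program slices.

        # Heavily simplified sketch of behavior-based detection: a model lists
        # required data flows between system calls, and a trace matches only if
        # every required flow is observed.  All names are illustrative assumptions.

        # A behavior model: data-flow dependencies between syscalls, e.g. data
        # read from the network is later written to disk, then executed.
        MODEL = [("recv", "write"), ("write", "exec")]

        def matches(model, trace):
            """trace: list of (syscall, tainted_output_id, tainted_input_id) tuples.
            A flow (src, dst) is satisfied if some output of syscall src is later
            consumed as an input of syscall dst."""
            for src, dst in model:
                satisfied = False
                outputs = set()
                for syscall, out_id, in_id in trace:
                    if syscall == src and out_id is not None:
                        outputs.add(out_id)
                    if syscall == dst and in_id in outputs:
                        satisfied = True
                        break
                if not satisfied:
                    return False
            return True

        trace = [("recv", "buf1", None), ("write", "file1", "buf1"), ("exec", None, "file1")]
        print(matches(MODEL, trace))  # True: the trace exhibits the modeled behavior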

Greg Hutchins - One of the best experts on this subject based on the ideXlab platform.

  • Fast transparent migration for Virtual Machines
    USENIX Annual Technical Conference, 2005
    Co-Authors: Michael Nelson, Greg Hutchins
    Abstract:

    This paper describes the design and implementation of a system that uses Virtual Machine Technology [1] to provide fast, transparent application migration. This is the first system that can migrate unmodified applications on unmodified mainstream Intel x86-based operating systems, including Microsoft Windows, Linux, Novell NetWare, and others. Neither the application nor any clients communicating with the application can tell that the application has been migrated. Experimental measurements show that, for a variety of workloads, application downtime caused by migration is less than a second.
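
    Live migration systems of this kind commonly rely on iterative pre-copy: memory pages are copied while the Virtual Machine keeps running, pages dirtied in the meantime are re-sent in further rounds, and only a small residual set is transferred during a brief stop-and-copy pause. The toy loop below illustrates that general idea only; it is not the system described in the paper.

        # Toy illustration of iterative pre-copy live migration; the dirty-page
        # simulation and thresholds are assumptions, not the paper's implementation.
        import random

        def precopy_migrate(total_pages=10_000, dirty_rate=0.02, stop_threshold=50, max_rounds=10):
            """Repeatedly copy dirtied pages until few enough remain to send
            during a short stop-and-copy phase (the only downtime)."""
            to_copy = total_pages                 # round 0: copy everything while the VM runs
            for round_no in range(max_rounds):
                copied = to_copy
                # Pages dirtied while this round was being copied must be re-sent.
                to_copy = int(copied * dirty_rate * random.uniform(0.5, 1.5))
                print(f"round {round_no}: copied {copied} pages, {to_copy} dirtied")
                if to_copy <= stop_threshold:
                    break
            print(f"stop-and-copy: pause VM, send final {to_copy} pages, resume on target")

        precopy_migrate()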

Guanfei Guo - One of the best experts on this subject based on the ideXlab platform.

  • Fast service migration method based on Virtual Machine Technology for MEC
    IEEE Internet of Things Journal, 2019
    Co-Authors: Xianyu Meng, Guanfei Guo
    Abstract:

    In the era of the Internet of Things (IoT), mobile edge computing (MEC) has become an effective solution to meet the energy efficiency and delay requirements of IoT applications. In MEC systems, tasks can be offloaded from lightweight mobile devices to edge nodes that are nearer to the users. To improve the user experience, we combine remote loading and redirection to accelerate service migration. By tracing historic access patterns, the proposed method first generates a loading request list that locates the core codes in the image file of service applications needed for booting. The core codes are then prefetched and cached automatically. Furthermore, to avoid potential UI lag caused by incomplete service migration, edge nodes continuously load the remaining codes in the image file. Once the image file is completely migrated, the file is reconstructed, and the running Virtual Machine (VM) then switches data access to the merged image file. Experiments show that this method can noticeably reduce the loading time of a VM-based application.
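
    The two-phase loading described above can be sketched as: prefetch only the core image blocks needed to boot the service, start it, and stream the remaining blocks in the background before switching to the fully reconstructed image. The block identifiers and fetch function below are illustrative assumptions, not the paper's implementation.

        # Sketch of the two-phase image loading idea; block identifiers and the
        # fetch function are illustrative assumptions, not the paper's code.
        import threading

        def fetch_block(block_id):
            # Placeholder for pulling one image block from the source edge node.
            return f"<data of block {block_id}>"

        def migrate_service(core_blocks, all_blocks):
            image = {}

            # Phase 1: prefetch only the blocks needed to boot, then start the service.
            for b in core_blocks:
                image[b] = fetch_block(b)
            print(f"booting VM from {len(image)}/{len(all_blocks)} blocks")

            # Phase 2: stream the remaining blocks in the background.
            def load_rest():
                for b in all_blocks:
                    if b not in image:
                        image[b] = fetch_block(b)
                print("image fully reconstructed; switching VM to the merged image")

            t = threading.Thread(target=load_rest)
            t.start()
            t.join()

        migrate_service(core_blocks=[0, 1, 7], all_blocks=list(range(16)))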