Transactional Data

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 13,236 Experts worldwide, ranked by the ideXlab platform

Thomas Neumann - One of the best experts on this subject based on the ideXlab platform.

  • ScyPer: Elastic OLAP Throughput on Transactional Data
    Proceedings of the Second Workshop on Data Analytics in the Cloud, 2013
    Co-Authors: Tobias Mühlbauer, Wolf Rödiger, Alfons Kemper, Angelika Reiser, Thomas Neumann
    Abstract:

    Ever increasing main memory sizes and the advent of multi-core parallel processing have fostered the development of in-core Databases. Even the Transactional Data of large enterprises can be retained in-memory on a single server. Modern in-core Databases like our HyPer system achieve best-of-breed OLTP throughput that is sufficient for the lion's share of applications. Remaining server resources are used for OLAP query processing on the latest Transactional Data, i.e., real-time business analytics. While OLTP performance of a single server is sufficient, an increasing demand for OLAP throughput can only be satisfied economically by a scale-out. In this work we present ScyPer, a scale-out version of our HyPer main-memory Database system that scales horizontally on shared-nothing hardware. With ScyPer we aim at (i) sustaining the superior OLTP throughput of a single HyPer server, and (ii) providing elastic OLAP throughput by provisioning additional servers on-demand, e.g., in the Cloud.
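
The architecture described above (all OLTP on one primary server, OLAP fanned out to secondaries that can be provisioned on demand) can be sketched as a toy coordinator. This is a hypothetical illustration, not ScyPer's actual implementation: real ScyPer ships the redo log so secondaries stay current, whereas this sketch only copies a snapshot at provisioning time.

```python
import itertools

class ElasticCluster:
    """Toy coordinator: all OLTP goes to a single primary; OLAP queries are
    fanned out round-robin over an elastically growing set of secondaries."""

    def __init__(self):
        self.primary = {"name": "primary", "data": {}}
        self.secondaries = []   # provisioned on demand, e.g. in the cloud
        self._rr = None         # round-robin iterator over secondaries

    def provision_secondary(self):
        # In ScyPer this would start a new server and keep it current via the
        # redo log; here we merely copy the primary's state once.
        node = {"name": f"secondary-{len(self.secondaries)}",
                "data": dict(self.primary["data"])}
        self.secondaries.append(node)
        self._rr = itertools.cycle(self.secondaries)

    def oltp_write(self, key, value):
        # Single-node OLTP: the primary retains its full transaction throughput.
        self.primary["data"][key] = value

    def olap_query(self, fn):
        # Route the analytical query to the next secondary if any exist,
        # otherwise fall back to spare capacity on the primary.
        node = next(self._rr) if self.secondaries else self.primary
        return node["name"], fn(node["data"])

cluster = ElasticCluster()
for i in range(5):
    cluster.oltp_write(f"order:{i}", i * 10)
cluster.provision_secondary()
cluster.provision_secondary()
name, total = cluster.olap_query(lambda d: sum(d.values()))  # runs on a secondary
```

Adding OLAP capacity is then just another `provision_secondary()` call, which is the elasticity property the abstract emphasizes.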

  • Transaction Processing in the Hybrid OLTP&OLAP Main-Memory Database System HyPer
    IEEE Data Engineering Bulletin, 2013
    Co-Authors: Alfons Kemper, Jan Finis, Florian Funke, Henrik Mühe, Tobias Mühlbauer, Thomas Neumann, Viktor Leis, Wolf Rödiger
    Abstract:

    Two emerging hardware trends have re-initiated the development of in-core Database systems: ever increasing main-memory capacities and vast multi-core parallel processing power. Main-memory capacities of several TB make it possible to retain all Transactional Data of even the largest applications in-memory on one (or a few) servers. The vast computational power in combination with low Data management overhead yields unprecedented transaction performance, which makes it possible to push transaction processing (away from application servers) into the Database server and still "leaves room" for additional query processing directly on the Transactional Data. Thereby, the often postulated goal of real-time business intelligence, where decision makers have access to the latest version of the Transactional state, becomes feasible. In this paper we survey the HyPerScript transaction programming language, the main-memory indexing technique ART, which is decisive for high transaction processing performance, and HyPer's transaction management, which supports heterogeneous workloads consisting of short pre-canned transactions, OLAP-style queries, and long interactive transactions.
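
The core idea behind ART (the Adaptive Radix Tree mentioned above) is that inner nodes adapt their physical layout to their fanout: sparsely populated nodes use compact key/child arrays, densely populated ones a direct 256-slot array. The sketch below is a grossly simplified illustration of that single idea, assuming only two node layouts; real ART has four node sizes (4/16/48/256), path compression, and lazy expansion, none of which are modeled here.

```python
class AdaptiveNode:
    """Simplified adaptive node: sparse (key-byte, child) pairs while small,
    upgraded to a dense 256-slot array once it fills up (like Node4 -> Node256)."""

    SMALL_CAP = 4  # capacity of the sparse layout, as in ART's Node4

    def __init__(self):
        self.keys = []        # sparse layout: parallel arrays
        self.children = []
        self.slots = None     # dense layout: 256-slot array after growth
        self.value = None     # payload for a key ending at this node

    def child(self, byte):
        if self.slots is not None:
            return self.slots[byte]
        return self.children[self.keys.index(byte)] if byte in self.keys else None

    def put_child(self, byte, node):
        if self.slots is None and len(self.keys) >= self.SMALL_CAP and byte not in self.keys:
            # grow: migrate the sparse pairs into a dense 256-slot array
            self.slots = [None] * 256
            for k, c in zip(self.keys, self.children):
                self.slots[k] = c
        if self.slots is not None:
            self.slots[byte] = node
        elif byte in self.keys:
            self.children[self.keys.index(byte)] = node
        else:
            self.keys.append(byte)
            self.children.append(node)

def insert(root, key: bytes, value):
    node = root
    for b in key:               # one tree level per key byte (no path compression)
        nxt = node.child(b)
        if nxt is None:
            nxt = AdaptiveNode()
            node.put_child(b, nxt)
        node = nxt
    node.value = value

def lookup(root, key: bytes):
    node = root
    for b in key:
        node = node.child(b)
        if node is None:
            return None
    return node.value

root = AdaptiveNode()
for i, word in enumerate([b"art", b"arts", b"index", b"in", b"hyper", b"base", b"column"]):
    insert(root, word, i)   # the 5th distinct first byte triggers node growth
```

The adaptivity is what keeps lookups cache-friendly at high fanout while avoiding the space waste of always-dense nodes, which is why the abstract calls the index "decisive" for transaction performance.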

  • Compacting Transactional Data in Hybrid OLTP & OLAP Databases
    arXiv: Databases, 2012
    Co-Authors: Florian Funke, Alfons Kemper, Thomas Neumann
    Abstract:

    Growing main memory sizes have facilitated Database management systems that keep the entire Database in main memory. The drastic performance improvements that came along with these in-memory systems have made it possible to reunite the two areas of online transaction processing (OLTP) and online analytical processing (OLAP): an emerging class of hybrid OLTP and OLAP Database systems makes it possible to process analytical queries directly on the Transactional Data. By offering arbitrarily current snapshots of the Transactional Data for OLAP, these systems enable real-time business intelligence. Despite memory sizes of several terabytes in a single commodity server, RAM is still a precious resource: since free memory can be used for intermediate results in query processing, the amount of memory determines query performance to a large extent. Consequently, we propose the compaction of memory-resident Databases. Compaction consists of two tasks: first, separating the mutable working set from the immutable "frozen" Data; second, compressing the immutable Data and optimizing it for efficient, memory-consumption-friendly snapshotting. Our approach reorganizes and compresses Transactional Data online and yet hardly affects the mission-critical OLTP throughput. This is achieved by unburdening the OLTP threads from all additional processing and performing these tasks asynchronously.
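
The two compaction tasks named in the abstract (splitting the mutable working set from "frozen" data, then compressing the frozen part) can be sketched as follows. This is a hypothetical toy under invented assumptions: coldness is decided by a simple write-age threshold, compaction runs synchronously, and `zlib`/`pickle` stand in for the paper's specialized compression and snapshot layout; in HyPer all of this runs asynchronously off the OLTP threads.

```python
import pickle
import zlib

class CompactingTable:
    """Toy hot/cold compaction: tuples not written for `freeze_after` ticks
    are treated as immutable, moved out of the hot set, and kept compressed."""

    def __init__(self, freeze_after=2):
        self.hot = {}            # key -> (value, last_write_tick): mutable working set
        self.frozen_index = {}   # immutable "frozen" partition (uncompressed view)
        self.frozen_blob = None  # the same partition, compressed for cheap snapshots
        self.tick = 0
        self.freeze_after = freeze_after

    def write(self, key, value):
        # Rare update to cold data "unfreezes" the tuple back into the hot set.
        self.frozen_index.pop(key, None)
        self.hot[key] = (value, self.tick)

    def advance_and_compact(self):
        self.tick += 1
        cold = {k: v for k, (v, t) in self.hot.items()
                if self.tick - t >= self.freeze_after}
        for k in cold:
            del self.hot[k]
        self.frozen_index.update(cold)
        # Compress the whole frozen partition; a snapshot only needs this blob.
        self.frozen_blob = zlib.compress(pickle.dumps(self.frozen_index))

    def read(self, key):
        if key in self.hot:
            return self.hot[key][0]
        return self.frozen_index.get(key)

t = CompactingTable()
t.write("old", 1)
t.advance_and_compact()
t.advance_and_compact()   # "old" has aged out: frozen and compressed
t.write("new", 2)         # OLTP keeps touching only the small hot set
```

The point of the split is visible even in the toy: OLTP writes only ever touch the small `hot` dictionary, while snapshots can copy the already-compressed blob.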

  • Compacting Transactional Data in hybrid OLTP&OLAP Databases
    Proceedings of the VLDB Endowment, 2012
    Co-Authors: Florian Funke, Alfons Kemper, Thomas Neumann
    Abstract:

    Growing main memory sizes have facilitated Database management systems that keep the entire Database in main memory. The drastic performance improvements that came along with these in-memory systems have made it possible to reunite the two areas of online transaction processing (OLTP) and online analytical processing (OLAP): an emerging class of hybrid OLTP and OLAP Database systems makes it possible to process analytical queries directly on the Transactional Data. By offering arbitrarily current snapshots of the Transactional Data for OLAP, these systems enable real-time business intelligence. Despite memory sizes of several terabytes in a single commodity server, RAM is still a precious resource: since free memory can be used for intermediate results in query processing, the amount of memory determines query performance to a large extent. Consequently, we propose the compaction of memory-resident Databases. Compaction consists of two tasks: first, separating the mutable working set from the immutable "frozen" Data; second, compressing the immutable Data and optimizing it for efficient, memory-consumption-friendly snapshotting. Our approach reorganizes and compresses Transactional Data online and yet hardly affects the mission-critical OLTP throughput. This is achieved by unburdening the OLTP threads from all additional processing and performing these tasks asynchronously.

Francesco Quaglia - One of the best experts on this subject based on the ideXlab platform.

  • Transactional Auto Scaler: Elastic Scaling of Replicated In-Memory Transactional Data Grids
    ACM Transactions on Autonomous and Adaptive Systems, 2014
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this article, we introduce TAS (Transactional Auto Scaler), a system for automating the elastic scaling of replicated in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from online self-optimization of in-production applications to the automatic generation of QoS/cost-driven elastic scaling policies, as well as to support for what-if analysis on the scalability of Transactional applications. In this article, we present the key innovation at the core of TAS, namely, a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine learning. By exploiting these two classically competing approaches in a synergic fashion, TAS achieves the best of the two worlds, namely, high extrapolation power and good accuracy, even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS’s performance forecasting methodology via an extensive experimental study based on a fully fledged prototype implementation integrated with a popular open-source in-memory Transactional Data grid (Red Hat’s Infinispan) and industry-standard benchmarks generating a breadth of heterogeneous workloads.
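
The "joint usage of analytical modeling and machine learning" described above can be illustrated with a gray-box sketch: a queueing formula provides the extrapolating base prediction, and a learned model corrects its residual error on observed workloads. Everything below is an invented stand-in, not TAS's actual models: an Erlang-C (M/M/k) response-time formula plays the analytical part, and plain least squares on residuals plays the machine-learning part.

```python
from math import factorial

def mmk_response_time(lam, mu, k):
    """White-box half: Erlang-C (M/M/k) mean response time for arrival rate
    lam, per-server service rate mu, and k servers."""
    rho = lam / (k * mu)
    assert rho < 1, "unstable system"
    a = lam / mu
    s = sum(a ** n / factorial(n) for n in range(k))
    tail = a ** k / (factorial(k) * (1 - rho))
    p_wait = tail / (s + tail)              # probability a request must queue
    return p_wait / (k * mu - lam) + 1 / mu  # mean wait + mean service time

def fit_residual_model(samples):
    """Black-box half (stand-in for TAS's ML): least squares on
    (load, measured - predicted) residual pairs."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in samples) / sum((x - mx) ** 2 for x in xs)
    return lambda x: my + slope * (x - mx)

# Invented "measurements": the analytical model is off by a load-linear bias.
measured = [(lam, mmk_response_time(lam, 10.0, 4) + 0.002 * lam)
            for lam in (5, 10, 15, 20, 25)]
residuals = [(lam, rt - mmk_response_time(lam, 10.0, 4)) for lam, rt in measured]
correct = fit_residual_model(residuals)

def gray_box(lam):
    # Analytical extrapolation plus learned correction.
    return mmk_response_time(lam, 10.0, 4) + correct(lam)
```

The division of labor matches the abstract's claim: the analytical term extrapolates to unseen scales, while the learned term absorbs whatever the formula's simplifying assumptions miss.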

  • Auto-Tuning of Cloud-Based In-Memory Transactional Data Grids via Machine Learning
    IEEE International Conference on Cloud Computing Technology and Science, 2012
    Co-Authors: Pierangelo Di Sanzo, Bruno Ciciani, Diego Rughetti, Francesco Quaglia
    Abstract:

    In-memory Transactional Data grids have proven extremely well suited for cloud-based environments, given that they fit the elasticity requirements imposed by the pay-as-you-go cost model. In particular, the non-reliance on stable-storage devices simplifies dynamic resizing of these platforms, which typically only involves setting up (or shutting down) some Data-cache instance. On the other hand, determining the right number of cache servers to deploy, and the degree of replication of slices of Data, in order to optimize reliability/availability and performance tradeoffs, is far from being a trivial task. As an example, scaling up/down the size of the underlying infrastructure might give rise to scarcely predictable secondary effects on the side of the synchronization protocol adopted to guarantee Data consistency while supporting Transactional accesses. In this paper we investigate the usage of machine learning approaches with the aim of providing a means for automatically tuning the Data grid configuration, which is achieved via dynamic selection of both the right number of cache servers and the right degree of replication of the Data objects. The final target is to determine configurations that are able to guarantee specific throughput or latency values (such as those established by some SLA), under some specific workload profile/intensity, while minimizing at the same time the cost of the cloud infrastructure. Our proposal has been integrated within an operating environment relying on the well-known Infinispan Data grid, namely a mainstream open-source product by the Red Hat JBoss division. Some experimental Data are also provided supporting the effectiveness of our proposal, which have been achieved by deploying the Data platform on top of Amazon EC2.
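
The tuning loop described above (pick the number of cache servers and the replication degree that meet an SLA at minimum cost) can be sketched as a search over configurations scored by a learned performance model. Everything here is hypothetical: `predicted_throughput` is a hard-coded stub standing in for the trained ML model, and the price is an invented per-instance rate.

```python
def predicted_throughput(servers, replication):
    """Stand-in for the learned model (in the paper, a model trained on
    runtime measurements). The stub merely encodes 'more servers help,
    higher replication adds coordination overhead'."""
    return servers * 1000.0 / (1 + 0.3 * (replication - 1))

def hourly_cost(servers):
    return servers * 0.50   # hypothetical cloud price per instance-hour

def tune(sla_tps, min_repl=2, max_servers=10):
    """Cheapest (servers, replication) meeting the SLA; at equal cost,
    prefer higher replication for availability. None if unattainable."""
    best = None
    for servers in range(min_repl, max_servers + 1):
        for repl in range(min_repl, servers + 1):  # replication <= cluster size
            if predicted_throughput(servers, repl) >= sla_tps:
                cand = (hourly_cost(servers), -repl, servers, repl)
                if best is None or cand < best:
                    best = cand
    if best is None:
        return None
    _, _, servers, repl = best
    return servers, repl
```

With these stub numbers, `tune(3000)` settles on four servers with replication degree two; the real system would re-run this selection whenever the workload profile shifts.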

  • Transactional Auto Scaler: Elastic Scaling of In-Memory Transactional Data Grids
    International Conference on Autonomic Computing, 2012
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this paper we introduce TAS (Transactional Auto Scaler), a system for automating elastic-scaling of in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from on-line self-optimization of in-production applications to automatic generation of QoS/cost driven elastic scaling policies, and support for what-if analysis on the scalability of Transactional applications. The key innovation at the core of TAS is a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine-learning. By exploiting these two, classically competing, methodologies in a synergic fashion, TAS achieves the best of the two worlds, namely high extrapolation power and good accuracy even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS via an extensive experimental study based on a fully fledged prototype implementation, integrated with a popular open-source Transactional in-memory Data store (Red Hat's Infinispan), and industry-standard benchmarks generating a breadth of heterogeneous workloads.

  • Automated Workload Characterization in Cloud-Based Transactional Data Grids
    International Parallel and Distributed Processing Symposium, 2012
    Co-Authors: Bruno Ciciani, Diego Didona, Sebastiano Peluso, Francesco Quaglia, Pierangelo Di Sanzo, Roberto Palmieri, Paolo Romano
    Abstract:

    Cloud computing represents a cost-effective paradigm to deploy a wide class of large-scale distributed applications, for which the pay-per-use model combined with automatic resource provisioning promises to reduce the cost of dependability and scalability. However, a key challenge to be addressed to materialize the advantages promised by Cloud computing is the design of effective auto-scaling and self-tuning mechanisms capable of ensuring pre-determined QoS levels at minimum cost in the face of changing workload conditions. This is one of the key goals pursued by Cloud-TM, a recent EU project that is developing a novel, self-optimizing Transactional Data platform for the cloud. In this paper we present the key design choices underlying the development of Cloud-TM's Workload Analyzer (WA), a crucial component of the Cloud-TM platform that is in charge of three key functionalities: aggregating, filtering and correlating the streams of statistical Data gathered from the various nodes of the Cloud-TM platform; building detailed workload profiles of applications deployed on the Cloud-TM platform, characterizing their present and future demands in terms of both logical (i.e., Data) and physical (e.g., hardware-related) resources; and triggering alerts in the presence of violations (or risks of future violations) of pre-determined SLAs.
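
The Workload Analyzer's three duties (aggregate per-node statistics streams, filter them, and raise SLA alerts) can be sketched in a few lines. The stream format, the glitch filter, and the 80% "at risk" threshold below are all invented for illustration; they are not part of the Cloud-TM design.

```python
from statistics import mean

def analyze(node_streams, sla_latency_ms):
    """Toy workload analyzer: merge per-node samples, drop obviously bogus
    readings, build a coarse profile, and flag (risk of) SLA violations."""
    samples = [s for stream in node_streams for s in stream]
    # Filtering: discard readings outside a plausible range (invented bounds).
    clean = [s for s in samples if 0 < s["latency_ms"] < 10_000]
    # Aggregation: a minimal workload profile across all platform nodes.
    profile = {
        "avg_latency_ms": mean(s["latency_ms"] for s in clean),
        "total_tps": sum(s["tps"] for s in clean),
    }
    # Alerting: violation, or risk of one (within 80% of the SLA bound).
    alerts = []
    if profile["avg_latency_ms"] > sla_latency_ms:
        alerts.append("SLA violated")
    elif profile["avg_latency_ms"] > 0.8 * sla_latency_ms:
        alerts.append("SLA at risk")
    return profile, alerts

node_a = [{"latency_ms": 12, "tps": 500}, {"latency_ms": 18, "tps": 480}]
node_b = [{"latency_ms": 15, "tps": 510}, {"latency_ms": -1, "tps": 0}]  # glitch
profile, alerts = analyze([node_a, node_b], sla_latency_ms=16)
```

The "at risk" branch is what lets an auto-scaler act before an SLA is actually breached, which is the WA's role in the larger platform.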

Diego Didona - One of the best experts on this subject based on the ideXlab platform.

  • Transactional auto scaler elastic scaling of replicated in memory Transactional Data grids
    ACM Transactions on Autonomous and Adaptive Systems, 2014
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this article, we introduce TAS (Transactional Auto Scaler), a system for automating the elastic scaling of replicated in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from online self-optimization of in-production applications to the automatic generation of QoS/cost-driven elastic scaling policies, as well as to support for what-if analysis on the scalability of Transactional applications. In this article, we present the key innovation at the core of TAS, namely, a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine learning. By exploiting these two classically competing approaches in a synergic fashion, TAS achieves the best of the two worlds, namely, high extrapolation power and good accuracy, even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS’s performance forecasting methodology via an extensive experimental study based on a fully fledged prototype implementation integrated with a popular open-source in-memory Transactional Data grid (Red Hat’s Infinispan) and industry-standard benchmarks generating a breadth of heterogeneous workloads.

  • Self-Tuning Transactional Data grids: The Cloud-TM Approach
    Proceedings - IEEE 3rd Symposium on Network Cloud Computing and Applications NCCA 2014, 2014
    Co-Authors: Diego Didona, Paolo Romano
    Abstract:

    In this paper we focus on the problem of self-tuning distributed Transactional cloud Data stores by presenting an overview of the autonomic mechanisms integrated in the Cloud-TM platform, a Transactional cloud Data store developed in the context of a recent European project. Cloud-TM takes a holistic approach to self-tuning and elastic scaling, treating them as strongly intertwined problems with the ultimate goals of i) achieving optimal efficiency at any scale of the platform, and ii) minimizing resource consumption in the presence of varying workloads. From a methodological perspective, this is achieved by relying on the innovative idea of exploiting the diversity of different modelling approaches, including analytical models, machine learning and simulations. By employing these modelling techniques in synergy, the Cloud-TM platform can dynamically optimize the underlying distributed Data store over a number of dimensions, including its scale, the strategy it adopts to distribute and replicate Data among the platform's nodes, as well as its replication protocol. © 2014 IEEE.
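
One simple way to "exploit the diversity of different modelling approaches," as the abstract puts it, is to keep several predictors and trust whichever tracks recent observations best. The sketch below is a hypothetical illustration of that selection step only; the three lambdas are invented stand-ins for an analytical model, a learned regressor, and a simulator.

```python
def pick_model(candidates, validation_set):
    """Return the name of the predictor with the lowest mean absolute error
    on recent (input, observed) pairs."""
    def mae(model):
        return sum(abs(model(x) - y) for x, y in validation_set) / len(validation_set)
    return min(candidates, key=lambda kv: mae(kv[1]))[0]

analytical = lambda x: 2.0 * x          # stand-in: closed-form throughput law
learned    = lambda x: 1.8 * x + 3.0    # stand-in: regressor fit on samples
simulated  = lambda x: 2.1 * x - 1.0    # stand-in: mean of simulation runs

obs = [(5, 12.0), (10, 21.0), (20, 39.0)]   # invented (load, throughput) points
best = pick_model([("analytical", analytical), ("learned", learned),
                   ("simulated", simulated)], obs)
```

Richer combinations are possible (weighted ensembles, per-regime switching), but even this minimal selector captures why model diversity helps: no single modelling technique is best across all workloads and scales.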

  • Transactional Auto Scaler: Elastic Scaling of In-Memory Transactional Data Grids
    International Conference on Autonomic Computing, 2012
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this paper we introduce TAS (Transactional Auto Scaler), a system for automating elastic-scaling of in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from on-line self-optimization of in-production applications to automatic generation of QoS/cost driven elastic scaling policies, and support for what-if analysis on the scalability of Transactional applications. The key innovation at the core of TAS is a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine-learning. By exploiting these two, classically competing, methodologies in a synergic fashion, TAS achieves the best of the two worlds, namely high extrapolation power and good accuracy even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS via an extensive experimental study based on a fully fledged prototype implementation, integrated with a popular open-source Transactional in-memory Data store (Red Hat's Infinispan), and industry-standard benchmarks generating a breadth of heterogeneous workloads.

  • Automated Workload Characterization in Cloud-Based Transactional Data Grids
    International Parallel and Distributed Processing Symposium, 2012
    Co-Authors: Bruno Ciciani, Diego Didona, Sebastiano Peluso, Francesco Quaglia, Pierangelo Di Sanzo, Roberto Palmieri, Paolo Romano
    Abstract:

    Cloud computing represents a cost-effective paradigm to deploy a wide class of large-scale distributed applications, for which the pay-per-use model combined with automatic resource provisioning promises to reduce the cost of dependability and scalability. However, a key challenge to be addressed to materialize the advantages promised by Cloud computing is the design of effective auto-scaling and self-tuning mechanisms capable of ensuring pre-determined QoS levels at minimum cost in the face of changing workload conditions. This is one of the key goals pursued by Cloud-TM, a recent EU project that is developing a novel, self-optimizing Transactional Data platform for the cloud. In this paper we present the key design choices underlying the development of Cloud-TM's Workload Analyzer (WA), a crucial component of the Cloud-TM platform that is in charge of three key functionalities: aggregating, filtering and correlating the streams of statistical Data gathered from the various nodes of the Cloud-TM platform; building detailed workload profiles of applications deployed on the Cloud-TM platform, characterizing their present and future demands in terms of both logical (i.e., Data) and physical (e.g., hardware-related) resources; and triggering alerts in the presence of violations (or risks of future violations) of pre-determined SLAs.

Alfons Kemper - One of the best experts on this subject based on the ideXlab platform.

  • ScyPer: Elastic OLAP Throughput on Transactional Data
    Proceedings of the Second Workshop on Data Analytics in the Cloud, 2013
    Co-Authors: Tobias Mühlbauer, Wolf Rödiger, Alfons Kemper, Angelika Reiser, Thomas Neumann
    Abstract:

    Ever increasing main memory sizes and the advent of multi-core parallel processing have fostered the development of in-core Databases. Even the Transactional Data of large enterprises can be retained in-memory on a single server. Modern in-core Databases like our HyPer system achieve best-of-breed OLTP throughput that is sufficient for the lion's share of applications. Remaining server resources are used for OLAP query processing on the latest Transactional Data, i.e., real-time business analytics. While OLTP performance of a single server is sufficient, an increasing demand for OLAP throughput can only be satisfied economically by a scale-out. In this work we present ScyPer, a scale-out version of our HyPer main-memory Database system that scales horizontally on shared-nothing hardware. With ScyPer we aim at (i) sustaining the superior OLTP throughput of a single HyPer server, and (ii) providing elastic OLAP throughput by provisioning additional servers on-demand, e.g., in the Cloud.

  • Transaction Processing in the Hybrid OLTP&OLAP Main-Memory Database System HyPer
    IEEE Data Engineering Bulletin, 2013
    Co-Authors: Alfons Kemper, Jan Finis, Florian Funke, Henrik Mühe, Tobias Mühlbauer, Thomas Neumann, Viktor Leis, Wolf Rödiger
    Abstract:

    Two emerging hardware trends have re-initiated the development of in-core Database systems: ever increasing main-memory capacities and vast multi-core parallel processing power. Main-memory capacities of several TB make it possible to retain all Transactional Data of even the largest applications in-memory on one (or a few) servers. The vast computational power in combination with low Data management overhead yields unprecedented transaction performance, which makes it possible to push transaction processing (away from application servers) into the Database server and still "leaves room" for additional query processing directly on the Transactional Data. Thereby, the often postulated goal of real-time business intelligence, where decision makers have access to the latest version of the Transactional state, becomes feasible. In this paper we survey the HyPerScript transaction programming language, the main-memory indexing technique ART, which is decisive for high transaction processing performance, and HyPer's transaction management, which supports heterogeneous workloads consisting of short pre-canned transactions, OLAP-style queries, and long interactive transactions.

  • Compacting Transactional Data in Hybrid OLTP & OLAP Databases
    arXiv: Databases, 2012
    Co-Authors: Florian Funke, Alfons Kemper, Thomas Neumann
    Abstract:

    Growing main memory sizes have facilitated Database management systems that keep the entire Database in main memory. The drastic performance improvements that came along with these in-memory systems have made it possible to reunite the two areas of online transaction processing (OLTP) and online analytical processing (OLAP): an emerging class of hybrid OLTP and OLAP Database systems makes it possible to process analytical queries directly on the Transactional Data. By offering arbitrarily current snapshots of the Transactional Data for OLAP, these systems enable real-time business intelligence. Despite memory sizes of several terabytes in a single commodity server, RAM is still a precious resource: since free memory can be used for intermediate results in query processing, the amount of memory determines query performance to a large extent. Consequently, we propose the compaction of memory-resident Databases. Compaction consists of two tasks: first, separating the mutable working set from the immutable "frozen" Data; second, compressing the immutable Data and optimizing it for efficient, memory-consumption-friendly snapshotting. Our approach reorganizes and compresses Transactional Data online and yet hardly affects the mission-critical OLTP throughput. This is achieved by unburdening the OLTP threads from all additional processing and performing these tasks asynchronously.

  • Compacting Transactional Data in hybrid OLTP&OLAP Databases
    Proceedings of the VLDB Endowment, 2012
    Co-Authors: Florian Funke, Alfons Kemper, Thomas Neumann
    Abstract:

    Growing main memory sizes have facilitated Database management systems that keep the entire Database in main memory. The drastic performance improvements that came along with these in-memory systems have made it possible to reunite the two areas of online transaction processing (OLTP) and online analytical processing (OLAP): an emerging class of hybrid OLTP and OLAP Database systems makes it possible to process analytical queries directly on the Transactional Data. By offering arbitrarily current snapshots of the Transactional Data for OLAP, these systems enable real-time business intelligence. Despite memory sizes of several terabytes in a single commodity server, RAM is still a precious resource: since free memory can be used for intermediate results in query processing, the amount of memory determines query performance to a large extent. Consequently, we propose the compaction of memory-resident Databases. Compaction consists of two tasks: first, separating the mutable working set from the immutable "frozen" Data; second, compressing the immutable Data and optimizing it for efficient, memory-consumption-friendly snapshotting. Our approach reorganizes and compresses Transactional Data online and yet hardly affects the mission-critical OLTP throughput. This is achieved by unburdening the OLTP threads from all additional processing and performing these tasks asynchronously.

Paolo Romano - One of the best experts on this subject based on the ideXlab platform.

  • Transactional Auto Scaler: Elastic Scaling of Replicated In-Memory Transactional Data Grids
    ACM Transactions on Autonomous and Adaptive Systems, 2014
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this article, we introduce TAS (Transactional Auto Scaler), a system for automating the elastic scaling of replicated in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from online self-optimization of in-production applications to the automatic generation of QoS/cost-driven elastic scaling policies, as well as to support for what-if analysis on the scalability of Transactional applications. In this article, we present the key innovation at the core of TAS, namely, a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine learning. By exploiting these two classically competing approaches in a synergic fashion, TAS achieves the best of the two worlds, namely, high extrapolation power and good accuracy, even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS’s performance forecasting methodology via an extensive experimental study based on a fully fledged prototype implementation integrated with a popular open-source in-memory Transactional Data grid (Red Hat’s Infinispan) and industry-standard benchmarks generating a breadth of heterogeneous workloads.

  • Self-Tuning Transactional Data grids: The Cloud-TM Approach
    Proceedings - IEEE 3rd Symposium on Network Cloud Computing and Applications NCCA 2014, 2014
    Co-Authors: Diego Didona, Paolo Romano
    Abstract:

    In this paper we focus on the problem of self-tuning distributed Transactional cloud Data stores by presenting an overview of the autonomic mechanisms integrated in the Cloud-TM platform, a Transactional cloud Data store developed in the context of a recent European project. Cloud-TM takes a holistic approach to self-tuning and elastic scaling, treating them as strongly intertwined problems with the ultimate goals of i) achieving optimal efficiency at any scale of the platform, and ii) minimizing resource consumption in the presence of varying workloads. From a methodological perspective, this is achieved by relying on the innovative idea of exploiting the diversity of different modelling approaches, including analytical models, machine learning and simulations. By employing these modelling techniques in synergy, the Cloud-TM platform can dynamically optimize the underlying distributed Data store over a number of dimensions, including its scale, the strategy it adopts to distribute and replicate Data among the platform's nodes, as well as its replication protocol. © 2014 IEEE.

  • Transactional Auto Scaler: Elastic Scaling of In-Memory Transactional Data Grids
    International Conference on Autonomic Computing, 2012
    Co-Authors: Diego Didona, Paolo Romano, Sebastiano Peluso, Francesco Quaglia
    Abstract:

    In this paper we introduce TAS (Transactional Auto Scaler), a system for automating elastic-scaling of in-memory Transactional Data grids, such as NoSQL Data stores or Distributed Transactional Memories. Applications of TAS range from on-line self-optimization of in-production applications to automatic generation of QoS/cost driven elastic scaling policies, and support for what-if analysis on the scalability of Transactional applications. The key innovation at the core of TAS is a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine-learning. By exploiting these two, classically competing, methodologies in a synergic fashion, TAS achieves the best of the two worlds, namely high extrapolation power and good accuracy even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS via an extensive experimental study based on a fully fledged prototype implementation, integrated with a popular open-source Transactional in-memory Data store (Red Hat's Infinispan), and industry-standard benchmarks generating a breadth of heterogeneous workloads.

  • Automated Workload Characterization in Cloud-Based Transactional Data Grids
    International Parallel and Distributed Processing Symposium, 2012
    Co-Authors: Bruno Ciciani, Diego Didona, Sebastiano Peluso, Francesco Quaglia, Pierangelo Di Sanzo, Roberto Palmieri, Paolo Romano
    Abstract:

    Cloud computing represents a cost-effective paradigm to deploy a wide class of large-scale distributed applications, for which the pay-per-use model combined with automatic resource provisioning promises to reduce the cost of dependability and scalability. However, a key challenge to be addressed to materialize the advantages promised by Cloud computing is the design of effective auto-scaling and self-tuning mechanisms capable of ensuring pre-determined QoS levels at minimum cost in the face of changing workload conditions. This is one of the key goals pursued by Cloud-TM, a recent EU project that is developing a novel, self-optimizing Transactional Data platform for the cloud. In this paper we present the key design choices underlying the development of Cloud-TM's Workload Analyzer (WA), a crucial component of the Cloud-TM platform that is in charge of three key functionalities: aggregating, filtering and correlating the streams of statistical Data gathered from the various nodes of the Cloud-TM platform; building detailed workload profiles of applications deployed on the Cloud-TM platform, characterizing their present and future demands in terms of both logical (i.e., Data) and physical (e.g., hardware-related) resources; and triggering alerts in the presence of violations (or risks of future violations) of pre-determined SLAs.