Rich Internet Applications

The experts below are selected from a list of 4,971 experts worldwide, ranked by the ideXlab platform.

Iosif-Viorel Onut - One of the best experts on this subject based on the ideXlab platform.

  • D-ForenRIA: A Distributed Tool to Reconstruct User Sessions for Rich Internet Applications
    Computer Science and Software Engineering, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Iosif-Viorel Onut
    Abstract:

    Rich internet applications (RIAs), which use JavaScript and Ajax, have become the norm for modern Web applications. However, with RIAs, the reconstruction of user interactions from recorded HTTP logs is a new and challenging problem. We present D-ForenRIA, a distributed tool for session reconstruction for RIAs. D-ForenRIA provides detailed information about user actions, including the DOM elements involved and the user inputs provided. It incorporates novel techniques to order candidate user interactions based on DOM features and on knowledge acquired during session reconstruction. In addition, using several browsers concurrently makes the system scalable for real-world use. The results of our evaluation on several RIAs show that D-ForenRIA can efficiently reconstruct user sessions in practice.
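    The abstract mentions ordering candidate user interactions by DOM features. As a rough illustration of the general idea only (not the authors' algorithm; the element attributes and scoring weights below are invented for the example), a reconstruction tool might rank clickable elements by how well they match the next HTTP request recorded in the trace:

    ```python
    # Illustrative sketch: rank clickable DOM elements against the next
    # recorded HTTP request. Attributes and weights are invented.

    def rank_candidates(dom_elements, next_request_url):
        """Order candidate elements so the most promising action is tried first;
        a correct first guess avoids resetting the browser and replaying."""
        def score(el):
            s = 0
            href = el.get("href", "")
            if href and href in next_request_url:
                s += 2  # the element's target URL appears in the recorded request
            if el.get("onclick"):
                s += 1  # elements with JavaScript handlers are likely user actions
            return s
        return sorted(dom_elements, key=score, reverse=True)

    candidates = [
        {"tag": "a", "href": "/account/transfer", "onclick": "doTransfer()"},
        {"tag": "a", "href": "/help"},
        {"tag": "div"},
    ]
    ranked = rank_candidates(candidates, "https://bank.example/account/transfer")
    # ranked[0] is the /account/transfer link: URL match plus an onclick handler
    ```

    Trying high-scoring candidates first matters because each wrong guess can force the tool to reset the browser to the previous state before trying again.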

  • D-ForenRIA: Distributed Reconstruction of User Interactions for Rich Internet Applications
    The Web Conference, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Akib Mahmud, Iosif-Viorel Onut
    Abstract:

    We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in rich internet applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the hacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the hacker exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts, such as aiding RIA debugging and automated RIA scanning.

  • PDist-RIA Crawler: A Peer-to-Peer Distributed Crawler for Rich Internet Applications
    Web Information Systems Engineering, 2014
    Co-Authors: Seyed M. Mirtaheri, Guy-Vincent Jourdan, Gregor von Bochmann, Iosif-Viorel Onut
    Abstract:

    Crawling rich internet applications (RIAs) is important to ensure their security and accessibility, and to index them for searching. To crawl a RIA, the crawler has to reach every application state and execute every application event. On a large RIA, this operation takes a long time. The previously published GDist-RIA Crawler proposes a distributed architecture to parallelize the task of crawling RIAs and to run the crawl over multiple computers to reduce time. In GDist-RIA Crawler, a centralized unit calculates the next task to execute, and tasks are dispatched to worker nodes for execution. This architecture is not scalable, because the centralized unit is bound to become a bottleneck as the number of nodes increases. This paper extends the GDist-RIA Crawler and proposes a fully peer-to-peer and scalable architecture to crawl RIAs, called the PDist-RIA Crawler. PDist-RIA does not have the same limitations in terms of scalability, while matching the performance of GDist-RIA. We describe a prototype showing the scalability and performance of the proposed solution.
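    A peer-to-peer design like the one described removes the central coordinator by letting every node decide task ownership locally. One common way to do this is to hash state identifiers to peers; the sketch below illustrates that general pattern only (state names and peer count are made up, and this is not the PDist-RIA implementation):

    ```python
    import hashlib

    def owner_of(state_id: str, num_peers: int) -> int:
        """Map a discovered application state to the peer that must crawl it.
        Every peer computes the same answer locally, so no central unit is needed."""
        digest = hashlib.sha256(state_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % num_peers

    # Each peer crawls the states it owns and forwards the rest to their owners.
    states = ["state-login", "state-home", "state-cart", "state-checkout"]
    assignment = {s: owner_of(s, num_peers=4) for s in states}
    ```

    Because the mapping is deterministic, two peers that discover the same state independently route it to the same owner without any coordination.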

  • Indexing Rich Internet Applications Using Components-Based Crawling
    International Conference on Web Engineering, 2014
    Co-Authors: Ali Moosavi, Guy-Vincent Jourdan, Gregor von Bochmann, Salman Hooshmand, Sara Baghbanzadeh, Iosif-Viorel Onut
    Abstract:

    Automatic crawling of rich internet applications (RIAs) is a challenge because client-side code modifies the client dynamically, fetching server-side data asynchronously. Most existing solutions model RIAs as state machines, with DOMs as states and JavaScript event executions as transitions. This approach fails when used with “real-life”, complex RIAs, because the size of the produced model is much too large to be practical. In this paper, we propose a new method to crawl AJAX-based RIAs efficiently by detecting “components”, which are areas of the DOM that are independent from each other, and by crawling each component separately. This leads to a dramatic reduction of the required state space for the model, without loss of content coverage. Our method requires neither prior knowledge of the RIA nor a predefined definition of components. Instead, we infer the components by observing the behavior of the RIA during crawling. Our experimental results show that our method can quickly and completely index industrial RIAs that are simply out of reach for traditional methods.
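    The state-space reduction claimed here is easy to see with a toy calculation: if a page contains several independent widgets, a whole-DOM crawler must visit every combination of their states, while a per-component crawler visits each widget's states separately. The numbers below are illustrative only:

    ```python
    from math import prod

    # Three hypothetical independent widgets: a collapsible menu (2 states),
    # a 3-tab pane, and a 2-page list.
    widget_states = [2, 3, 2]

    whole_dom = prod(widget_states)     # combined DOM states: 2 * 3 * 2 = 12
    per_component = sum(widget_states)  # states crawled separately: 2 + 3 + 2 = 7
    ```

    The gap widens exponentially with the number of independent widgets, which is why a whole-DOM model becomes impractical for complex RIAs while a component model stays linear.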

  • Model-Based Rich Internet Applications Crawling: "Menu" and "Probability" Models
    Journal of Web Engineering, 2014
    Co-Authors: Suryakant Choudhary, Guy-Vincent Jourdan, Seyed M. Mirtaheri, Emre Dinçtürk, Gregor von Bochmann, Iosif-Viorel Onut
    Abstract:

    Strategies for "crawling" Web sites efficiently have been described more than a decade ago. Since then, Web applications have come a long way both in terms of adoption to provide information and services and in terms of technologies to develop them. With the emergence of richer and more advanced technologies such as AJAX, "rich internet applications" (RIAs) have become more interactive, more responsive and generally more user friendly. Unfortunately, we have also lost our ability to crawl them. Building models of applications automatically is important not only for indexing content, but also to do automated testing, automated security assessments, automated accessibility assessment and in general to use software engineering tools. We must regain our ability to efficiently construct models for these RIAs. In this paper, we present two methods, based on "Model-Based Crawling" (MBC) first introduced in [1]: the "menu" model and the "probability" model. These two methods are shown to be more effective at extracting models than previously published methods, and are much simpler to implement than previous models for MBC. A distributed implementation of the probability model is also discussed. We compare these methods and others against a set of experimental and "real" RIAs, showing that in our experiments, these methods find the set of client states faster than other approaches, and often finish the crawl faster as well.
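    The "probability" strategy mentioned above prioritizes events by how likely they are to reveal new client states. The sketch below is a simplified illustration of that idea only (the smoothing constant and the event history are made up; see the paper for the actual model):

    ```python
    def event_priority(new_states_found: int, times_executed: int,
                       p0: float = 0.75) -> float:
        # Smoothed estimate of P(event discovers a new state): start from an
        # optimistic prior p0, then update with the event's observed history.
        return (p0 + new_states_found) / (1 + times_executed)

    # Hypothetical crawl history: (new states found, times executed) per event.
    history = {"expandMenu": (5, 6), "closeDialog": (0, 6), "nextPage": (2, 3)}
    ranked = sorted(history, key=lambda e: event_priority(*history[e]),
                    reverse=True)
    # events that kept finding new states are scheduled first
    ```

    The prior keeps never-executed events attractive, so the crawler still explores them instead of only re-firing historically productive ones.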

Porfirio Tramontana - One of the best experts on this subject based on the ideXlab platform.

  • Reverse Engineering Techniques: From Web Applications to Rich Internet Applications
    Symposium on Web Systems Evolution, 2013
    Co-Authors: Porfirio Tramontana, Domenico Amalfitano, Anna Rita Fasolino
    Abstract:

    Web systems have evolved over the years, from static websites to Web applications and on to Ajax-based rich internet applications (RIAs). Reverse engineering techniques have followed the same evolution. The authors and many other WSE contributors have proposed innovative and effective ideas that provide important advances in the reverse engineering field. In this paper, we show the historical evolution of reverse engineering approaches for Web systems, with particular attention to the ones presented at WSE events.

  • Techniques and Tools for Rich Internet Applications Testing
    2010 12th IEEE International Symposium on Web Systems Evolution (WSE), 2010
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    The user interfaces of rich internet applications (RIAs) offer richer functionality and enhanced usability compared with those of traditional Web applications, obtained by means of a successful combination of heterogeneous technologies, frameworks, and communication models. Due to this increased complexity, dynamicity, and responsiveness, testing the user interface of an RIA is more complex than testing that of a traditional Web application, and it requires effective and efficient testing techniques to be proposed and validated. In this paper, we analyse the most critical open issues in RIA testing automation and propose a classification framework that characterizes existing RIA testing techniques from four different perspectives. Driven by this classification, we present a set of testing techniques that can be used to automatically and semi-automatically generate test cases, execute them, and evaluate their results. Some examples of applying the proposed techniques to the testing of real Ajax applications are also shown in the paper.

  • Experimenting a Reverse Engineering Technique for Modelling the Behaviour of Rich Internet Applications
    International Conference on Software Maintenance, 2009
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    While the rapid and growing diffusion of rich internet applications (RIAs), with their enhanced interactive, responsive and dynamic behaviour, is narrowing the distance between Web applications and desktop applications, the maintenance community is at the same time experiencing the need for effective analysis approaches for understanding and modelling this behaviour adequately. This paper presents a reverse engineering technique, based on dynamic analysis and supported by a tool, that reconstructs a model of the RIA behaviour based on finite state machines. The technique is based on the analysis of the RIA user-interface evolution shown in user sessions, and it exploits user-interface equivalence criteria for abstracting the relevant states and state transitions to be included in the model. To assess the effectiveness and the cost of this technique, an experiment involving four distinct RIAs implemented with AJAX techniques was carried out.

  • Reverse Engineering Finite State Machines from Rich Internet Applications
    Working Conference on Reverse Engineering, 2008
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    In recent years, rich internet applications (RIAs) have emerged as a new generation of Web applications offering greater usability and interactivity than traditional ones. At the same time, RIAs introduce new issues and challenges in all the Web application lifecycle activities. As an example, a key problem with RIAs consists of defining suitable software models for representing them and of validating reverse engineering techniques for obtaining these models effectively. This paper presents a reverse engineering approach for abstracting finite state machines that represent the client-side behaviour offered by RIAs. The approach is based on dynamic analysis of the RIA and employs clustering techniques to cope with the state explosion of the state machine. A case study illustrated in the paper shows the results of a preliminary experiment in which the proposed process was executed successfully to reverse engineer the behaviour of an existing RIA.
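    Abstracting an FSM from observed UI states hinges on an equivalence criterion that merges concrete DOM snapshots into abstract states. The sketch below shows the general pattern with one possible criterion, keying states by their set of enabled event handlers; the criterion, snapshots and trace are illustrative, not necessarily the authors':

    ```python
    # Merge concrete UI states whose DOMs expose the same set of enabled
    # event handlers into a single abstract FSM state (illustrative criterion).

    def abstract_state(dom_snapshot):
        """Key a concrete DOM snapshot by its set of active event handlers."""
        return frozenset(dom_snapshot["handlers"])

    def build_fsm(observed_transitions):
        """Collapse a trace of (source DOM, event, target DOM) observations
        into a set of transitions between abstract states."""
        fsm = set()
        for src, event, dst in observed_transitions:
            fsm.add((abstract_state(src), event, abstract_state(dst)))
        return fsm

    # Hypothetical trace from one user session.
    trace = [
        ({"handlers": ["login"]}, "click-login",
         {"handlers": ["logout", "search"]}),
        ({"handlers": ["logout", "search"]}, "search",
         {"handlers": ["logout", "search"]}),
    ]
    fsm = build_fsm(trace)
    # two abstract states, two abstract transitions (one of them a self-loop)
    ```

    Choosing a coarser or finer key is exactly the lever that trades model size against precision, which is what the clustering step in the paper addresses.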

Guy-Vincent Jourdan - One of the best experts on this subject based on the ideXlab platform.

  • D-ForenRIA: A Distributed Tool to Reconstruct User Sessions for Rich Internet Applications
    Computer Science and Software Engineering, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Iosif-Viorel Onut
    Abstract:

    Rich internet applications (RIAs), which use JavaScript and Ajax, have become the norm for modern Web applications. However, with RIAs, the reconstruction of user interactions from recorded HTTP logs is a new and challenging problem. We present D-ForenRIA, a distributed tool for session reconstruction for RIAs. D-ForenRIA provides detailed information about user actions, including the DOM elements involved and the user inputs provided. It incorporates novel techniques to order candidate user interactions based on DOM features and on knowledge acquired during session reconstruction. In addition, using several browsers concurrently makes the system scalable for real-world use. The results of our evaluation on several RIAs show that D-ForenRIA can efficiently reconstruct user sessions in practice.

  • D-ForenRIA: Distributed Reconstruction of User Interactions for Rich Internet Applications
    The Web Conference, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Akib Mahmud, Iosif-Viorel Onut
    Abstract:

    We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in rich internet applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the hacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the hacker exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts, such as aiding RIA debugging and automated RIA scanning.

  • Workshop on the Application of Security and Testing to Rich Internet Applications
    Computer Science and Software Engineering, 2015
    Co-Authors: Guy-Vincent Jourdan, Gregor von Bochmann, Ettore Merlo, James Miller, Iosif-Viorel Onut
    Abstract:

    Web applications and service-oriented architectures represent an increasing part of modern software and use advanced techniques such as HTML5, CSS3, JavaScript, Ajax, WebSockets and client-side data. These new applications, sometimes called "rich internet applications" (RIAs), are now used routinely, either directly or deployed as cloud services. In addition, more and more of these applications and services are accessed from mobile devices; meanwhile, wireless connected devices are rapidly growing in number as they connect to the "internet of things".

  • PDist-RIA Crawler: A Peer-to-Peer Distributed Crawler for Rich Internet Applications
    Web Information Systems Engineering, 2014
    Co-Authors: Seyed M. Mirtaheri, Guy-Vincent Jourdan, Gregor von Bochmann, Iosif-Viorel Onut
    Abstract:

    Crawling rich internet applications (RIAs) is important to ensure their security and accessibility, and to index them for searching. To crawl a RIA, the crawler has to reach every application state and execute every application event. On a large RIA, this operation takes a long time. The previously published GDist-RIA Crawler proposes a distributed architecture to parallelize the task of crawling RIAs and to run the crawl over multiple computers to reduce time. In GDist-RIA Crawler, a centralized unit calculates the next task to execute, and tasks are dispatched to worker nodes for execution. This architecture is not scalable, because the centralized unit is bound to become a bottleneck as the number of nodes increases. This paper extends the GDist-RIA Crawler and proposes a fully peer-to-peer and scalable architecture to crawl RIAs, called the PDist-RIA Crawler. PDist-RIA does not have the same limitations in terms of scalability, while matching the performance of GDist-RIA. We describe a prototype showing the scalability and performance of the proposed solution.

  • Indexing Rich Internet Applications Using Components-Based Crawling
    International Conference on Web Engineering, 2014
    Co-Authors: Ali Moosavi, Guy-Vincent Jourdan, Gregor von Bochmann, Salman Hooshmand, Sara Baghbanzadeh, Iosif-Viorel Onut
    Abstract:

    Automatic crawling of rich internet applications (RIAs) is a challenge because client-side code modifies the client dynamically, fetching server-side data asynchronously. Most existing solutions model RIAs as state machines, with DOMs as states and JavaScript event executions as transitions. This approach fails when used with “real-life”, complex RIAs, because the size of the produced model is much too large to be practical. In this paper, we propose a new method to crawl AJAX-based RIAs efficiently by detecting “components”, which are areas of the DOM that are independent from each other, and by crawling each component separately. This leads to a dramatic reduction of the required state space for the model, without loss of content coverage. Our method requires neither prior knowledge of the RIA nor a predefined definition of components. Instead, we infer the components by observing the behavior of the RIA during crawling. Our experimental results show that our method can quickly and completely index industrial RIAs that are simply out of reach for traditional methods.

Gregor von Bochmann - One of the best experts on this subject based on the ideXlab platform.

  • D-ForenRIA: A Distributed Tool to Reconstruct User Sessions for Rich Internet Applications
    Computer Science and Software Engineering, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Iosif-Viorel Onut
    Abstract:

    Rich internet applications (RIAs), which use JavaScript and Ajax, have become the norm for modern Web applications. However, with RIAs, the reconstruction of user interactions from recorded HTTP logs is a new and challenging problem. We present D-ForenRIA, a distributed tool for session reconstruction for RIAs. D-ForenRIA provides detailed information about user actions, including the DOM elements involved and the user inputs provided. It incorporates novel techniques to order candidate user interactions based on DOM features and on knowledge acquired during session reconstruction. In addition, using several browsers concurrently makes the system scalable for real-world use. The results of our evaluation on several RIAs show that D-ForenRIA can efficiently reconstruct user sessions in practice.

  • D-ForenRIA: Distributed Reconstruction of User Interactions for Rich Internet Applications
    The Web Conference, 2016
    Co-Authors: Salman Hooshmand, Muhammad Faheem, Guy-Vincent Jourdan, Gregor von Bochmann, Russell Couturier, Akib Mahmud, Iosif-Viorel Onut
    Abstract:

    We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in rich internet applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user on each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injections, XSS...) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the hacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the hacker exploited for the SQL-injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts, such as aiding RIA debugging and automated RIA scanning.

  • Workshop on the Application of Security and Testing to Rich Internet Applications
    Computer Science and Software Engineering, 2015
    Co-Authors: Guy-Vincent Jourdan, Gregor von Bochmann, Ettore Merlo, James Miller, Iosif-Viorel Onut
    Abstract:

    Web applications and service-oriented architectures represent an increasing part of modern software and use advanced techniques such as HTML5, CSS3, JavaScript, Ajax, WebSockets and client-side data. These new applications, sometimes called "rich internet applications" (RIAs), are now used routinely, either directly or deployed as cloud services. In addition, more and more of these applications and services are accessed from mobile devices; meanwhile, wireless connected devices are rapidly growing in number as they connect to the "internet of things".

  • PDist-RIA Crawler: A Peer-to-Peer Distributed Crawler for Rich Internet Applications
    Web Information Systems Engineering, 2014
    Co-Authors: Seyed M. Mirtaheri, Guy-Vincent Jourdan, Gregor von Bochmann, Iosif-Viorel Onut
    Abstract:

    Crawling rich internet applications (RIAs) is important to ensure their security and accessibility, and to index them for searching. To crawl a RIA, the crawler has to reach every application state and execute every application event. On a large RIA, this operation takes a long time. The previously published GDist-RIA Crawler proposes a distributed architecture to parallelize the task of crawling RIAs and to run the crawl over multiple computers to reduce time. In GDist-RIA Crawler, a centralized unit calculates the next task to execute, and tasks are dispatched to worker nodes for execution. This architecture is not scalable, because the centralized unit is bound to become a bottleneck as the number of nodes increases. This paper extends the GDist-RIA Crawler and proposes a fully peer-to-peer and scalable architecture to crawl RIAs, called the PDist-RIA Crawler. PDist-RIA does not have the same limitations in terms of scalability, while matching the performance of GDist-RIA. We describe a prototype showing the scalability and performance of the proposed solution.

  • Indexing Rich Internet Applications Using Components-Based Crawling
    International Conference on Web Engineering, 2014
    Co-Authors: Ali Moosavi, Guy-Vincent Jourdan, Gregor von Bochmann, Salman Hooshmand, Sara Baghbanzadeh, Iosif-Viorel Onut
    Abstract:

    Automatic crawling of rich internet applications (RIAs) is a challenge because client-side code modifies the client dynamically, fetching server-side data asynchronously. Most existing solutions model RIAs as state machines, with DOMs as states and JavaScript event executions as transitions. This approach fails when used with “real-life”, complex RIAs, because the size of the produced model is much too large to be practical. In this paper, we propose a new method to crawl AJAX-based RIAs efficiently by detecting “components”, which are areas of the DOM that are independent from each other, and by crawling each component separately. This leads to a dramatic reduction of the required state space for the model, without loss of content coverage. Our method requires neither prior knowledge of the RIA nor a predefined definition of components. Instead, we infer the components by observing the behavior of the RIA during crawling. Our experimental results show that our method can quickly and completely index industrial RIAs that are simply out of reach for traditional methods.

Domenico Amalfitano - One of the best experts on this subject based on the ideXlab platform.

  • Reverse Engineering Techniques: From Web Applications to Rich Internet Applications
    Symposium on Web Systems Evolution, 2013
    Co-Authors: Porfirio Tramontana, Domenico Amalfitano, Anna Rita Fasolino
    Abstract:

    Web systems have evolved over the years, from static websites to Web applications and on to Ajax-based rich internet applications (RIAs). Reverse engineering techniques have followed the same evolution. The authors and many other WSE contributors have proposed innovative and effective ideas that provide important advances in the reverse engineering field. In this paper, we show the historical evolution of reverse engineering approaches for Web systems, with particular attention to the ones presented at WSE events.

  • Techniques and Tools for Rich Internet Applications Testing
    2010 12th IEEE International Symposium on Web Systems Evolution (WSE), 2010
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    The user interfaces of rich internet applications (RIAs) offer richer functionality and enhanced usability compared with those of traditional Web applications, obtained by means of a successful combination of heterogeneous technologies, frameworks, and communication models. Due to this increased complexity, dynamicity, and responsiveness, testing the user interface of an RIA is more complex than testing that of a traditional Web application, and it requires effective and efficient testing techniques to be proposed and validated. In this paper, we analyse the most critical open issues in RIA testing automation and propose a classification framework that characterizes existing RIA testing techniques from four different perspectives. Driven by this classification, we present a set of testing techniques that can be used to automatically and semi-automatically generate test cases, execute them, and evaluate their results. Some examples of applying the proposed techniques to the testing of real Ajax applications are also shown in the paper.

  • Experimenting a Reverse Engineering Technique for Modelling the Behaviour of Rich Internet Applications
    International Conference on Software Maintenance, 2009
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    While the rapid and growing diffusion of rich internet applications (RIAs), with their enhanced interactive, responsive and dynamic behaviour, is narrowing the distance between Web applications and desktop applications, the maintenance community is at the same time experiencing the need for effective analysis approaches for understanding and modelling this behaviour adequately. This paper presents a reverse engineering technique, based on dynamic analysis and supported by a tool, that reconstructs a model of the RIA behaviour based on finite state machines. The technique is based on the analysis of the RIA user-interface evolution shown in user sessions, and it exploits user-interface equivalence criteria for abstracting the relevant states and state transitions to be included in the model. To assess the effectiveness and the cost of this technique, an experiment involving four distinct RIAs implemented with AJAX techniques was carried out.

  • Reverse Engineering Finite State Machines from Rich Internet Applications
    Working Conference on Reverse Engineering, 2008
    Co-Authors: Domenico Amalfitano, Anna Rita Fasolino, Porfirio Tramontana
    Abstract:

    In recent years, rich internet applications (RIAs) have emerged as a new generation of Web applications offering greater usability and interactivity than traditional ones. At the same time, RIAs introduce new issues and challenges in all the Web application lifecycle activities. As an example, a key problem with RIAs consists of defining suitable software models for representing them and of validating reverse engineering techniques for obtaining these models effectively. This paper presents a reverse engineering approach for abstracting finite state machines that represent the client-side behaviour offered by RIAs. The approach is based on dynamic analysis of the RIA and employs clustering techniques to cope with the state explosion of the state machine. A case study illustrated in the paper shows the results of a preliminary experiment in which the proposed process was executed successfully to reverse engineer the behaviour of an existing RIA.