Test Automation

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 30,468 Experts worldwide, ranked by the ideXlab platform

Rajesh Subramanyan - One of the best experts on this subject based on the ideXlab platform.

  • COMPSAC (1) - Test Automation in Practice
    31st Annual International Computer Software and Applications Conference - Vol. 1- (COMPSAC 2007), 2007
    Co-Authors: Rajesh Subramanyan
    Abstract:

    Developing and implementing a successful Test Automation strategy can provide enormous benefit for a software project. However, automating Tests is neither cheap nor easy. It does not replace the need for manual Testing, nor does it allow an organization to "down-size" its Testing group. Automated Testing can be made cost-effective if best practices are applied to the process. The goal of this panel is to discuss techniques that can facilitate the adoption of Test Automation in practice.

  • GI Jahrestagung (2) - Industrial Requirements to Benefit from Test Automation Tools for GUI Testing.
    2007
    Co-Authors: Christof J. Budnik, Rajesh Subramanyan, Marlon Vieira
    Abstract:

    With the growing complexity of software systems, Test effort consumes increasing amounts of time and, correspondingly, money. Testing costs may be reduced without compromising software quality by minimizing Test sets through optimal selection of Test cases and by introducing more powerful Test tools; attaining a high level of Test Automation is the objective. Several problems, however, make the introduction of Test Automation in industry difficult: solution providers and tool developers often do not understand industry's requirements for Test Automation, and without that understanding, introducing Test Automation can become counterproductive. This paper points out essential demands on GUI Test tools for industrial purposes.

Sigrid Eldh - One of the best experts on this subject based on the ideXlab platform.

  • A self-assessment Instrument for assessing Test Automation maturity
    Proceedings of the Evaluation and Assessment on Software Engineering, 2019
    Co-Authors: Yuqing Wang, Kristian Wiklund, Sigrid Eldh, Mika V. Mäntylä, Jouni Markkula, Tatu Kairi, Päivi Raulamo-jurvanen, Antti Haukinen
    Abstract:

    Test Automation is important in the software industry, but self-assessment instruments for assessing its maturity are insufficient. The two objectives of this study are to synthesize what an organization should focus on to assess its Test Automation, and to develop a self-assessment instrument (a survey) for assessing Test Automation maturity and evaluate it scientifically. We carried out the study in four stages. First, a literature review of 25 sources was conducted. Second, the initial instrument was developed. Third, seven experts from five companies evaluated the initial instrument; the Content Validity Index and Cognitive Interview methods were used. Fourth, we revised the developed instrument. Our contributions are as follows: (a) we collected practices mapped into 15 key areas (KAs) that indicate where an organization should focus to assess its Test Automation; (b) we developed and evaluated a self-assessment instrument for assessing Test Automation maturity; (c) we discuss important topics, such as response bias, that threaten self-assessment instruments. Our results help companies and researchers understand and improve Test Automation practices and processes.
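    The Content Validity Index used in the evaluation stage above has a standard computation: the item-level CVI (I-CVI) is the proportion of experts who rate an item as relevant (3 or 4 on the common 4-point relevance scale), and the scale-level average (S-CVI/Ave) is the mean of the I-CVIs. A minimal sketch with made-up ratings for hypothetical survey items — the study's actual items and ratings are not given here:

    ```python
    # Sketch: item-level and scale-level Content Validity Index.
    # Ratings use a 4-point relevance scale; an item "counts" for an
    # expert when that expert rates it 3 or 4. All data is invented.

    def item_cvi(ratings):
        """I-CVI: proportion of experts rating the item 3 or 4."""
        relevant = sum(1 for r in ratings if r >= 3)
        return relevant / len(ratings)

    def scale_cvi_avg(items):
        """S-CVI/Ave: mean of the I-CVIs across all survey items."""
        cvis = [item_cvi(r) for r in items.values()]
        return sum(cvis) / len(cvis)

    # Seven hypothetical expert ratings per item, mirroring the study's
    # panel size (seven experts from five companies).
    ratings = {
        "Q1: test tooling":      [4, 4, 3, 4, 3, 4, 4],
        "Q2: test environments": [3, 4, 2, 4, 3, 3, 4],
        "Q3: measurement":       [4, 3, 3, 2, 2, 4, 3],
    }

    print(item_cvi(ratings["Q1: test tooling"]))   # 1.0 — all 7 rated 3 or 4
    print(round(scale_cvi_avg(ratings), 2))        # 0.86
    ```

    A common acceptance rule is to keep items with I-CVI at or above roughly 0.78 for panels of this size and to revise or drop the rest.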

  • Summary of the 1st IEEE Workshop on the Next Level of Test Automation (NEXTA 2018)
    ACM Sigsoft Software Engineering Notes, 2019
    Co-Authors: Markus Borg, Adnan Causevic, Serge Demeyer, Sigrid Eldh
    Abstract:

    NEXTA is a new workshop on Test Automation that provides a meeting point for academic researchers and industry practitioners. While Test Automation already is an established practice in industry, t...

  • Impediments for software Test Automation: A systematic literature review
    Software Testing Verification and Reliability, 2017
    Co-Authors: Kristian Wiklund, Sigrid Eldh, Daniel Sundmark, Kristina Lundqvist
    Abstract:

    Automated software Testing is a critical enabler for modern software development, where rapid feedback on product quality is expected. For Testing to work well, it is important that impediments related to Test Automation are prevented and removed quickly. An enabling factor for any type of improvement is understanding the nature of what is to be improved. To contribute to this understanding, we have performed a systematic literature review of reported impediments related to software Test Automation. In this paper, we present the results of that review: the list of identified publications, a categorization of the identified impediments, and a qualitative discussion of the impediments, proposing a socio-technical system model of the use and implementation of Test Automation.

  • ICST Workshops - Towards a Test Automation Improvement Model (TAIM)
    2014 IEEE Seventh International Conference on Software Testing Verification and Validation Workshops, 2014
    Co-Authors: Sigrid Eldh, Kenneth Andersson, Andreas Ermedahl, Kristian Wiklund
    Abstract:

    In agile software development, industries are becoming increasingly dependent on automated Test suites; thus, Test code quality is an important factor in overall system quality and maintainability. We propose a Test Automation Improvement Model (TAIM) defining ten key areas and one general area. Each area should be based on measurements, filling a gap in existing assessment models. The main contribution of this paper is to provide the outline of TAIM and to present our intermediate results and some initial metrics to support the model. Our initial target has been the key area addressing the implementation and structure of Test code. We have used common static measurements to compare the Test code and the source code of a unit Test Automation suite that is part of a large, complex telecom subsystem. Our intermediate results show that it is possible to outline such an improvement model, and our metrics approach seems promising. However, many problems remain to be solved before the model is generic enough to aid Test Automation evolution and provide comparable measurements. TAIM can thus be viewed as a framework to guide research on metrics for Test Automation artifacts.
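    As an illustration of what "common static measurements" over Test code and source code might look like, here is a minimal sketch. The metric set (LOC, comment ratio, average line length) and the sample files are illustrative assumptions, not the paper's actual measurements:

    ```python
    # Sketch: simple static metrics computed identically over production
    # code and test code, so the two can be compared side by side.
    # Comment detection here is Python-style ('#'); real tooling would
    # use a proper parser for the language under measurement.

    def static_metrics(source: str) -> dict:
        lines = [ln.strip() for ln in source.splitlines()]
        code = [ln for ln in lines if ln and not ln.startswith("#")]
        comments = [ln for ln in lines if ln.startswith("#")]
        return {
            "loc": len(code),
            "comment_ratio": len(comments) / max(len(code), 1),
            "avg_line_len": sum(map(len, code)) / max(len(code), 1),
        }

    production = "def add(a, b):\n    # sum two numbers\n    return a + b\n"
    test_code = "def test_add():\n    assert add(1, 2) == 3\n"

    print(static_metrics(production)["loc"])            # 2
    print(static_metrics(test_code)["comment_ratio"])   # 0.0
    ```

    Computing the same metrics for both code bases is what makes a gap visible — for example, test suites whose comment ratio or structure lags far behind the production code they exercise.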

  • Technical Debt in Test Automation
    2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, 2012
    Co-Authors: Kristian Wiklund, Sigrid Eldh, Daniel Sundmark, Karsten Lundqvist
    Abstract:

    Automated Test execution is one of the more popular and readily available strategies for minimizing the cost of software Testing, and it is becoming one of the central concepts in modern software development as methods such as Test-driven development gain popularity. Published studies on Test Automation indicate that the maintenance and development of Test Automation tools commonly encounter problems due to unforeseen issues. To investigate this further, we performed a case study on a telecommunication subsystem to identify factors that contribute to inefficiencies in the use, maintenance, and development of the automated Testing performed within the scope of responsibility of a software design team. A qualitative evaluation of the findings indicates that the main areas for improvement in this case are interaction design and general software design principles, as applied to Test execution system development.

Kristian Wiklund - One of the best experts on this subject based on the ideXlab platform.

  • ICST Workshops - The Next Level of Test Automation (NEXTA 2020)
    2020 IEEE International Conference on Software Testing Verification and Validation Workshops (ICSTW), 2020
    Co-Authors: Serge Demeyer, Kristian Wiklund, Adnan Causevic, Pasqualina Potena
    Abstract:

    Test Automation has been an acknowledged software engineering best practice for years. However, the topic involves more than the repeated execution of Test cases that often comes first to mind. Simply running Test cases using a unit Testing framework is no longer enough for Test Automation to keep up with the ever-shorter release cycles driven by continuous deployment and technological innovations such as microservices and DevOps pipelines. Now Test Automation needs to rise to the next level by going beyond mere Test execution.

  • A self-assessment Instrument for assessing Test Automation maturity
    Proceedings of the Evaluation and Assessment on Software Engineering, 2019
    Co-Authors: Yuqing Wang, Kristian Wiklund, Sigrid Eldh, Mika V. Mäntylä, Jouni Markkula, Tatu Kairi, Päivi Raulamo-jurvanen, Antti Haukinen
    Abstract:

    Test Automation is important in the software industry, but self-assessment instruments for assessing its maturity are insufficient. The two objectives of this study are to synthesize what an organization should focus on to assess its Test Automation, and to develop a self-assessment instrument (a survey) for assessing Test Automation maturity and evaluate it scientifically. We carried out the study in four stages. First, a literature review of 25 sources was conducted. Second, the initial instrument was developed. Third, seven experts from five companies evaluated the initial instrument; the Content Validity Index and Cognitive Interview methods were used. Fourth, we revised the developed instrument. Our contributions are as follows: (a) we collected practices mapped into 15 key areas (KAs) that indicate where an organization should focus to assess its Test Automation; (b) we developed and evaluated a self-assessment instrument for assessing Test Automation maturity; (c) we discuss important topics, such as response bias, that threaten self-assessment instruments. Our results help companies and researchers understand and improve Test Automation practices and processes.

  • ICST Workshops - The Next Level of Test Automation: What About the Users?
    2018 IEEE International Conference on Software Testing Verification and Validation Workshops (ICSTW), 2018
    Co-Authors: Kristian Wiklund, Monika Wiklund
    Abstract:

    Test Automation is an enabler for effective and efficient software development and is widely embraced thanks to agile practices such as continuous integration. Based on our own experience and observations, we conjecture that the majority of Test Automation today is script based and focused on execution, where Testers explicitly implement "how" to do the Test. This is a labor-intensive and potentially error-prone endeavor that has the potential to become a significant bottleneck for productivity and speed in software development. Hence, we consider it clear that "the next level of Test Automation" will to a greater extent be about "what" to Test rather than "how" to Test it, which will require more advanced Test tools and Test systems than today's relatively simple mechanisms. We propose that this will result in an increased dependency on tools developed outside the user organizations, partly due to the cost and complexity of developing advanced tools, and partly due to the competence needed to understand and implement the necessary algorithms, which simply will not be available in most development projects. In this paper, we draw on our experience from using and developing Test Automation tools to propose a number of challenges that we believe need to be considered when introducing new approaches to Test Automation in industry.

  • Impediments for software Test Automation: A systematic literature review
    Software Testing Verification and Reliability, 2017
    Co-Authors: Kristian Wiklund, Sigrid Eldh, Daniel Sundmark, Kristina Lundqvist
    Abstract:

    Automated software Testing is a critical enabler for modern software development, where rapid feedback on product quality is expected. For Testing to work well, it is important that impediments related to Test Automation are prevented and removed quickly. An enabling factor for any type of improvement is understanding the nature of what is to be improved. To contribute to this understanding, we have performed a systematic literature review of reported impediments related to software Test Automation. In this paper, we present the results of that review: the list of identified publications, a categorization of the identified impediments, and a qualitative discussion of the impediments, proposing a socio-technical system model of the use and implementation of Test Automation.

  • ICST Workshops - Towards a Test Automation Improvement Model (TAIM)
    2014 IEEE Seventh International Conference on Software Testing Verification and Validation Workshops, 2014
    Co-Authors: Sigrid Eldh, Kenneth Andersson, Andreas Ermedahl, Kristian Wiklund
    Abstract:

    In agile software development, industries are becoming increasingly dependent on automated Test suites; thus, Test code quality is an important factor in overall system quality and maintainability. We propose a Test Automation Improvement Model (TAIM) defining ten key areas and one general area. Each area should be based on measurements, filling a gap in existing assessment models. The main contribution of this paper is to provide the outline of TAIM and to present our intermediate results and some initial metrics to support the model. Our initial target has been the key area addressing the implementation and structure of Test code. We have used common static measurements to compare the Test code and the source code of a unit Test Automation suite that is part of a large, complex telecom subsystem. Our intermediate results show that it is possible to outline such an improvement model, and our metrics approach seems promising. However, many problems remain to be solved before the model is generic enough to aid Test Automation evolution and provide comparable measurements. TAIM can thus be viewed as a framework to guide research on metrics for Test Automation artifacts.

Suresh Thummalapenta - One of the best experts on this subject based on the ideXlab platform.

  • Efficient and change-resilient Test Automation: An industrial case study
    Proceedings - International Conference on Software Engineering, 2013
    Co-Authors: Suresh Thummalapenta, Sivagami Gnanasundaram, Deepa D. Nagaraj, Pranavadatta Devaki, Sampath Kumar, Saurabh Sinha, Satish Chandra, Sathish Kumar
    Abstract:

    Test Automation, which involves the conversion of manual Test cases to executable Test scripts, is necessary to carry out efficient regression Testing of GUI-based applications. However, Test Automation takes significant investment of time and skilled effort. Moreover, it is not a one-time investment: as the application or its environment evolves, Test scripts demand continuous patching. Thus, it is challenging to perform Test Automation in a cost-effective manner. At IBM, we developed a tool, called ATA [1], [2], to meet this challenge. ATA has novel features that are designed to lower the cost of initial Test Automation significantly. Moreover, ATA has the ability to patch scripts automatically for certain types of application or environment changes. How well does ATA meet its objectives in the real world? In this paper, we present a detailed case study in the context of a challenging production environment: an enterprise web application that has over 6,500 manual Test cases, comes in two variants, evolves frequently, and needs to be Tested on multiple browsers in time-constrained and resource-constrained regression cycles. We measured how well ATA improved the efficiency of initial Automation. We also evaluated the effectiveness of ATA's change resilience along multiple dimensions: application versions, browsers, and browser versions. Our study highlights several lessons for Test-Automation practitioners as well as open research problems in Test Automation.

  • ICSE - Efficient and change-resilient Test Automation: an industrial case study
    2013 35th International Conference on Software Engineering (ICSE), 2013
    Co-Authors: Suresh Thummalapenta, Sivagami Gnanasundaram, Deepa D. Nagaraj, Pranavadatta Devaki, Sampath Kumar, Saurabh Sinha, Satish Chandra, Sathish Kumar
    Abstract:

    Test Automation, which involves the conversion of manual Test cases to executable Test scripts, is necessary to carry out efficient regression Testing of GUI-based applications. However, Test Automation takes significant investment of time and skilled effort. Moreover, it is not a one-time investment: as the application or its environment evolves, Test scripts demand continuous patching. Thus, it is challenging to perform Test Automation in a cost-effective manner. At IBM, we developed a tool, called ATA [1], [2], to meet this challenge. ATA has novel features that are designed to lower the cost of initial Test Automation significantly. Moreover, ATA has the ability to patch scripts automatically for certain types of application or environment changes. How well does ATA meet its objectives in the real world? In this paper, we present a detailed case study in the context of a challenging production environment: an enterprise web application that has over 6500 manual Test cases, comes in two variants, evolves frequently, and needs to be Tested on multiple browsers in time-constrained and resource-constrained regression cycles. We measured how well ATA improved the efficiency in initial Automation. We also evaluated the effectiveness of ATA's change-resilience along multiple dimensions: application versions, browsers, and browser versions. Our study highlights several lessons for Test-Automation practitioners as well as open research problems in Test Automation.

  • Automating Test Automation
    International Conference on Software Engineering, 2012
    Co-Authors: Suresh Thummalapenta, Saurabh Sinha, Nimit Singhania, Satish Chandra
    Abstract:

    Mention "Test case", and it conjures up the image of a script or a program that exercises a system under Test. In industrial practice, however, Test cases often start out as steps described in natural language. These are essentially directions a human Tester needs to follow to interact with an application, exercising a given scenario. Since Tests need to be executed repeatedly, such manual Tests then have to go through Test Automation to create scripts or programs out of them. Test Automation can be expensive in programmer time. We describe a technique to automate Test Automation. The input to our technique is a sequence of steps written in natural language, and the output is a sequence of procedure calls with accompanying parameters that can drive the application without human intervention. The technique is based on viewing the natural language Test steps as consisting of segments that describe actions on targets, except that there can be ambiguity in the action itself, in the order in which segments occur, and in the specification of the target of the action. The technique resolves this ambiguity by backtracking until it can synthesize a successful sequence of calls. We present an evaluation of our technique on professionally created manual Test cases for two open-source web applications as well as a proprietary enterprise application. Our technique could automate over 82% of the steps contained in these Test cases with no human intervention, indicating that the technique can reduce the cost of Test Automation quite effectively.

  • ICSE - Automating Test Automation
    2012 34th International Conference on Software Engineering (ICSE), 2012
    Co-Authors: Suresh Thummalapenta, Saurabh Sinha, Nimit Singhania, Satish Chandra
    Abstract:

    Mention “Test case”, and it conjures up the image of a script or a program that exercises a system under Test. In industrial practice, however, Test cases often start out as steps described in natural language. These are essentially directions a human Tester needs to follow to interact with an application, exercising a given scenario. Since Tests need to be executed repeatedly, such manual Tests then have to go through Test Automation to create scripts or programs out of them. Test Automation can be expensive in programmer time. We describe a technique to automate Test Automation. The input to our technique is a sequence of steps written in natural language, and the output is a sequence of procedure calls with accompanying parameters that can drive the application without human intervention. The technique is based on looking at the natural language Test steps as consisting of segments that describe actions on targets, except that there can be ambiguity in identifying segments, in identifying the action in a segment, as well as in the specification of the target of the action. The technique resolves this ambiguity by backtracking, until it can synthesize a successful sequence of calls. We present an evaluation of our technique on professionally created manual Test cases for two open-source web applications as well as a proprietary enterprise application. Our technique could automate over 82% of the steps contained in these Test cases with no human intervention, indicating that the technique can reduce the cost of Test Automation quite effectively.
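    The backtracking idea in the abstract above can be sketched as follows. This is an illustrative toy, not IBM's ATA: the candidate interpretations, the UI model, and the `synthesize` function are all hypothetical stand-ins for what the real tool derives from the application and the natural-language steps.

    ```python
    # Sketch: each natural-language step has several candidate
    # (action, target) interpretations; we pick one per step and
    # backtrack whenever a choice cannot be validated, until the whole
    # sequence of calls succeeds.

    # Hypothetical model of the targets the application actually exposes.
    UI_TARGETS = {"username_field", "password_field", "login_button"}

    # Hypothetical ambiguous interpretations per natural-language step.
    CANDIDATES = {
        "type admin into the user box": [("type", "user_box"),
                                         ("type", "username_field")],
        "press login":                  [("click", "login_link"),
                                         ("click", "login_button")],
    }

    def valid(call):
        """A real tool would attempt the action against the live app."""
        _action, target = call
        return target in UI_TARGETS

    def synthesize(steps):
        """Return the first fully valid call sequence, or None."""
        if not steps:
            return []
        head, *rest = steps
        for call in CANDIDATES[head]:
            if valid(call):
                tail = synthesize(rest)
                if tail is not None:
                    return [call] + tail   # this choice worked end to end
        return None                        # backtrack: no candidate fit

    script = synthesize(["type admin into the user box", "press login"])
    print(script)
    # [('type', 'username_field'), ('click', 'login_button')]
    ```

    The key property is that a bad early interpretation ("user_box") is discarded once it fails validation, which is how ambiguity in actions, segment order, and targets gets resolved without human intervention.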

Sam Malek - One of the best experts on this subject based on the ideXlab platform.

  • Test Automation in Open-Source Android Apps: A Large-Scale Empirical Study
    Proceedings of the 35th IEEE ACM International Conference on Automated Software Engineering, 2020
    Co-Authors: Jun-wei Lin, Navid Salehnamadi, Sam Malek
    Abstract:

    Automated Testing of mobile apps has received significant attention in recent years from researchers and practitioners alike. In this paper, we report on the largest empirical study to date aimed at understanding the Test Automation culture prevalent among mobile app developers. We systematically examined more than 3.5 million repositories on GitHub and identified more than 12,000 non-trivial, real-world Android apps. We then analyzed these apps to investigate (1) the prevalence of Test Automation adoption; (2) the working habits of mobile app developers with regard to automated Testing; and (3) the correlation between the adoption of Test Automation and the popularity of projects. Among other findings, we found that (1) only 8% of the mobile app development projects leverage automated Testing practices; (2) developers tend to follow the same Test Automation practices across projects; and (3) popular projects, measured in terms of the number of contributors, stars, and forks on GitHub, are more likely to adopt Test Automation practices. To understand the rationale behind our observations, we further conducted a survey of 148 professional and experienced developers contributing to the subject apps. Our findings shed light on current practices and future research directions pertaining to Test Automation for mobile app development.
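    A study like the one above needs a repository-level classifier for "does this project adopt Test Automation?". Here is a toy sketch of such a heuristic. The marker directories and Gradle dependency names are plausible signals for Android projects, not the paper's actual detection criteria:

    ```python
    # Sketch: classify a checked-out Android repository as adopting test
    # automation by looking for conventional test source sets and for
    # testing libraries declared in the Gradle build file.

    TEST_DIRS = ("src/test", "src/androidTest")
    TEST_DEPS = ("junit", "espresso", "robolectric", "mockito")

    def uses_test_automation(repo_files: dict) -> bool:
        """repo_files maps file paths to contents for one repository."""
        has_test_dir = any(path.startswith(TEST_DIRS)
                           for path in repo_files)
        gradle = repo_files.get("app/build.gradle", "")
        has_test_dep = any(dep in gradle for dep in TEST_DEPS)
        return has_test_dir or has_test_dep

    repo = {
        "app/build.gradle":
            "dependencies { testImplementation 'junit:junit:4.13' }",
        "src/main/java/Main.java": "class Main {}",
    }
    print(uses_test_automation(repo))  # True — junit dependency found
    ```

    At the scale reported (3.5 million repositories), a cheap textual heuristic like this is the practical first pass, with the survey of developers serving as the qualitative follow-up.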