Production Code

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 324 experts worldwide, ranked by the ideXlab platform.

Alberto Bacchelli - One of the best experts on this subject based on the ideXlab platform.

  • Mock objects for testing java systems
    Empirical Software Engineering, 2019
    Co-Authors: Davide Spadini, Magiel Bruntink, Mauricio Aniche, Alberto Bacchelli
    Abstract:

    When testing software artifacts that have several dependencies, one has the choice of either instantiating these dependencies or using mock objects to simulate their expected behavior. Even though recent quantitative studies showed that mock objects are widely used in both open-source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. An empirical understanding of the situations in which developers have (and have not) applied mocks, as well as the impact of such decisions in terms of coupling and software evolution, can help practitioners adapt and improve their future usage. To this aim, we study the usage of mock objects in three OSS projects and one industrial system. More specifically, we manually analyze more than 2,000 mock usages. We then discuss our findings with developers from these systems and identify practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. Finally, we manually analyze how the usage of mock objects in test code evolves over time, as well as the impact of their usage on the coupling between test and production code. Our study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report that they frequently mock dependencies that make testing difficult (e.g., infrastructure-related dependencies) and do not mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that keeping the behavior of a mock compatible with that of the original class is hard, and that mocking increases the coupling between test and production code. These perceptions are confirmed by our data: mocks mostly exist from the very first version of the test class, tend to stay there for its whole lifetime, and changes in production code often force the test code to change as well.
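
The trade-off the study describes (mock infrastructure, keep domain logic real) can be sketched with Python's standard-library `unittest.mock`; the paper itself studies Java systems, and the classes and method names below are hypothetical illustrations, not taken from the study:

```python
from unittest.mock import Mock

# Hypothetical infrastructure dependency: external and slow, so a good
# mocking candidate according to the practices reported in the study.
class InvoiceRepository:
    def save(self, invoice):  # would hit a database in production
        raise RuntimeError("no database available in tests")

# Hypothetical domain class encapsulating business rules: participants
# prefer NOT to mock these, so the test exercises the real logic.
class InvoiceCalculator:
    def total(self, amounts, tax_rate):
        return round(sum(amounts) * (1 + tax_rate), 2)

def process_invoice(amounts, tax_rate, repository):
    total = InvoiceCalculator().total(amounts, tax_rate)  # real domain logic
    repository.save({"total": total})                     # mocked infrastructure
    return total

# Test: mock only the infrastructure-related dependency.
repo = Mock(spec=InvoiceRepository)
total = process_invoice([10.0, 20.0], 0.1, repo)
assert total == 33.0
repo.save.assert_called_once_with({"total": 33.0})
```

The `spec=InvoiceRepository` argument keeps the mock's interface aligned with the real class, which mitigates the compatibility-maintenance challenge the developers report, though only at the level of method names and signatures, not behavior.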

  • Test-Driven Code Review: An Empirical Study
    2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019
    Co-Authors: Davide Spadini, Fabio Palomba, Tobias Baum, Stefan Hanenberg, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test-Driven Code Review (TDR) is a code review practice in which a reviewer inspects a patch by examining the changed test code before the changed production code. Although this practice has been mentioned positively by practitioners in informal literature and interviews, there is no systematic knowledge of its effects, prevalence, problems, and advantages. In this paper, we aim at empirically understanding whether this practice has an effect on code review effectiveness and how developers perceive TDR. We conduct (i) a controlled experiment with 93 developers who perform more than 150 reviews, and (ii) 9 semi-structured interviews and a survey with 103 respondents to gather information on how TDR is perceived. Key results from the experiment show that developers adopting TDR find the same proportion of defects in production code, but more in test code, at the expense of finding fewer maintainability issues in production code. Furthermore, we found that most developers prefer to review production code, as they deem it more critical and believe tests should follow from it. Moreover, generally poor test code quality and a lack of tool support hinder the adoption of TDR. Public preprint: https://doi.org/10.5281/zenodo.2551217, data and materials: https://doi.org/10.5281/zenodo.2553139.

  • ICSME - On the Relation of Test Smells to Software Code Quality
    2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2018
    Co-Authors: Davide Spadini, Andy Zaidman, Fabio Palomba, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality. In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) "Indirect Testing", "Eager Test", and "Assertion Roulette" are the most significant smells for change-proneness, and (iii) production code is more defect-prone when tested by smelly tests.
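
Two of the smells the study singles out can be sketched in Python's standard `unittest` framework (an illustrative toy example, not code from the paper; `parse_price` is a hypothetical function under test):

```python
import unittest

def parse_price(text):
    """Hypothetical function under test: 'EUR 9.99' -> ('EUR', 9.99)."""
    currency, amount = text.split(" ")
    return currency, float(amount)

class SmellyTest(unittest.TestCase):
    # "Eager Test": one test method exercises several distinct behaviours.
    # "Assertion Roulette": multiple assertions without messages, so a
    # failure does not say which check broke.
    def test_parse(self):
        self.assertEqual(parse_price("EUR 9.99")[0], "EUR")
        self.assertEqual(parse_price("EUR 9.99")[1], 9.99)
        self.assertEqual(parse_price("USD 0")[1], 0.0)

class RefactoredTest(unittest.TestCase):
    # One behaviour per test, each assertion carrying an explanation message.
    def test_currency_is_first_token(self):
        self.assertEqual(parse_price("EUR 9.99")[0], "EUR", "currency code")

    def test_amount_is_parsed_as_float(self):
        self.assertEqual(parse_price("EUR 9.99")[1], 9.99, "numeric amount")

# Run both suites programmatically.
loader = unittest.defaultTestLoader
for case in (SmellyTest, RefactoredTest):
    result = unittest.TextTestRunner(verbosity=0).run(
        loader.loadTestsFromTestCase(case))
    assert result.wasSuccessful()
```

Both versions pass; the difference the study is concerned with is maintainability and diagnosability, not current correctness.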

Davide Spadini - One of the best experts on this subject based on the ideXlab platform.

  • Mock objects for testing java systems
    Empirical Software Engineering, 2019
    Co-Authors: Davide Spadini, Magiel Bruntink, Mauricio Aniche, Alberto Bacchelli
    Abstract:

    When testing software artifacts that have several dependencies, one has the choice of either instantiating these dependencies or using mock objects to simulate their expected behavior. Even though recent quantitative studies showed that mock objects are widely used in both open-source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. An empirical understanding of the situations in which developers have (and have not) applied mocks, as well as the impact of such decisions in terms of coupling and software evolution, can help practitioners adapt and improve their future usage. To this aim, we study the usage of mock objects in three OSS projects and one industrial system. More specifically, we manually analyze more than 2,000 mock usages. We then discuss our findings with developers from these systems and identify practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. Finally, we manually analyze how the usage of mock objects in test code evolves over time, as well as the impact of their usage on the coupling between test and production code. Our study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report that they frequently mock dependencies that make testing difficult (e.g., infrastructure-related dependencies) and do not mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that keeping the behavior of a mock compatible with that of the original class is hard, and that mocking increases the coupling between test and production code. These perceptions are confirmed by our data: mocks mostly exist from the very first version of the test class, tend to stay there for its whole lifetime, and changes in production code often force the test code to change as well.

  • Test-Driven Code Review: An Empirical Study
    2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019
    Co-Authors: Davide Spadini, Fabio Palomba, Tobias Baum, Stefan Hanenberg, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test-Driven Code Review (TDR) is a code review practice in which a reviewer inspects a patch by examining the changed test code before the changed production code. Although this practice has been mentioned positively by practitioners in informal literature and interviews, there is no systematic knowledge of its effects, prevalence, problems, and advantages. In this paper, we aim at empirically understanding whether this practice has an effect on code review effectiveness and how developers perceive TDR. We conduct (i) a controlled experiment with 93 developers who perform more than 150 reviews, and (ii) 9 semi-structured interviews and a survey with 103 respondents to gather information on how TDR is perceived. Key results from the experiment show that developers adopting TDR find the same proportion of defects in production code, but more in test code, at the expense of finding fewer maintainability issues in production code. Furthermore, we found that most developers prefer to review production code, as they deem it more critical and believe tests should follow from it. Moreover, generally poor test code quality and a lack of tool support hinder the adoption of TDR. Public preprint: https://doi.org/10.5281/zenodo.2551217, data and materials: https://doi.org/10.5281/zenodo.2553139.

  • ESEC/SIGSOFT FSE - Practices and tools for better software testing
    Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2018
    Co-Authors: Davide Spadini
    Abstract:

    Automated testing (hereafter referred to simply as `testing') has become an essential process for improving the quality of software systems. In fact, testing can help to point out defects and to ensure that production code is robust under many usage conditions. However, writing and maintaining high-quality test code is challenging and frequently considered of secondary importance. Managers, as well as developers, do not treat test code as being as important as production code, and this behaviour can lead to poor test code quality and, in the future, to defect-prone production code. The goal of my research is to raise developers' awareness of the effects of poor testing and to help them write better test code. To this aim, I am working from two different perspectives: (1) studying best practices in software testing, identifying problems and challenges of current approaches, and (2) building new tools that better support the writing of test code and tackle the issues discovered in previous studies. Pre-print: https://doi.org/10.5281/zenodo.1411241

  • ICSME - On the Relation of Test Smells to Software Code Quality
    2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2018
    Co-Authors: Davide Spadini, Andy Zaidman, Fabio Palomba, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality. In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) "Indirect Testing", "Eager Test", and "Assertion Roulette" are the most significant smells for change-proneness, and (iii) production code is more defect-prone when tested by smelly tests.

Andy Zaidman - One of the best experts on this subject based on the ideXlab platform.

  • Software Evolution - On the Interplay Between Software Testing and Evolution and its Effect on Program Comprehension
    Software Evolution, 2020
    Co-Authors: Leon Moonen, Andy Zaidman, Arie Van Deursen, Magiel Bruntink
    Abstract:

    We know software evolution to be inevitable if a system is to survive in the long term. Equally well understood is the necessity of having a good test suite available in order to (1) ensure the quality of the current state of the software system and (2) ease future change. In that light, this chapter explores the interplay between software testing and software evolution: while tests ease software evolution by offering a safety net against unwanted change, they can equally be experienced as a burden, because they are subject to the very same forces of software evolution themselves. In particular, we describe how typical refactorings of production code can invalidate tests and how test code can be (structurally) improved by applying specialized test refactorings. Building upon these concepts, we introduce “test-driven refactoring”: refactorings of production code that are induced by the (re)structuring of the tests. We also report on typical source code design metrics that can serve as indicators of testability. To conclude, we present a research agenda containing pointers to as-yet-unexplored research topics in the domain of testing.
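
The interplay described above can be made concrete with a minimal sketch (all names hypothetical): a production-side refactoring that changes a constructor signature breaks every test that constructs the class directly, while a test refactoring that extracts a creation helper confines future breakage to one place.

```python
# Production class after a refactoring that added a required parameter.
class Account:
    def __init__(self, owner, currency):  # 'currency' added during refactoring
        self.owner = owner
        self.currency = currency
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

# Before the test refactoring, every test called Account(...) directly, so the
# production refactoring invalidated each of them. Extracting a creation
# helper (a classic test refactoring) localises the fix to one function.
def make_account(owner="alice", currency="EUR"):
    return Account(owner, currency)

def test_deposit_increases_balance():
    account = make_account()  # only make_account() needs updating next time
    account.deposit(50)
    assert account.balance == 50

test_deposit_increases_balance()
```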

  • ICSME - On the Relation of Test Smells to Software Code Quality
    2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2018
    Co-Authors: Davide Spadini, Andy Zaidman, Fabio Palomba, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality. In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) "Indirect Testing", "Eager Test", and "Assertion Roulette" are the most significant smells for change-proneness, and (iii) production code is more defect-prone when tested by smelly tests.

  • SCAM - Studying Fine-Grained Co-evolution Patterns of Production and Test Code
    2014 IEEE 14th International Working Conference on Source Code Analysis and Manipulation, 2014
    Co-Authors: Cosmin Marsavina, Daniele Romano, Andy Zaidman
    Abstract:

    Numerous software development practices suggest updating the test code whenever the production code is changed. However, previous studies have shown that co-evolving test and production code is generally a difficult task that needs to be thoroughly investigated. In this paper we perform a study that, following a mixed-methods approach, investigates fine-grained co-evolution patterns of production and test code. First, we mine fine-grained changes from the evolution of five open-source systems. Then, we use an association rule mining algorithm to generate the co-evolution patterns. Finally, we interpret the obtained patterns by performing a qualitative analysis. The results reveal six co-evolution patterns and provide insights into their appearance along the history of the analyzed software systems. Besides providing a better understanding of how test code evolves, these findings also help identify gaps in the test code, thereby assisting both researchers and developers.
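
The core of the mining step can be sketched as follows: treating each commit as the set of files it changed, compute support and confidence for a co-change rule such as "production file changed ⇒ its test file changed". This is a toy version with invented file names; the paper mines much finer-grained changes than whole files.

```python
# Each commit is the set of files it changed (toy history, hypothetical names).
commits = [
    {"Order.java", "OrderTest.java"},
    {"Order.java", "OrderTest.java"},
    {"Order.java"},                    # production changed without its test
    {"OrderTest.java"},
    {"README.md"},
]

def rule_metrics(commits, antecedent, consequent):
    """Support and confidence of the association rule antecedent -> consequent."""
    with_a = [c for c in commits if antecedent in c]
    with_both = [c for c in with_a if consequent in c]
    support = len(with_both) / len(commits)
    confidence = len(with_both) / len(with_a) if with_a else 0.0
    return support, confidence

support, confidence = rule_metrics(commits, "Order.java", "OrderTest.java")
print(f"support={support:.2f} confidence={confidence:.2f}")
# prints: support=0.40 confidence=0.67
```

High-confidence rules indicate files that habitually co-evolve; a production file whose rule has low confidence is a candidate gap in the test code.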

Magiel Bruntink - One of the best experts on this subject based on the ideXlab platform.

  • Software Evolution - On the Interplay Between Software Testing and Evolution and its Effect on Program Comprehension
    Software Evolution, 2020
    Co-Authors: Leon Moonen, Andy Zaidman, Arie Van Deursen, Magiel Bruntink
    Abstract:

    We know software evolution to be inevitable if a system is to survive in the long term. Equally well understood is the necessity of having a good test suite available in order to (1) ensure the quality of the current state of the software system and (2) ease future change. In that light, this chapter explores the interplay between software testing and software evolution: while tests ease software evolution by offering a safety net against unwanted change, they can equally be experienced as a burden, because they are subject to the very same forces of software evolution themselves. In particular, we describe how typical refactorings of production code can invalidate tests and how test code can be (structurally) improved by applying specialized test refactorings. Building upon these concepts, we introduce “test-driven refactoring”: refactorings of production code that are induced by the (re)structuring of the tests. We also report on typical source code design metrics that can serve as indicators of testability. To conclude, we present a research agenda containing pointers to as-yet-unexplored research topics in the domain of testing.

  • Mock objects for testing java systems
    Empirical Software Engineering, 2019
    Co-Authors: Davide Spadini, Magiel Bruntink, Mauricio Aniche, Alberto Bacchelli
    Abstract:

    When testing software artifacts that have several dependencies, one has the choice of either instantiating these dependencies or using mock objects to simulate their expected behavior. Even though recent quantitative studies showed that mock objects are widely used in both open-source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. An empirical understanding of the situations in which developers have (and have not) applied mocks, as well as the impact of such decisions in terms of coupling and software evolution, can help practitioners adapt and improve their future usage. To this aim, we study the usage of mock objects in three OSS projects and one industrial system. More specifically, we manually analyze more than 2,000 mock usages. We then discuss our findings with developers from these systems and identify practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. Finally, we manually analyze how the usage of mock objects in test code evolves over time, as well as the impact of their usage on the coupling between test and production code. Our study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report that they frequently mock dependencies that make testing difficult (e.g., infrastructure-related dependencies) and do not mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that keeping the behavior of a mock compatible with that of the original class is hard, and that mocking increases the coupling between test and production code. These perceptions are confirmed by our data: mocks mostly exist from the very first version of the test class, tend to stay there for its whole lifetime, and changes in production code often force the test code to change as well.

  • Test-Driven Code Review: An Empirical Study
    2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019
    Co-Authors: Davide Spadini, Fabio Palomba, Tobias Baum, Stefan Hanenberg, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test-Driven Code Review (TDR) is a code review practice in which a reviewer inspects a patch by examining the changed test code before the changed production code. Although this practice has been mentioned positively by practitioners in informal literature and interviews, there is no systematic knowledge of its effects, prevalence, problems, and advantages. In this paper, we aim at empirically understanding whether this practice has an effect on code review effectiveness and how developers perceive TDR. We conduct (i) a controlled experiment with 93 developers who perform more than 150 reviews, and (ii) 9 semi-structured interviews and a survey with 103 respondents to gather information on how TDR is perceived. Key results from the experiment show that developers adopting TDR find the same proportion of defects in production code, but more in test code, at the expense of finding fewer maintainability issues in production code. Furthermore, we found that most developers prefer to review production code, as they deem it more critical and believe tests should follow from it. Moreover, generally poor test code quality and a lack of tool support hinder the adoption of TDR. Public preprint: https://doi.org/10.5281/zenodo.2551217, data and materials: https://doi.org/10.5281/zenodo.2553139.

  • ICSME - On the Relation of Test Smells to Software Code Quality
    2018 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2018
    Co-Authors: Davide Spadini, Andy Zaidman, Fabio Palomba, Magiel Bruntink, Alberto Bacchelli
    Abstract:

    Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality. In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) "Indirect Testing", "Eager Test", and "Assertion Roulette" are the most significant smells for change-proneness, and (iii) production code is more defect-prone when tested by smelly tests.

David Binkley - One of the best experts on this subject based on the ideXlab platform.

  • An Exploratory Study of the Relationship Between Software Test Smells and Fault-Proneness
    IEEE Access, 2019
    Co-Authors: Abdallah Qusef, Mahmoud O. Elish, David Binkley
    Abstract:

    Test smells have been defined as indicators of poorly designed tests. Their presence negatively affects the maintainability of a test suite as well as of the production code. Despite the many studies that address the negative impact of various test smells, until now there has been no empirical evidence on the relation between the evolution of test smells and that of faults in the production code. This paper presents such evidence through a case study of data collected from 28 versions of Apache Ant, comprising a total of 4,447 unit tests. Three key results arise from the data. First, the absolute number of test smells increases as Apache Ant evolves. Second, some test smells are positively correlated with the existence of faults in the production code. Finally, our results show that it is possible to predict faults in the production code based on the existence of test smells in the code's unit tests. In addition, the resulting prediction model is more accurate at predicting high-severity faults than low-severity faults. This is an important result, as it enables engineers to focus preventive maintenance efforts on the production code using the test smells found in its unit tests.
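
The correlation analysis at the heart of such a study can be sketched with toy per-release data (numbers invented purely for illustration, not from the Apache Ant case study): count smelly tests and production faults per version, then compute a Pearson correlation.

```python
# Invented per-release counts: smelly tests and production faults per version.
smells = [3, 5, 6, 9, 11, 14]
faults = [1, 2, 2, 4, 5, 6]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(smells, faults)
assert r > 0.9  # strongly positive in this toy data
```

A positive correlation like this is the kind of signal the paper builds its fault-prediction model on, although correlation alone does not establish that the smells cause the faults.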