Automated Refactorings

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 729 experts worldwide, ranked by the ideXlab platform.

Ralph E Johnson - One of the best experts on this subject based on the ideXlab platform.

  • Differential precondition checking: a language-independent, reusable analysis for refactoring engines
    Automated Software Engineering, 2016
    Co-Authors: Jeffrey L. Overbey, Ralph E Johnson, Munawar Hafiz
    Abstract:

    One of the most difficult parts of building automated refactorings is ensuring that they preserve behavior. This paper proposes a new technique to check for behavior preservation, which we call differential precondition checking. It is simple yet expressive enough to implement the most common refactorings, and the core algorithm runs in linear time. The main advantage, however, is that a differential precondition checker can be placed in a library and reused in refactoring tools for many different languages; the core algorithm can be implemented in a way that is completely language independent. We have implemented a differential precondition checker and used it in refactoring tools for Fortran (Photran), PHP, and BC.
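    The paper's checker works over each tool's own program representation; as a loose, hypothetical sketch of the differential idea (none of these function names come from the paper), one can derive a small semantic summary before and after an edit and accept the edit only if the difference is exactly what the refactoring intends:

    ```python
    # Hypothetical sketch of differential precondition checking, not the
    # paper's implementation: derive a semantic summary before and after
    # an edit, then check that the *difference* is exactly the one the
    # refactoring intends, instead of hand-writing preconditions per tool.
    import ast

    def name_facts(source):
        """Collect (kind, identifier) facts from a Python module."""
        facts = set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                facts.add(("def", node.name))
            elif isinstance(node, ast.Name):
                facts.add((type(node.ctx).__name__, node.id))
        return facts

    def check_rename(before_src, after_src, old, new):
        """Differential check for Rename: the only permitted difference
        is that every fact mentioning `old` now mentions `new`."""
        expected = {(k, new if name == old else name)
                    for k, name in name_facts(before_src)}
        return name_facts(after_src) == expected

    before = "def area(r):\n    return 3.14 * r * r\nx = area(2)\n"
    good = before.replace("area", "circle_area")         # full rename
    bad = before.replace("def area", "def circle_area")  # call site missed
    assert check_rename(before, good, "area", "circle_area")
    assert not check_rename(before, bad, "area", "circle_area")
    ```

    In the paper the summaries come from each front end's own representation, which is what makes the core check language independent; the Python `ast` module above merely stands in for such a front end.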

  • A comparative study of manual and automated refactorings
    European Conference on Object-Oriented Programming, 2013
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson, Danny Dig
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered in time; and, on average, 30% of the performed refactorings do not reach the version control system.

  • ECOOP - A compositional paradigm of automating refactorings
    ECOOP 2013 – Object-Oriented Programming, 2013
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Roshanak Zilouchian Moghaddam, Ralph E Johnson
    Abstract:

    Recent studies suggest that programmers greatly underuse refactoring tools, especially for complex refactorings. Complex refactorings tend to be tedious and error-prone to perform by hand. To promote the use of refactoring tools for complex changes, we propose a new paradigm for automating refactorings, called compositional refactoring. The key idea is to perform small, predictable changes using a tool and to compose them manually into complex changes. This paradigm trades off some automation for higher predictability and control. We show that the paradigm is natural: our analysis of programmers' use of the Eclipse refactoring tool in the wild shows that they frequently batch and compose automated refactorings. We then show, through a survey of 100 respondents, that programmers are receptive to this new paradigm. Finally, we show that the compositional paradigm is effective through a controlled study of 13 professional programmers that compares it to the existing wizard-based one.
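    As a rough illustration of the compositional idea (an invented example, not tooling from the paper): each automated step is a small source-to-source function with a predictable effect, and the programmer, rather than a wizard, decides how to chain the steps into a larger change:

    ```python
    # Invented illustration of compositional refactoring: each step is a
    # small, predictable source -> source transformation, and a complex
    # change is the programmer-chosen composition of several such steps.
    import ast

    class _Rename(ast.NodeTransformer):
        def __init__(self, old, new):
            self.old, self.new = old, new
        def visit_Name(self, node):
            if node.id == self.old:
                node.id = self.new
            return node
        def visit_arg(self, node):
            if node.arg == self.old:
                node.arg = self.new
            return node
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            if node.name == self.old:
                node.name = self.new
            return node

    def rename(old, new):
        """One small refactoring step (ast.unparse needs Python 3.9+)."""
        return lambda src: ast.unparse(_Rename(old, new).visit(ast.parse(src)))

    def compose(*steps):
        """Manual composition: apply small steps in the user-chosen order."""
        def pipeline(src):
            for step in steps:
                src = step(src)
            return src
        return pipeline

    refactor = compose(rename("f", "scale"), rename("x", "factor"))
    result = refactor("def f(x):\n    return x * 2\ny = f(3)")
    assert "def scale(factor):" in result and "y = scale(3)" in result
    ```

    The point of the composition is the trade-off the abstract describes: each `rename` step is individually predictable, and the overall change stays under the programmer's control.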

  • Using continuous code change analysis to understand the practice of refactoring
    2012
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered; and, for some refactoring kinds, up to 64% of performed refactorings do not reach the version control system.
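    The inference algorithm itself is not reproduced in this abstract; a toy sketch of why continuous changes help (event format and thresholds invented here) is that a fine-grained edit log can reveal a rename that snapshot diffing between commits would never see:

    ```python
    # Toy sketch, not the paper's algorithm: infer a Rename refactoring
    # from a continuous, fine-grained edit log. Snapshot comparison sees
    # only the final state; an edit log shows the same old -> new
    # replacement happening at several sites close together in time.
    # The event format is invented for this illustration.
    from collections import defaultdict

    def infer_renames(events, window=30.0, min_sites=2):
        """events: (timestamp_seconds, kind, identifier_text) tuples with
        kind in {'delete', 'insert'}. Pair each deletion with the next
        insertion inside `window`, and report replacements observed at
        `min_sites` or more sites as likely renames."""
        counts = defaultdict(int)
        pending = None  # most recent unmatched deletion
        for t, kind, text in sorted(events):
            if kind == "delete":
                pending = (t, text)
            elif kind == "insert" and pending and t - pending[0] <= window:
                counts[(pending[1], text)] += 1
                pending = None
        return [pair for pair, n in counts.items() if n >= min_sites]

    log = [
        (0.0, "delete", "tmp"), (0.5, "insert", "total"),
        (2.0, "delete", "tmp"), (2.2, "insert", "total"),
        (9.0, "delete", "i"),   (9.1, "insert", "index"),  # one site only
    ]
    assert infer_renames(log) == [("tmp", "total")]
    ```

    A snapshot-based approach comparing only the first and last state of the file could not distinguish this clustered rename from unrelated edits, which is the gap the continuous-analysis approach targets.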

Stas Negara - One of the best experts on this subject based on the ideXlab platform.

  • A comparative study of manual and automated refactorings
    European Conference on Object-Oriented Programming, 2013
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson, Danny Dig
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered in time; and, on average, 30% of the performed refactorings do not reach the version control system.

  • ECOOP - A compositional paradigm of automating refactorings
    ECOOP 2013 – Object-Oriented Programming, 2013
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Roshanak Zilouchian Moghaddam, Ralph E Johnson
    Abstract:

    Recent studies suggest that programmers greatly underuse refactoring tools, especially for complex refactorings. Complex refactorings tend to be tedious and error-prone to perform by hand. To promote the use of refactoring tools for complex changes, we propose a new paradigm for automating refactorings, called compositional refactoring. The key idea is to perform small, predictable changes using a tool and to compose them manually into complex changes. This paradigm trades off some automation for higher predictability and control. We show that the paradigm is natural: our analysis of programmers' use of the Eclipse refactoring tool in the wild shows that they frequently batch and compose automated refactorings. We then show, through a survey of 100 respondents, that programmers are receptive to this new paradigm. Finally, we show that the compositional paradigm is effective through a controlled study of 13 professional programmers that compares it to the existing wizard-based one.

  • Using continuous code change analysis to understand the practice of refactoring
    2012
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered; and, for some refactoring kinds, up to 64% of performed refactorings do not reach the version control system.

  • Use, disuse, and misuse of automated refactorings
    International Conference on Software Engineering, 2012
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Balaji Ambresh Rajkumar, Brian P Bailey, Ralph E Johnson
    Abstract:

    Though refactoring tools have been available for more than a decade, research has shown that programmers underutilize them; little is known, however, about why programmers do not take advantage of these tools. We conducted a field study of programmers working on their own code in their natural settings, collecting interaction data from about 1,268 hours of programming with our minimally intrusive data collectors. Our quantitative data show that programmers prefer lightweight methods of invoking refactorings, usually perform small changes using the refactoring tool, proceed with an automated refactoring even when it may change the behavior of the program, and rarely preview automated refactorings. We also interviewed nine of our participants to gain deeper insight into the patterns we observed in the behavioral data. We found that programmers use predictable automated refactorings even if those refactorings have rare bugs or change the behavior of the program. This paper reports some of the factors that affect the use of automated refactorings, such as invocation method, awareness, naming, trust, and predictability, as well as the major mismatches between programmers' expectations and automated refactorings. The results of this work contribute to producing more effective refactoring tools for complex software.

Mohsen Vakilian - One of the best experts on this subject based on the ideXlab platform.

  • A comparative study of manual and automated refactorings
    European Conference on Object-Oriented Programming, 2013
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson, Danny Dig
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered in time; and, on average, 30% of the performed refactorings do not reach the version control system.

  • ECOOP - A compositional paradigm of automating refactorings
    ECOOP 2013 – Object-Oriented Programming, 2013
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Roshanak Zilouchian Moghaddam, Ralph E Johnson
    Abstract:

    Recent studies suggest that programmers greatly underuse refactoring tools, especially for complex refactorings. Complex refactorings tend to be tedious and error-prone to perform by hand. To promote the use of refactoring tools for complex changes, we propose a new paradigm for automating refactorings, called compositional refactoring. The key idea is to perform small, predictable changes using a tool and to compose them manually into complex changes. This paradigm trades off some automation for higher predictability and control. We show that the paradigm is natural: our analysis of programmers' use of the Eclipse refactoring tool in the wild shows that they frequently batch and compose automated refactorings. We then show, through a survey of 100 respondents, that programmers are receptive to this new paradigm. Finally, we show that the compositional paradigm is effective through a controlled study of 13 professional programmers that compares it to the existing wizard-based one.

  • Using continuous code change analysis to understand the practice of refactoring
    2012
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered; and, for some refactoring kinds, up to 64% of performed refactorings do not reach the version control system.

  • Use, disuse, and misuse of automated refactorings
    International Conference on Software Engineering, 2012
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Balaji Ambresh Rajkumar, Brian P Bailey, Ralph E Johnson
    Abstract:

    Though refactoring tools have been available for more than a decade, research has shown that programmers underutilize them; little is known, however, about why programmers do not take advantage of these tools. We conducted a field study of programmers working on their own code in their natural settings, collecting interaction data from about 1,268 hours of programming with our minimally intrusive data collectors. Our quantitative data show that programmers prefer lightweight methods of invoking refactorings, usually perform small changes using the refactoring tool, proceed with an automated refactoring even when it may change the behavior of the program, and rarely preview automated refactorings. We also interviewed nine of our participants to gain deeper insight into the patterns we observed in the behavioral data. We found that programmers use predictable automated refactorings even if those refactorings have rare bugs or change the behavior of the program. This paper reports some of the factors that affect the use of automated refactorings, such as invocation method, awareness, naming, trust, and predictability, as well as the major mismatches between programmers' expectations and automated refactorings. The results of this work contribute to producing more effective refactoring tools for complex software.

Nicholas Chen - One of the best experts on this subject based on the ideXlab platform.

  • A comparative study of manual and automated refactorings
    European Conference on Object-Oriented Programming, 2013
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson, Danny Dig
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered in time; and, on average, 30% of the performed refactorings do not reach the version control system.

  • ECOOP - A compositional paradigm of automating refactorings
    ECOOP 2013 – Object-Oriented Programming, 2013
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Roshanak Zilouchian Moghaddam, Ralph E Johnson
    Abstract:

    Recent studies suggest that programmers greatly underuse refactoring tools, especially for complex refactorings. Complex refactorings tend to be tedious and error-prone to perform by hand. To promote the use of refactoring tools for complex changes, we propose a new paradigm for automating refactorings, called compositional refactoring. The key idea is to perform small, predictable changes using a tool and to compose them manually into complex changes. This paradigm trades off some automation for higher predictability and control. We show that the paradigm is natural: our analysis of programmers' use of the Eclipse refactoring tool in the wild shows that they frequently batch and compose automated refactorings. We then show, through a survey of 100 respondents, that programmers are receptive to this new paradigm. Finally, we show that the compositional paradigm is effective through a controlled study of 13 professional programmers that compares it to the existing wizard-based one.

  • Using continuous code change analysis to understand the practice of refactoring
    2012
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered; and, for some refactoring kinds, up to 64% of performed refactorings do not reach the version control system.

  • Use, disuse, and misuse of automated refactorings
    International Conference on Software Engineering, 2012
    Co-Authors: Mohsen Vakilian, Nicholas Chen, Stas Negara, Balaji Ambresh Rajkumar, Brian P Bailey, Ralph E Johnson
    Abstract:

    Though refactoring tools have been available for more than a decade, research has shown that programmers underutilize them; little is known, however, about why programmers do not take advantage of these tools. We conducted a field study of programmers working on their own code in their natural settings, collecting interaction data from about 1,268 hours of programming with our minimally intrusive data collectors. Our quantitative data show that programmers prefer lightweight methods of invoking refactorings, usually perform small changes using the refactoring tool, proceed with an automated refactoring even when it may change the behavior of the program, and rarely preview automated refactorings. We also interviewed nine of our participants to gain deeper insight into the patterns we observed in the behavioral data. We found that programmers use predictable automated refactorings even if those refactorings have rare bugs or change the behavior of the program. This paper reports some of the factors that affect the use of automated refactorings, such as invocation method, awareness, naming, trust, and predictability, as well as the major mismatches between programmers' expectations and automated refactorings. The results of this work contribute to producing more effective refactoring tools for complex software.

Danny Dig - One of the best experts on this subject based on the ideXlab platform.

  • A comparative study of manual and automated refactorings
    European Conference on Object-Oriented Programming, 2013
    Co-Authors: Stas Negara, Mohsen Vakilian, Nicholas Chen, Ralph E Johnson, Danny Dig
    Abstract:

    Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the practice of refactoring is important for developers, refactoring tool builders, and researchers. Many previous approaches to studying refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or that compare manual and automated refactoring. We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactorings. For example, more than half of the refactorings were performed manually; the popularity of automated and manual refactorings differs; more than one third of the refactorings performed by developers are clustered in time; and, on average, 30% of the performed refactorings do not reach the version control system.
