Sunday, November 27, 2016

Performance and the Pipeline

- How can performance analysis keep up with ever faster and more frequent release cycles in the DevOps world?

by Felix Willnecker (@Floix), fortiss GmbH, Germany, Johannes Kroß, fortiss GmbH, Germany, and André van Hoorn (@andrevanhoorn), University of Stuttgart, Germany
Associate Editor: Zhen Ming (Jack) Jiang, York University, Canada

Back in the “good old days”, a release occurred every month, quarter, or year, leaving enough time for a thorough quality analysis and extensive performance/load tests. However, these times are coming to an end or are already over. Deploying every day, every minute, or every couple of seconds is becoming the new normal [1]. Agile development, test automation, consistent automation of the delivery pipeline, and the DevOps movement drive this trend, which is conquering the IT world [2]. In this world, performance analysis is left behind: tasks like load tests take too long and place heavy requirements on the test and delivery environment. Therefore, performance analysis tasks are nowadays often skipped, and performance bugs are only detected and fixed in production. However, this is not a willful decision but an act of necessity [3]. The rest of this blog post is organized as follows: first, we outline three strategies for including performance analysis in your automated delivery pipeline without slowing down your release cycles; then we introduce an accompanying survey to find out how performance concerns are currently addressed in industrial DevOps practice; finally, we conclude the post. 

Strategy # 1: Rolling back and forward

The usual response we get when talking about performance analysis in a continuous delivery pipeline is: “Well, we just roll back if something goes wrong”. This is a great plan: in theory. In practice, it often fails in emergency situations. First of all, this strategy requires not only a continuous delivery pipeline but also an automatic rollback mechanism. Rolling back is pretty easy on the level of an application server (just install release n-1), but it gets harder with databases (e.g., legacy table views for every change), and is almost impossible if multiple applications and service dependencies are involved. Instead of rolling back, rolling forward is applied: we deploy as many fixes as necessary until the issue is resolved. Such emergency fixes are often developed in a hurry or in war room sessions. When your company introduced its continuous delivery pipeline, it was often promised that these war room sessions would come to an end, simply by releasing smaller incremental artifacts. The truth is, in case of emergency Murphy’s Law applies: your rollback mechanism fails and you spend the rest of the day (or night) resolving the issue. 

Strategy # 2: Functional tests applied on performance

Another common strategy is to use functional tests and derive metrics that act as indicators for performance bugs. Measuring the number of exceptions or SQL statements during a functional test and comparing these numbers with a former release or a baseline is common practice. Some tool support exists to automate such analyses, such as PerfSig, which utilizes Dynatrace AM on the Jenkins build server [4]. This approach is proactive, so issues can be detected before release, and it requires no additional tests, just some tooling and analysis software in your delivery pipeline. However, the conclusions it allows about the performance of your application are vague. Resource utilization or response time measurements conducted during short functional tests usually deliver no meaningful values, especially if the delivery pipeline runs in a virtualized environment. Exceptions and SQL statements act as indicators and may reduce the number of performance issues in production, but they won’t identify a poorly implemented algorithm.
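As an illustration, a pipeline stage implementing this strategy might compare the indicator metrics of the current test run against a stored baseline and fail the build when a metric grows beyond a tolerance. The following Python sketch is hypothetical; the metric names, tolerance, and data layout are our own assumptions, not part of PerfSig or Dynatrace:

```python
# Hypothetical sketch of Strategy #2: compare indicator metrics collected
# during a functional test run against a baseline from a former release.

BASELINE = {"exceptions": 12, "sql_statements": 340}  # from release n-1
TOLERANCE = 0.20  # allow 20% growth before failing the build

def check_against_baseline(current: dict, baseline: dict, tolerance: float) -> list:
    """Return a list of violations where a metric grew beyond the tolerance."""
    violations = []
    for metric, base_value in baseline.items():
        limit = base_value * (1 + tolerance)
        if current.get(metric, 0) > limit:
            violations.append(f"{metric}: {current[metric]} > allowed {limit:.0f}")
    return violations

current_run = {"exceptions": 11, "sql_statements": 452}
problems = check_against_baseline(current_run, BASELINE, TOLERANCE)
if problems:
    print("Possible performance regression:", problems)  # fail the pipeline stage
```

A build server would run this after the functional test suite and mark the build unstable when the list of violations is non-empty.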

Strategy # 3: Model-based performance analysis 

Performance models have their origin in academia and are still only rarely adopted by practitioners. However, such models can help to identify performance bugs in your software without adding new tests. Nowadays, performance model generators exist that derive the performance characteristics of an application directly from a build system [5]. These approaches rely on measurements at the operation and component level and require good test coverage. A complete functional test run should execute each operation multiple times so that the generators can derive resource demands per operation. Changes in the resource demands indicate a performance change, either for the better (decreased resource demand) or for the worse (increased resource demand). The main advantage over simple functional test analysis is that a complete set of tests is analyzed and multiple runs of the same test set are supported. However, major changes in the test set may require a new baseline for the model-based analysis.
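The core idea can be sketched in a few lines: derive an average resource demand per operation from repeated test measurements, then compare the resulting model against the previous release's model. This is a simplified illustration with invented operation names and numbers, not the actual generator from [5]:

```python
# Illustrative sketch of model-based analysis: average resource demand per
# operation, compared between two releases. All values are made up.

from statistics import mean

def derive_resource_demands(samples: dict) -> dict:
    """samples maps operation name -> list of measured CPU demands (ms)."""
    return {op: mean(values) for op, values in samples.items()}

def compare_models(old: dict, new: dict, threshold: float = 0.15) -> dict:
    """Relative change per operation; only changes above the threshold are reported."""
    changes = {}
    for op in old.keys() & new.keys():
        delta = (new[op] - old[op]) / old[op]
        if abs(delta) > threshold:
            changes[op] = round(delta, 2)
    return changes

old_model = derive_resource_demands({"login": [5.0, 5.2, 4.8], "search": [20.0, 19.0, 21.0]})
new_model = derive_resource_demands({"login": [5.1, 5.0, 4.9], "search": [31.0, 29.0, 30.0]})
print(compare_models(old_model, new_model))  # "search" demand grew ~50%: likely regression
```

Because the demands are averaged over a whole test run, this tolerates single noisy measurements better than comparing individual response times.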


To identify and capture the current state of the art of performance practices, as well as current problems and issues, we have launched a survey that we would like to promote, and we encourage you and your organization to participate. We would like to find out how performance concerns are currently addressed in industrial DevOps practice and plan to integrate the impressions and results into a blueprint for performance-aware DevOps. Furthermore, we would like to know whether classical paradigms still dominate in your organization, at what stages performance evaluations are conducted, which metrics are relevant for you, and what actions are taken after a performance evaluation.

Our long-term aim is not only to conduct this survey once, but to benchmark the state of the art continuously, compare the results over a longer period, and regularly incorporate outcomes into our blueprint. The results of this survey will feed into our larger project of building a reference infrastructure for performance-aware DevOps and will help us understand how DevOps is practiced in industry today. 


Classical performance and load test phases may vanish and never come back. However, current strategies for reducing the risk of performance issues in production have a number of disadvantages: rollback mechanisms might fail, functional tests only deliver indicators, and model-based evaluations lack industrial tool support. Most of the time, performance does not receive enough attention, or even any at all. In our opinion, this is primarily because present performance management practices are not integrated into and adapted to typical DevOps processes, especially in terms of automation and holistic tool support.



  1. J. Seiden. Amazon Deploys to Production Every 11.6 Seconds.
  2. A. Brunnert, A. van Hoorn, F. Willnecker, A. Danciu, W. Hasselbring, C. Heger, N. Herbst, P. Jamshidi, R. Jung, J. von Kistowski, A. Koziolek. Performance-Oriented DevOps: A Research Agenda. arXiv preprint arXiv:1508.04752, 2015.
  3. T-Systems MMS. PerfSig-Jenkins.
  4. M. Dlugi, A. Brunnert, H. Krcmar. Model-based performance evaluations in continuous delivery pipelines. In Proceedings of the 1st International Workshop on Quality-Aware DevOps, 2015.

If you like this article, you might also enjoy reading:

    1. L. Zhu, L. Bass, G. Champlin-Scharff. DevOps and Its Practices. IEEE Software 33(3): 32-34. 2016.
    2. M. Callanan, A. Spillane. DevOps: Making It Easy to Do the Right Thing. IEEE Software 33(3): 53-59. 2016.
    3. D. Spinellis. Being a DevOps Developer. IEEE Software 33(3): 4-5. 2016.

    Sunday, November 20, 2016

    Creativity in Requirements Engineering: Why and How?

    By: Tanmay Bhowmik, Mississippi State University
    Associate Editor: Mehdi Mirakhorli (@MehdiMirakhorli)

    Prologue. One day, I was giving a talk on creativity in requirements engineering (RE). A fellow colleague from the audience asked, “Why do you need to create requirements in the first place? Why don’t you just ask the customers what they want?” Well, from the traditional RE perspective, if we assume that requirements reside in stakeholders’ minds in an implicit manner [1] and that the stakeholders know what they want, my colleague has a point. However, does this traditional view on the origin of requirements still hold in modern RE?

    Why creativity in RE? The software industry has become extremely competitive. These days, we usually choose from multiple software systems that strive to serve users in the same application domain. Therefore, in order to survive and grow in the market, a software system needs to distinguish itself from similar products and consistently enchant customers with novel and useful features. As a result, the traditional view on the origin of requirements no longer holds. Modern requirements engineers need to create innovative requirements in order to equip their software with a competitive advantage. It is no longer an exaggeration to say that requirements engineers need to be trained in the concept of creativity in RE.

    What is creativity in RE? Before looking into creativity from an RE perspective, we need to know what creativity is. Robert Sternberg has given a definition of creativity that is widely accepted by the scientific community: “creativity is the ability to produce work that is both novel (i.e., original and unexpected) and appropriate (i.e., useful and adaptive to the task constraints)” [2]. According to Neil Maiden and his colleagues [5], creativity in RE is the capture of requirements that are new to the project stakeholders but may not be historically new to humankind. For example, internet browsing on a smartphone has become a very basic requirement in the smartphone domain. Yet even though commercial Internet Service Providers emerged in the late 1980s, internet service on cell phones was not available until the late 1990s. In 1999, when NTT DoCoMo in Japan introduced the first full internet service on mobile phones [6], it was a creative requirement to the stakeholders.

    How to obtain creative requirements? Depending on the techniques and heuristics used, creative requirements can be obtained in three different ways [3, 7].
    • Exploratory Creativity: We can come up with creative requirements by exploring a set of possibilities in the search space guided by certain rules and constraints.
    • Combinational Creativity:  We can make unfamiliar connections between familiar possibilities or known requirements and obtain creative requirements.
    • Transformational Creativity: Creative requirements could also be obtained by changing/transforming the constraints on the search space and expanding the set of possibilities to be explored.

    Fig. 1. Different ways to obtain creative requirements (adapted from [7]).

    Figure 1 explains the three different ways of achieving creativity in RE with a simple example [8]. Assume a creativity scenario for a hypothetical software system that “should provide access control” for a classified laboratory facility. As an initial constraint, we have a specific limitation on the available hardware. Let XYZ in Figure 1 be a search space with three possibilities: “log-in ID and password”, “fingerprint”, and “facial recognition”. Provided that these possibilities satisfy the initial constraint, using any of them as a means of access control is an instance of exploratory creativity. If we combine two apparently different means of providing access control, such as log-in ID and password along with fingerprint, or log-in ID and password combined with facial recognition, we have an example of combinational creativity. Now, let us relax the initial hardware constraint and expand the search space in the biometric direction. Thereby we get a new search space XYZ’ that includes “retina scan” and “DNA scan” as further possibilities. Using any of these options is an example of transformational creativity. 
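For illustration only, the three ways of obtaining creative requirements can be mimicked mechanically on the access-control search space from Figure 1 (the options and the expanded possibilities are taken from the example above; the code itself is a toy):

```python
# Toy sketch of the three creativity types on the Figure 1 example.
from itertools import combinations

search_space = ["log-in ID and password", "fingerprint", "facial recognition"]

# Exploratory: pick any single possibility within the current constraints.
exploratory = list(search_space)

# Combinational: unfamiliar connections between familiar possibilities.
combinational = [" + ".join(pair) for pair in combinations(search_space, 2)]

# Transformational: relax the hardware constraint, expanding the space (XYZ').
expanded_space = search_space + ["retina scan", "DNA scan"]
transformational = [opt for opt in expanded_space if opt not in search_space]

print(len(exploratory), len(combinational), transformational)
```

The point of the sketch is that exploratory creativity only enumerates, combinational creativity pairs up what is already there, and transformational creativity changes the space itself.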

    A framework to provide automated support for creativity in RE. In a recent work, we have developed a novel framework that utilizes stakeholders’ social interaction about an existing system and provides an automated support for combinational creativity. Figure 2 presents an overview of our framework [8]. In RE, combinational creativity can be achieved by making unfamiliar connections between familiar possibilities where the familiarity and unfamiliarity aspects are determined from the stakeholders’ perspective.   

     Fig. 2. A framework for combinational creativity in RE [8].

    Our framework starts by creating a social network of stakeholders based on their interaction about the software system. The network is then clustered to find stakeholder groups, as discussion within a group generally revolves around ideas familiar to the group members. For each group, we apply topic modeling to interaction-related documents, such as requirements and comments contributed by the group members. Thereby, we find the familiar ideas discussed in a group as dominant topics, each represented by multiple topic words [8]. The familiar ideas from all stakeholder groups constitute a search space of familiar possibilities. In order to make unfamiliar connections between these possibilities, and to keep the number of options in the search space manageable, we developed a heuristic with two major phases.

    In the first phase, following Fillmore’s case theory [9] (which suggests that a requirement can be described in terms of a verb and a noun/object that the verb acts upon), we keep only the verbs and nouns in a dominant topic. In order to achieve unfamiliarity, we flip the part of speech of these words, transforming the familiar verbs and nouns into unfamiliar nouns and verbs, respectively. In the second phase, we make verb-noun combinations by taking unfamiliar verbs from one group and nouns from another, and filter out verb-noun combinations showing high textual similarity with existing requirements. To that end, we obtain the most unfamiliar idea combinations in terms of verb-noun pairs. Finally, a requirements engineer elaborates requirements by filling out templates with the verb-noun pairs (as shown in Figure 3).    
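As a rough illustration of the two phases, the following Python sketch flips the part of speech of toy topic words and forms cross-group verb-noun pairs. The topic words are invented, and real part-of-speech tagging and the similarity filter of [8] are stubbed out:

```python
# Simplified sketch of the two-phase heuristic with invented topic words.

# Phase 1 input: verbs and nouns of a dominant topic per stakeholder group.
group_a_topic = {"verbs": ["search"], "nouns": ["bookmark"]}
group_b_topic = {"verbs": ["sync"], "nouns": ["history"]}

def flip(topic: dict) -> dict:
    """Flip part of speech: familiar nouns become candidate verbs and vice versa."""
    return {"verbs": topic["nouns"], "nouns": topic["verbs"]}

a, b = flip(group_a_topic), flip(group_b_topic)

# Phase 2: combine unfamiliar verbs from one group with nouns from the other.
candidates = [(v, n) for v in a["verbs"] for n in b["nouns"]]
candidates += [(v, n) for v in b["verbs"] for n in a["nouns"]]

# A similarity filter against existing requirements would prune this list;
# in this toy example every pair survives.
print(candidates)
```

Each surviving pair, e.g. ("bookmark", "sync"), would then be handed to a requirements engineer to elaborate via a template.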

    Fig. 3. An example of requirements elaboration [8].

    We have applied our framework on Firefox and Mylyn, two large-scale open source software systems, and created eight new requirements for Firefox and five for Mylyn. An associated human subject study further confirmed the creative merits of these requirements [8]. Although our current framework’s application is limited to existing systems, it is an initial step towards automated support for combinational creativity in RE.

    Epilogue. In our current world that can be defined by its high reliance on computing devices and software systems, competition within the software industry is ever increasing. This competition is going to be even fiercer over time. As software engineering practitioners and academicians, our priority should be to educate and train ourselves with the techniques of creativity in RE.

    Further interested in this topic? Please feel free to read our recent work presented in [8] and share your thoughts.

    1.     J. Lemos, C. Alves, L. Duboc, and G. N. Rodrigues, “A systematic mapping study on creativity in requirements engineering,” in Proceedings of the Annual ACM Symposium on Applied Computing (SAC), 2012, pp. 1083–1088.
    2.     R. J. Sternberg, Handbook of creativity. Cambridge University Press, 1999.
    3.     M. A. Boden, The creative mind: Myths and mechanisms. Routledge, 2003.
    4.     M. Suwa, J. Gero, and T. Purcell, “Unexpected discoveries and S-invention of design requirements: Important vehicles for a design process,” Design Studies, vol. 21, no. 6, pp. 539–567, 2000.
    5.     N. Maiden, S. Jones, K. Karlsen, R. Neill, K. Zachos, and A. Milne, “Requirements engineering as creative problem solving: A research agenda for idea finding,” in Proceedings of the International Requirements Engineering Conference (RE), 2010, pp. 57–66.
    6.     J. Blagdon, “How emoji conquered the world,” The Verge, Vox Media, 2013.
    7.     N. Maiden, “Requirements engineering as information search and idea discovery (keynote),” in Proceedings of the International Requirements Engineering Conference (RE), 2013, pp. 1–1.
    8.     T. Bhowmik, N. Niu, A. Mahmoud, and J. Savolainen, “Automated Support for Combinational Creativity in Requirements Engineering,” in Proceedings of the International Requirements Engineering Conference (RE), 2014, pp. 243-252.
    9.     C. Fillmore, “The case for case,” in Universals in Linguistic Theory, E. Bach and R. Harms, Eds. New York: Holt, Rinehart and Winston, 1968, pp. 1–88.

    Sunday, November 13, 2016

    Why we refactor? Here are 44 different reasons, according to GitHub contributors

    by Danilo Silva, Universidade Federal de Minas Gerais; Nikolaos Tsantalis, Concordia University (@NikosTsantalis); Marco Tulio Valente, Universidade Federal de Minas Gerais (@mtov)
    Associate Editor: Christoph Treude (@ctreude)

    The adoption of refactoring practices was fostered by the availability of refactoring catalogues, such as the one proposed by Martin Fowler [1]. These catalogues propose a name for and describe the mechanics of each refactoring, as well as demonstrate its application through code examples. They also provide a motivation for the refactoring, which is usually associated with the resolution of code smells. For example, Extract Method is recommended to decompose a large and complex method or to eliminate code duplication. However, only a limited number of studies have investigated the real motivations driving refactoring practice. To fill this gap in the literature, we conducted an in-depth investigation of why developers refactor code.

    For 61 days, we monitored the refactoring activity of 748 GitHub Java repositories, using an automated infrastructure we built. Every time we identified a refactoring, we asked the developer who performed it to explain the reasons behind his/her decision to refactor the code. Next, we categorized their responses into different themes of motivations. The following table presents the results of this process, in the form of a catalogue of 44 distinct motivations for refactoring, grouped by 12 well-known refactoring types.

    Table 1: Motivations for each refactoring type.
    Occurrence counts are shown in parentheses after each motivation.

    Extract Method
    • Extract a piece of reusable code from a single place and call the extracted method in multiple places. (43)
    • Introduce an alternative signature for an existing method (e.g., with additional or different parameters) and make the original method delegate to the extracted one. (25)
    • Extract a piece of code having a distinct functionality into a separate method to make the original method easier to understand. (21)
    • Extract a piece of code in a new method to facilitate the implementation of a feature or bug fix, by adding extra code either in the extracted method, or in the original method. (15)
    • Extract a piece of duplicated code from multiple places, and replace the duplicated code instances with calls to the extracted method. (14)
    • Introduce a new method that replaces an existing one to improve its name or remove unused parameters. The original method is preserved for backward compatibility, it is marked as deprecated, and delegates to the extracted one. (6)
    • Extract a piece of code in a separate method to enable its unit testing in isolation from the rest of the original method. (6)
    • Extract a piece of code in a separate method to enable subclasses to override the extracted behavior with more specialized behavior. (4)
    • Extract a piece of code to make it a recursive method. (2)
    • Extract a constructor call (class instance creation) into a separate method. (1)
    • Extract a piece of code in a separate method to make it execute in a thread. (1)
    Move Class
    • Move a class to a package that is more functionally or conceptually relevant. (13)
    • Move a group of related classes to a new subpackage. (7)
    • Convert an inner class to a top-level class to broaden its scope. (4)
    • Move an inner class out of a class that is marked deprecated or is being removed. (3)
    • Move a class from a package that contains external API to an internal package, avoiding its unnecessary public exposure. (2)
    • Convert a top-level class to an inner class to narrow its scope. (2)
    • Move a class to another package to eliminate undesired dependencies between modules. (1)
    • Eliminate a redundant nesting level in the package structure. (1)
    • Move a class back to its original package to maintain backward compatibility. (1)
    Move Attribute
    • Move an attribute to a class that is more functionally or conceptually relevant. (15)
    • Move similar attributes to another class where a single copy of them can be shared, eliminating the duplication. (4)
    Rename Package
    • Rename a package to better represent its purpose. (8)
    • Rename a package to conform to the project's naming conventions. (3)
    • Move a package to a parent package that is more functionally or conceptually relevant. (2)
    Move Method
    • Move a method to a class that is more functionally or conceptually relevant. (8)
    • Move a method to a class that permits its reuse by other classes. (3)
    • Move a method to eliminate dependencies between classes. (3)
    • Move similar methods to another class where a single copy of them can be shared, eliminating duplication. (1)
    • Move a method to permit subclasses to override it. (1)
    Inline Method
    • Inline and eliminate a method that is unnecessary or has become too trivial after code changes. (13)
    • Inline and eliminate a method because its caller method has become too trivial after code changes, so that it can absorb the logic of the inlined method without compromising readability. (2)
    • Inline a method because it is easier to understand the code without the method invocation. (1)
    Extract Superclass
    • Introduce a new superclass that contains common state or behavior from its subclasses. (7)
    • Introduce a new superclass that is decoupled from specific dependencies of a subclass. (1)
    • Extract a superclass from a class that holds many responsibilities. (1)
    Pull Up Method
    • Move common methods to a superclass. (8)
    Pull Up Attribute
    • Move common attributes to a superclass. (7)
    Extract Interface
    • Introduce an interface to enable different behavior. (1)
    • Introduce an interface to facilitate the use of a dependency injection framework. (1)
    • Introduce an interface to avoid depending on an existing class/interface. (1)
    Push Down Attribute
    • Push down an attribute to allow specialization by subclasses. (2)
    • Push down an attribute to a subclass so that the superclass does not depend on a specific type. (1)
    Push Down Method
    • Push down a method to allow specialization by subclasses. (1)

    Our findings confirm that Extract Method is the "Swiss army knife of refactorings". It is the refactoring with the most motivations (11 in total), and the majority of them express an intention to facilitate or even enable the completion of the maintenance task the developer is working on. In contrast, only two motivations for Extract Method (decomposing a method to improve readability and removing duplication) target code smells.

    The other refactorings are performed to improve the system design. For example, the most common motivation for Move Class, Move Attribute, and Move Method is to reorganize code elements, so that they have a stronger functional or conceptual relevance, or to eliminate dependencies between code elements.

    Comparing our results to the code symptoms that initiate refactoring reported in the study by Kim et al. [2], we found readability, reuse, testability, duplication, and dependency concerns in common.

    Automated Refactoring Support

    We also asked developers whether they used the automated refactoring support of an IDE to perform the refactorings. Thus, we could compare our findings with previous studies in this area, leading to the following conclusions.

    • Manual refactoring is still prevalent (55% of the developers refactored the code manually). Tool support for inheritance-related refactorings seems to be the most under-used (only 10% applied automatically), while Move Class and Rename Package are the most trusted refactorings (over 50% applied automatically). The prevalence of manually applied refactorings confirms the findings of Murphy-Hill et al. [3] and Negara et al. [4]. However, it seems that developers apply more automated refactorings nowadays.
    • The IDE plays an important role in the adoption of refactoring tool support. IntelliJ IDEA users perform more automated refactorings (71% done automatically) than Eclipse users (44%) and NetBeans users (50%).
    In addition, 29 developers explained why they did not use a refactoring tool, as summarized in the following table.

    Table 2: Reasons for not using refactoring tools.
    • The developer does not trust automated support for complex refactorings. (10)
    • Automated refactoring is unnecessary, because the refactoring is trivial and can be manually applied. (8)
    • The required modification is not supported by the IDE. (6)
    • The developer is not familiar with the refactoring capabilities of his/her IDE. (3)
    • The developer did not realize at the moment of the refactoring that he/she could have used refactoring tools. (2)

    If you are interested in our study, please refer to our paper accepted at FSE 2016:

    Danilo Silva, Nikolaos Tsantalis, Marco Tulio Valente. Why We Refactor? Confessions of GitHub Contributors. In Proceedings of FSE 2016.


    [1] M. Fowler. Refactoring: Improving the Design of Existing Code. Addison-Wesley, Boston, MA, USA, 1999.
    [2] M. Kim, T. Zimmermann, and N. Nagappan. An empirical study of refactoring challenges and benefits at Microsoft. IEEE Trans. Softw. Eng., 40(7), July 2014.
    [3] E. R. Murphy-Hill, C. Parnin, and A. P. Black. How we refactor, and how we know it. IEEE Trans. Softw. Eng., 38(1):5-18, 2012.
    [4] S. Negara, N. Chen, M. Vakilian, R. E. Johnson, and D. Dig. A comparative study of manual and automated refactorings. In Proceedings of the 27th European Conference on Object-Oriented Programming (ECOOP), pages 552-576, 2013.

    Sunday, November 6, 2016

    When and Why Your Code Starts to Smell Bad

    By: Michele Tufano, College of William and Mary, USA (@tufanomichele)
    Associate Editor: Sonia Haiduc, Florida State University, USA (@soniahaiduc)

    Have you ever tried to modify a large class with too many methods? And what about those unnecessarily complicated multi-level nested loops? How did you feel about that? Those are code smells, and can make the evolution of your system a nightmare.

    More formally, code smells are the clues that indicate violations of fundamental design principles and negatively impact design quality [1]. Several studies demonstrated the negative impact of code smells on change- and fault-proneness [2], software understandability [3] and maintainability [4] [5].

    The question is: when and why are those code smells introduced? Common wisdom suggests that they are introduced during maintenance and evolution activities on software artifacts. However, this conjecture had never been empirically verified. In this work, we empirically answer these questions by analyzing the complete change history of 200 Java software systems belonging to three ecosystems: Apache, Android, and Eclipse. We considered five types of code smells: Blob, Complex Class, Class Data Should Be Private, Functional Decomposition, and Spaghetti Code [1].


    When? - To answer this question we checked out every single commit of the analyzed systems and ran a code smell detector (i.e., DECOR [6]) on the Java classes introduced/modified in the commit. We also computed the values of quality metrics on these classes in order to obtain evolutionary metric trends. These steps allowed us to (i) understand after how many modifications to a software artifact code smells are usually introduced, and (ii) compare the metric trends of clean and smelly software artifacts, looking for significant differences in how fast their metrics’ values increase or decrease.

    Curiosity - How long did it take? Eight weeks on a Linux server with
    7 quad-core 2.67 GHz CPUs (28 cores) and 24 GB of RAM.

    Why? - Here we wanted to understand why developers introduce code smells. In particular, does their workload influence the probability of introducing a code smell? What about the deadline pressure for releases?  Which are the tasks (implementation of new features, bug fixing, refactoring, etc.) that developers perform when introducing code smells? To this aim, we tagged the commits that introduced the smells. To perform such an analysis, we needed to identify those commits responsible for the introduction of a code smell. When the code smell is introduced during the creation of the software artifact, trivially, we just analyzed the first commit, but what about code smell instances that appear after several commits? Which commits should we analyze? If we analyze only the one in which the code smell is identified, we would discard all the change history that led to a smelly artifact! For this reason we defined smell-introducing commits as commits which might have pushed a software artifact toward a smelly direction, looking at discriminating metrics’ trends. For example, in the following figure, commits c3, c5 and c7 are identified as smell-introducing commits and tagged as such.
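As a toy illustration of this idea (not the actual statistical procedure used in the study), a commit could be flagged as smell-introducing when it increases a quality metric, such as WMC, by much more than the typical per-commit growth observed in clean classes. The history values and threshold below are invented:

```python
# Hypothetical sketch: flag commits whose metric increase exceeds the
# typical growth of clean classes, mirroring the c3/c5/c7 example above.

def smell_introducing_commits(metric_history: list, slope_threshold: float) -> list:
    """metric_history: (commit_id, metric_value) pairs in chronological order.
    Returns the commits whose metric increase exceeds the threshold."""
    flagged = []
    for (_, prev), (commit, curr) in zip(metric_history, metric_history[1:]):
        if curr - prev > slope_threshold:
            flagged.append(commit)
    return flagged

# Invented WMC values per commit; c3, c5, and c7 jump well above the threshold.
history = [("c1", 10), ("c2", 11), ("c3", 25), ("c4", 26), ("c5", 40), ("c6", 41), ("c7", 60)]
print(smell_introducing_commits(history, 5))  # ['c3', 'c5', 'c7']
```

All three flagged commits, not only the last one, are then tagged and analyzed, preserving the change history that led to the smelly artifact.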


    When? - While common wisdom suggests that smells are introduced after several activities made on a code component, we found instead that such a component is generally affected by a smell since its creation. Thus, developers introduce smells when they work on a code component for the very first time.

    However, there are also cases where the smells manifest themselves after several changes were performed on the code component. In these cases, files that will become smelly exhibit specific trends for some quality metric values that are significantly different than those of clean (non-smelly) files.

    For example, the Weighted Method Complexity (WMC) of classes that eventually become Blobs, increases more than 230 times faster with respect to clean classes, considering the same initial development time.

    Why? - Smells are generally introduced by developers when enhancing existing features or implementing new ones. As expected, smells are often introduced in the last month before a deadline, while a considerable number of instances are introduced in the first year after the project startup. Finally, developers who introduce smells are generally the owners of the file (i.e., they are responsible for at least 75% of the changes made to it), and they are more prone to introducing smells when they have higher workloads.


    [1] M. Fowler, K. Beck, J. Brant, W. Opdyke, and D. Roberts, Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
    [2] F. Khomh, M. Di Penta, Y.-G. Gueheneuc, and G. Antoniol, “An exploratory study of the impact of antipatterns on class change- and fault-proneness,” Empirical Software Engineering, vol. 17, no. 3, pp. 243–275, 2012.
    [3] M.Abbes, F.Khomh, Y.-G. Gueheneuc, and G.Antoniol, “An empirical study of the impact of two antipatterns, Blob and Spaghetti Code, on program comprehension,” in 15th European Conference on Software Maintenance and Reengineering, CSMR 2011, 1-4 March 2011, Oldenburg, Germany. IEEE Computer Society, 2011, pp. 181–190.
    [4] D. I. K. Sjøberg, A. F. Yamashita, B. C. D. Anda, A. Mockus, and T. Dyba, “Quantifying the effect of code smells on maintenance effort,” IEEE Trans. Software Eng., vol. 39, no. 8, pp. 1144–1156, 2013.
    [5] M. Tufano, F. Palomba, G. Bavota, R. Oliveto, M. Di Penta, A. De Lucia, and D. Poshyvanyk, “When and why your code starts to smell bad,” in Proceedings of the 37th International Conference on Software Engineering (ICSE 2015), Vol. 1, IEEE Press, Piscataway, NJ, USA, pp. 403–414.
    [6] N.Moha, Y.-G. Gueheneuc, L. Duchien, and A.-F.L. Meur, “DECOR: A method for the specification and detection of code and design smells,” IEEE Transactions on Software Engineering, vol. 36, pp. 20–36, 2010.