Sunday, September 11, 2016

Should we care about context-specific code smells?

by Maurício Aniche, Delft University of Technology (@mauricioaniche)
Associate Editor: Christoph Treude (@ctreude)

Detecting code smells that the software development team actually cares about can be challenging. I remember the last software development team I was part of: we were responsible for developing a medium-sized web application. Code quality was, as in many other real-world systems, ok-ish: some components were very well written; others were a nightmare.

That team used to get together often to discuss how to improve the quality of the system's source code. I wanted to talk about God Classes, Feature Envies, and Brain Methods, and all the new strategies to detect them. To my surprise, they never wanted to talk about those: "Yes, this is problematic, but take a look at this Controller and count the number of different endpoints it has... That's more important!", or "Yes, I understand that, but take a look at this Repository... It has lots of complicated SQL. We need to refactor it first".

I realized something that we all take for granted but sometimes forget: context matters. My team wanted to talk about code smells that were specific to their context: a Java-based web application that uses Spring MVC as its MVC framework and Hibernate as its persistence framework. We decided to investigate these smells in more detail. Similar research has already been conducted by other researchers, for example on the usage of object-relational mapping frameworks [1], on Android apps [2,3], and on Cascading Style Sheets (CSS) [4].

Through interviews and surveys with software developers experienced in this kind of architecture, we catalogued six new smells. These smells, however, do not make sense in just any kind of system; they only apply to systems that use Spring MVC. They are specific to the context!

Two examples are Brain Repository, which occurs when a repository contains too much logic, in either SQL or code, and Promiscuous Controller, which occurs when a Controller offers too many actions/endpoints.
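To make the Promiscuous Controller idea concrete, here is a minimal detection sketch in plain Java: it counts Spring's request-mapping annotations in a controller's source text with a regular expression. The annotation set and the threshold of 10 endpoints are illustrative assumptions, not the rules our tooling actually uses.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PromiscuousControllerCheck {

    // Annotations that mark an action/endpoint in Spring MVC (assumed set).
    private static final Pattern ENDPOINT = Pattern.compile(
        "@(RequestMapping|GetMapping|PostMapping|PutMapping|DeleteMapping)\\b");

    // Hypothetical threshold: flag controllers exposing more than 10 endpoints.
    static final int MAX_ENDPOINTS = 10;

    // Counts how many endpoint annotations appear in the controller's source.
    static int countEndpoints(String controllerSource) {
        Matcher m = ENDPOINT.matcher(controllerSource);
        int count = 0;
        while (m.find()) count++;
        return count;
    }

    static boolean isPromiscuous(String controllerSource) {
        return countEndpoints(controllerSource) > MAX_ENDPOINTS;
    }
}
```

A regex over source text is crude (a real detector would parse the AST), but it shows how little machinery a context-specific check needs to get started.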

We have shown [5] that these smells are indeed harmful to their systems: classes affected by them are more change-prone than clean classes. As part of our study, we ran an experiment with 17 developers who knew nothing about these smells. They perceived smelly classes as problematic. Interestingly, although the difference was not statistically significant, they rated "context smells" as more problematic than "traditional smells".

We explored the problem further and investigated how code metric assessment behaves for specific architectures. We found that, for specific roles in the system, traditional code metric assessment based on thresholds can lead to unexpected results. For example, in MVC applications, Controllers have significantly higher coupling metrics than other classes. Thus, if we use the same threshold to find problematic Controllers as we do for other classes, we end up with lots of false positives.

To address this, we propose SATT [11]. The approach

  1. analyzes the distribution of code metric values for each architectural role in a software system,
  2. verifies whether the distribution is significantly higher or lower than that of other classes, and
  3. provides a specific threshold for that architectural role in that software architecture.
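The steps above can be sketched in a few lines of Java. This is a simplified stand-in for SATT: where SATT applies proper statistical tests to compare distributions, the sketch below just compares medians against a hypothetical `gap`, and the 90th-percentile threshold is an illustrative choice, not the one from the paper.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RoleThresholds {

    // Nearest-rank percentile over a sorted copy of the values.
    static double percentile(List<Double> values, double p) {
        List<Double> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(0, idx));
    }

    static double median(List<Double> values) {
        return percentile(values, 50);
    }

    // Derive a threshold for one architectural role: if its metric values
    // clearly differ from the rest of the system (here: medians differ by
    // more than `gap`, a crude stand-in for SATT's statistical test), use a
    // role-specific percentile; otherwise fall back to the global threshold.
    static double thresholdFor(List<Double> roleValues,
                               List<Double> otherValues,
                               double gap) {
        double globalThreshold = percentile(otherValues, 90);
        if (Math.abs(median(roleValues) - median(otherValues)) > gap) {
            return percentile(roleValues, 90);
        }
        return globalThreshold;
    }
}
```

With coupling values like those we observed for Controllers (much higher than the rest of the system), the role-specific threshold ends up far above the global one, which is exactly what avoids the false positives described above.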

Our approach seems to provide better assessments for architectural roles whose metric values differ significantly from those of other classes.

If you use Spring MVC, you can make use of all of this with our open-source Springlint tool.

With that said, here are a few tasks practitioners can perform to better spot code smells:

  • Understand your system's architecture and which smells are specific to it. For example, if you are developing a mobile application, your code might suffer from smells that would not occur in a web service.
  • Share this knowledge with the rest of your team. In another study, we found that developers do not have a common perception of their system's architecture [6].
  • Look for simple detection strategies and implement them. A computer can spot these classes much faster than you can!
  • Monitor and safely refactor the smelly classes.

We are not arguing that God Classes or other traditional smells are not useful. They are, and researchers have shown their negative impact on source code before [7,8,9,10]. But your application may smell different from these, and the smell may not be good.


[1] Tse-Hsun Chen, Weiyi Shang, Zhen Ming Jiang, Ahmed E. Hassan, Mohamed Nasser, and Parminder Flora. 2014. Detecting performance anti-patterns for applications developed using object-relational mapping. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 1001-1012.
[2] Daniël Verloop. 2013. Code Smells in the Mobile Applications Domain. Master's thesis, Delft University of Technology.
[3] Geoffrey Hecht, Romain Rouvoy, Naouel Moha, and Laurence Duchien. 2015. Detecting antipatterns in Android apps. In Proceedings of the Second ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft '15). IEEE Press, Piscataway, NJ, USA, 148-149.
[4] Davood Mazinanian and Nikolaos Tsantalis. 2016. An empirical study on the use of CSS preprocessors. In Proceedings of the 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER '16). IEEE Computer Society, Washington, DC, USA, 168-178.
[5] Maurício Aniche, Gabriele Bavota, Christoph Treude, Arie van Deursen, Marco Gerosa. 2016. A Validated Set of Smells in Model-View-Controller Architecture. In Proceedings of the 32nd International Conference on Software Maintenance and Evolution (ICSME '16), IEEE Computer Society, Washington, DC, USA, to appear.
[6] Maurício Aniche, Christoph Treude, Marco Gerosa. 2016. Developers' Perceptions on Object-Oriented Design and System Architecture. In Proceedings of the 30th Brazilian Symposium on Software Engineering (SBES 2016). To appear.
[7] Fabio Palomba, Gabriele Bavota, Massimiliano Di Penta, Rocco Oliveto, and Andrea De Lucia. 2014. Do They Really Smell Bad? A Study on Developers' Perception of Bad Code Smells. In Proceedings of the 2014 IEEE International Conference on Software Maintenance and Evolution (ICSME '14). IEEE Computer Society, Washington, DC, USA, 101-110.
[8] Foutse Khomh, Massimiliano Di Penta, and Yann-Gael Gueheneuc. 2009. An Exploratory Study of the Impact of Code Smells on Software Change-proneness. In Proceedings of the 16th Working Conference on Reverse Engineering (WCRE '09). IEEE Computer Society, Washington, DC, USA, 75-84.
[9] Marwen Abbes, Foutse Khomh, Yann-Gael Gueheneuc, and Giuliano Antoniol. 2011. An Empirical Study of the Impact of Two Antipatterns, Blob and Spaghetti Code, on Program Comprehension. In Proceedings of the 15th European Conference on Software Maintenance and Reengineering (CSMR '11). IEEE Computer Society, Washington, DC, USA, 181-190.
[10] Foutse Khomh, Massimiliano Di Penta, Yann-Gaël Guéhéneuc, and Giuliano Antoniol. 2012. An exploratory study of the impact of antipatterns on class change- and fault-proneness. Empirical Software Engineering 17, 3 (June 2012), 243-275.
[11] Maurício Aniche, Christoph Treude, Andy Zaidman, Arie van Deursen, and Marco Gerosa. 2016. SATT: Tailoring Code Metric Thresholds for Different Software Architectures. In Proceedings of the 16th International Working Conference on Source Code Analysis and Manipulation (SCAM '16), IEEE Computer Society, Washington, DC, USA, to appear.
