by Johannes Bräuer and Reinhold Plösch (Johannes Kepler University Linz), and Matthias Saft and Christian Körner (Corporate Technology, Siemens AG)
Associate Editor: Christoph Treude (@ctreude)
In the past, software metrics were used to express the compliance of source code with object-oriented design aspects [1], [2]. However, metrics have proven too vague to drive concrete design improvements [3], which led to the idea of identifying code or design smells in source code [4].
Despite good progress in localizing design flaws through the detection of design smells, these smells are still too fine-grained to support an overall design assessment. Consequently, we follow the idea of measuring and assessing the compliance of source code with object-oriented design principles [5]. To do so, we systematically collected design principles that are applied in practice and jointly derived more tangible design best practices from them [6]. These practices have the key advantage of being specific enough (1) to be applied by practitioners and (2) to be identified by an automated tool. As a result, we developed the static code analysis tool MUSE, which currently contains a set of 67 design best practices (design rules) for the programming languages Java, C#, and C++ [7].
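To make these best practices more tangible, the following purely illustrative Java sketch shows a hypothetical Account class (our own example, not MUSE output) that violates two of the rules listed in Table 1, together with a compliant variant:

```java
// Illustrative sketch only (not MUSE output): hypothetical code that violates two
// of the design best practices listed in Table 1.
public class Account {

    // Violates AvoidPublicFields: internal state is exposed directly.
    public double balance;

    private int warnings;

    // Violates AvoidCommandsInQueryMethods: a query ("get...") also changes state.
    public double getBalanceAndResetWarnings() {
        warnings = 0;       // command (side effect) hidden inside a query
        return balance;
    }
}

// Compliant variant: state is encapsulated, queries are side-effect free,
// and the command is a separate method.
class CompliantAccount {

    private double balance;
    private int warnings;

    public double getBalance() {   // pure query
        return balance;
    }

    public void resetWarnings() {  // explicit command
        warnings = 0;
    }
}
```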
Naturally, not all design best practices are equally important. To determine their relative importance, we conducted a survey to gather data that allow a more differentiated view of the importance of the Java-related design best practices (a subset of 49 rules).
Survey on the Importance of Design Best Practices
The survey was open from 26 October until 21 November 2016. In total, 214 software professionals (software engineers, architects, consultants, etc.) completed it, resulting in an average of 134 opinions for each design best practice. Based on these data, we derived a default importance for each practice, as shown in Table 1. The arrows mark design best practices that are close to the next higher (↑) or lower (↓) importance level. Furthermore, we calculated a range based on the standard deviation within which the importance may be increased or decreased. These data can serve as a basis for assessing quality and planning quality improvements.
Table 1. Design best practices ordered by importance

| Design Best Practice | Default Importance | Importance Range |
| --- | --- | --- |
| AvoidDuplicates | very high | very high |
| AvoidUsingSubtypesInSupertypes | very high | high–very high |
| AvoidPackageCycles | very high | high–very high |
| AvoidCommandsInQueryMethods | very high | high–very high |
| AvoidPublicFields | very high | high–very high |
| DocumentInterfaces | high ↑ | moderate–very high |
| AvoidLongParameterLists | high ↑ | high–very high |
| UseInterfacesIfPossible | high ↑ | moderate–very high |
| AvoidStronglyCoupledPackages | high | moderate–very high |
| AvoidNonCohesiveImplementations | high | moderate–very high |
| AvoidUnusedClasses | high | moderate–very high |
| DontReturnUninvolvedData | high | moderate–very high |
| AvoidNonCohesivePackages | high | moderate–very high |
| DocumentPublicMethods | high | moderate–very high |
| UseCompositionNotInheritance | high | moderate–very high |
| DocumentPublicClasses | high | moderate–very high |
| AvoidPublicStaticFields | high | moderate–very high |
| AvoidDiamondInheritanceStructuresInterfaces | high | moderate–very high |
| AvoidLongMethods | high | moderate–very high |
| AvoidSimilarNamesForDifferentDesignElements | high | moderate–very high |
| AvoidUnusedAbstractions | high | moderate–very high |
| CheckUnsuitableFunctionality | high | moderate–very high |
| AvoidSimilarAbstractions | high | moderate–very high |
| DocumentPackages | high ↓ | low–very high |
| UseInterfacesAsReturnType | high ↓ | low–very high |
| AvoidUncheckedParametersOfSetters | high ↓ | high–very high |
| AvoidSimilarNamesForSameDesignElements | moderate ↑ | moderate–high |
| CheckObjectInstantiationsByName | moderate ↑ | low–high |
| AvoidRepetitionOfPackageNamesOnAPath | moderate ↑ | low–high |
| ProvideInterfaceForClass | moderate | low–high |
| AvoidRuntimeTypeIdentification | moderate | low–high |
| AvoidDirectObjectInstantiations | moderate | low–high |
| CheckUnusedSupertypes | moderate | low–high |
| AbstractPackagesShouldNotDependOnOtherPkg | moderate | low–high |
| DontReturnMutableCollectionsOrArrays | moderate | low–high |
| AvoidMassiveCommentsInCode | moderate | low–high |
| AvoidReturningDataFromCommands | moderate | low–high |
| UseAbstractions | moderate ↓ | very low–high |
| CheckUsageOfNonFullyQualifiedPackageNames | low | low–moderate |
| AvoidManySetter | low | very low–moderate |
| AvoidHighNumberOfSubpackages | low | very low–moderate |
| AvoidConcretePackagesNotUsedFromOtherPkg | low | very low–moderate |
| AvoidSettersForHeavilyUsedFields | low | very low–moderate |
| AvoidAbstractClassesWithOneExtension | low | very low–moderate |
| DontInstantiateImplementationsInClients | low | very low–moderate |
| AvoidManyGetters | low | very low–moderate |
| AvoidProtectedFields | low | very low–moderate |
| CheckDegradedPackageStructure | low | very low–moderate |
| AvoidManyTinyMethods | low | very low–moderate |
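The exact aggregation procedure behind Table 1 is described in [11]; as a rough illustration of the idea only, the following Java sketch makes an assumption that the ratings are mapped to a numeric scale (1 = very low … 5 = very high), the default importance is the rounded mean, and the range is the mean plus/minus one standard deviation, clamped to the scale:

```java
import java.util.List;

// Simplified sketch (our assumption, not the exact procedure from the survey paper):
// default importance = rounded mean rating; importance range = mean +/- one
// standard deviation, clamped to the 1..5 scale.
public class ImportanceAggregation {

    static final String[] LEVELS = {"very low", "low", "moderate", "high", "very high"};

    public static void main(String[] args) {
        List<Integer> ratings = List.of(5, 4, 5, 3, 4, 5, 4); // answers for one hypothetical rule

        double mean = ratings.stream().mapToInt(Integer::intValue).average().orElse(0);
        double variance = ratings.stream()
                .mapToDouble(r -> (r - mean) * (r - mean))
                .average().orElse(0);
        double sd = Math.sqrt(variance);

        int defaultLevel = (int) Math.round(mean);
        int lower = (int) Math.max(1, Math.round(mean - sd));
        int upper = (int) Math.min(5, Math.round(mean + sd));

        System.out.printf("default importance: %s%n", LEVELS[defaultLevel - 1]);
        System.out.printf("importance range:   %s-%s%n", LEVELS[lower - 1], LEVELS[upper - 1]);
    }
}
```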
Beyond the Result of the Importance Assessment
Based on the survey results, we expanded our research in two directions: we further examined our idea of operationalizing design principles, and we proposed a design debt prioritization approach to guide design improvement activities.
While the survey findings provided evidence of the importance of design best practices, the open question was whether the practices assigned to a specific design principle cover the essential aspects of that principle or merely touch on minor design concerns. To answer this question and to identify white spots in the operationalization of certain principles, we conducted focus group research on 10 selected principles with 31 software design experts in six focus groups [8]. This investigation showed that our design best practices are able to measure and assess the major aspects of the examined design principles.
In the course of the focus group discussions and while communicating the survey results to practitioners, we identified the need to prioritize design best practice violations not only by their importance but also by the current quality state. As a result, we proposed a portfolio-based assessment approach that combines the importance of each design best practice (y-axis in Figure 1) with a quality index (x-axis in Figure 1) derived from a benchmark suite [9], [10]. This combination is presented as a portfolio matrix, as depicted in Figure 1 for the measurement results of a particular open-source project; in total, the 49 design best practices for Java are shown. Addressing all 49 best practices at once is time-consuming and could overwhelm a project team. Consequently, the portfolio-based assessment approach groups the design best practices into four so-called investment areas, each of which recommends a concrete improvement strategy.
Figure 1: Investment areas of the portfolio matrix
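For illustration, the following minimal Java sketch shows how such a quadrant assignment could look. The investment-area names, the 0–1 quality index, and the thresholds are placeholders chosen for this example, not the definitions used in [9], [10]:

```java
// Minimal sketch with assumed thresholds and placeholder area names (not the
// exact scheme from [9], [10]): each practice is placed in the portfolio matrix
// by its importance (y-axis) and its benchmark-based quality index (x-axis).
public class PortfolioMatrix {

    enum InvestmentArea { FIX_NOW, PLAN_IMPROVEMENT, KEEP_LEVEL, OBSERVE }

    // importance: 1 = very low ... 5 = very high; qualityIndex: 0 (poor) ... 1 (good)
    static InvestmentArea classify(int importance, double qualityIndex) {
        boolean important = importance >= 4;       // assumed cut between "high" and "moderate"
        boolean goodQuality = qualityIndex >= 0.5; // assumed cut on the quality axis
        if (important && !goodQuality)  return InvestmentArea.FIX_NOW;
        if (important && goodQuality)   return InvestmentArea.KEEP_LEVEL;
        if (!important && !goodQuality) return InvestmentArea.PLAN_IMPROVEMENT;
        return InvestmentArea.OBSERVE;
    }

    public static void main(String[] args) {
        System.out.println(classify(5, 0.2)); // very important rule with poor quality -> FIX_NOW
        System.out.println(classify(2, 0.8)); // low-importance rule with good quality -> OBSERVE
    }
}
```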
Concluding Remarks
To summarize this blog entry and to answer the question posed in the title, let us reconsider the opinions of the 214 survey participants. From their responses, we derived the importance of the 49 design best practices, five of which were judged to be of very high importance: code duplicates (code clones), supertypes using subtypes, package cycles, commands in query methods, and public fields are the design concerns considered most important. In other words, avoiding violations of these design rules in practice can enhance the flexibility, reusability, and maintainability of a software product.
For more details about the survey, we refer interested readers to the research article “A Survey on the Importance of Object-Oriented Design Best Practices” [11].
References
[1] S. R. Chidamber and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Trans. Softw. Eng., vol. 20, no. 6, pp. 476–493, Jun. 1994.
[2] J. Bansiya and C. G. Davis, “A hierarchical model for object-oriented design quality assessment,” IEEE Trans. Softw. Eng., vol. 28, no. 1, pp. 4–17, Jan. 2002.
[3] R. Marinescu, “Measurement and quality in object-oriented design,” in Proceedings of the 21st IEEE International Conference on Software Maintenance (ICSM), Budapest, Hungary, 2005, pp. 701–704.
[4] R. Marinescu, “Detection strategies: metrics-based rules for detecting design flaws,” in Proceedings of the 20th IEEE International Conference on Software Maintenance, Chicago, IL, USA, 2004, pp. 350–359.
[5] J. Bräuer, “Measuring Object-Oriented Design Principles,” in Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), Lincoln, NE, USA, 2015, pp. 882–885.
[6] R. Plösch, J. Bräuer, C. Körner, and M. Saft, “Measuring, Assessing and Improving Software Quality based on Object-Oriented Design Principles,” Open Comput. Sci., vol. 6, no. 1, 2016.
[7] R. Plösch, J. Bräuer, C. Körner, and M. Saft, “MUSE - Framework for Measuring Object-Oriented Design,” J. Object Technol., vol. 15, no. 4, pp. 2:1–29, Aug. 2016.
[8] J. Bräuer, R. Plösch, M. Saft, and C. Körner, “Measuring Object-Oriented Design Principles: The Results of Focus Group-Based Research,” J. Syst. Softw., 2018.
[9] J. Bräuer, M. Saft, R. Plösch, and C. Körner, “Improving Object-oriented Design Quality: A Portfolio- and Measurement-based Approach,” in Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement (IWSM-Mensura), Gothenburg, Sweden, 2017, pp. 244–254.
[10] J. Bräuer, R. Plösch, M. Saft, and C. Körner, “Design Debt Prioritization - A Design Best Practice-Based Approach,” in Proceedings of the 1st International Conference on Technical Debt (TechDebt), 2018.
[11] J. Bräuer, R. Plösch, M. Saft, and C. Körner, “A Survey on the Importance of Object-Oriented Design Best Practices,” in Proceedings of the 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Vienna, Austria, 2017, pp. 27–34.