Monday, July 17, 2017

Performance testing in Java-based open source projects

by Cor-Paul Bezemer (@corpaul), Queen's University, Canada
Associate Editor: Zhen Ming (Jack) Jiang, York University, Canada


From a functional perspective, the quality of open source software (OSS) is on par with comparable closed-source software [1]. However, in terms of nonfunctional attributes, such as reliability, scalability, or performance, the quality is less well-understood. For example, Heger et al. [2] stated that performance bugs in OSS go undiscovered for a longer time than functional bugs, and fixing them takes longer.

As many OSS libraries (such as apache/log4j) are used almost ubiquitously across a large span of other OSS or industrial applications, a performance bug in such a library can lead to widespread slowdowns. Hence, it is of utmost importance that the performance of OSS is well-tested.

We studied 111 Java-based open source projects from GitHub to explore to what extent and how OSS developers conduct performance tests. First, we searched for projects that included at least one of the keywords 'bench' or 'perf' in the 'src/test' directory. Second, we manually identified the performance and functional tests inside that project. Third, we identified performance-sensitive projects, which mentioned in the description of their GitHub repository that they are the 'fastest', 'most efficient', etc. For a more thorough description of our data collection process, see our ICPE 2017 paper [3]. In the remainder of this blog post, the most significant findings of our study are highlighted.
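
As a rough illustration of the first, automated step, the keyword filter essentially boils down to scanning each project's test directory for paths that mention one of the keywords. The sketch below is a minimal reconstruction of that idea in plain Java, written for this post rather than the scripts actually used in the study; the repository path is a placeholder, and the study may also have inspected file contents rather than file names alone.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class KeywordFilter {

    // Collect all files under src/test whose path mentions 'bench' or 'perf'.
    static List<Path> candidatePerformanceTests(Path repositoryRoot) throws IOException {
        Path testRoot = repositoryRoot.resolve("src/test");
        if (!Files.isDirectory(testRoot)) {
            return List.of();
        }
        try (Stream<Path> files = Files.walk(testRoot)) {
            return files
                    .filter(Files::isRegularFile)
                    .filter(p -> {
                        String lowerCasePath = p.toString().toLowerCase();
                        return lowerCasePath.contains("bench") || lowerCasePath.contains("perf");
                    })
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder path: point this at a locally cloned GitHub project.
        for (Path candidate : candidatePerformanceTests(Paths.get("/tmp/some-project"))) {
            System.out.println(candidate);
        }
    }
}

Projects with at least one such candidate file then went through the manual classification steps described above.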

Finding # 1 - Performance tests are maintained by a single developer or a small group of developers. 
In 50% of the projects, all performance test developers are one or two core developers of the project. In addition, only 44% of the developers who worked on tests also worked on the performance tests.

Finding # 2 - Compared to the functional tests, performance tests are small in most projects. 
The median SLOC (source lines of code) in performance tests in the studied projects was 246, while the median SLOC of functional tests was 3980. Interestingly, performance-sensitive projects do not seem to have more or larger performance tests than non-performance-sensitive projects.

Finding # 3 - There is no standard for the organization of performance tests. 
In 52% of the projects, the performance tests are scattered throughout the functional test suite. In 9% of the projects, code comments are used to communicate how a performance test should be executed. For example, the RangeCheckMicroBenchmark.java file from the nbronson/snaptree project contains the following comment:
/*
* This is not a regression test, but a micro-benchmark.
*
* I have run this as follows:
*
* repeat 5 for f in -client -server;
* do mergeBench dolphin . jr -dsa\
*       -da f RangeCheckMicroBenchmark.java;
* done
*/
public class RangeCheckMicroBenchmark {
...
}

In four projects, we even observed that code comments were used to communicate the results of a previous performance test run.

Finding # 4 - Most projects have performance smoke tests. 
We identified the following five types of performance tests in the studied projects:
  1. Performance smoke tests: These tests (50% of the projects) typically measure the end-to-end execution time of important functionality of the project.
  2. Microbenchmarks: 32% of the projects use microbenchmarks, which can be considered performance unit tests. Stefan et al. [4] studied microbenchmarks in depth in their ICPE 2017 paper.
  3. One-shot performance tests: These tests (15% of the projects) were meant to be executed once, e.g., to test the fix for a performance bug.
  4. Performance assertions: 5% of the projects try to integrate performance tests into the unit-testing framework, which results in performance assertions. For example, the TransformerTest.java file from the anthonyu/Kept-Collections project asserts that one bytecode serialization method is at least four times as fast as the alternative (a generic sketch of this pattern follows below, after this list).
  5. Implicit performance tests: 5% of the projects do not have dedicated performance tests, but simply report a performance metric as a by-product of other tests (e.g., the execution time of the unit test suite). 
The different types of tests show that there is a need for performance tests at different levels, ranging from low-level microbenchmarks to higher-level smoke tests.
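
To make the performance-assertion pattern (type 4 above) concrete, the sketch below shows the general shape of such a test when it is embedded in JUnit. It is a simplified, hypothetical example in the spirit of the Kept-Collections test, not the actual TransformerTest.java; the serializeFast and serializeSlow methods are placeholders for the two competing implementations.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class SerializationPerformanceTest {

    // Placeholders standing in for the two serialization strategies being compared.
    private byte[] serializeFast(Object payload) { return new byte[0]; }
    private byte[] serializeSlow(Object payload) { return new byte[0]; }

    @Test
    public void fastSerializerShouldBeAtLeastFourTimesFaster() {
        Object payload = "some representative payload";
        int iterations = 10_000;

        long fastStart = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            serializeFast(payload);
        }
        long fastTime = System.nanoTime() - fastStart;

        long slowStart = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            serializeSlow(payload);
        }
        long slowTime = System.nanoTime() - slowStart;

        // The performance assertion: the build fails if the speed-up drops below 4x.
        assertTrue("expected at least a 4x speed-up", fastTime * 4 <= slowTime);
    }
}

Such assertions are easy to wire into an existing test suite, but they are also fragile: timings measured this way fluctuate with the hardware and the state of the JVM, which may be one reason the pattern remains rare in the studied projects.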

Finding # 5 - Dedicated performance test frameworks are rarely used. 
Only 16% of the studied projects used a dedicated performance test framework, such as JMH or Google Caliper. Most projects use a unit test framework to conduct their performance tests. One possible explanation is that developers are trying hard to integrate their performance tests into the continuous integration processes. 
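
For comparison, the sketch below shows roughly what the same kind of measurement looks like in a dedicated framework such as JMH. This is a minimal, hypothetical example (the benchmarked method body is a placeholder); JMH takes care of warm-up iterations, forking, and basic statistics, which a hand-rolled unit-test-based measurement has to handle itself.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class SerializationBenchmark {

    // The method under test; returning the result helps prevent dead-code elimination.
    @Benchmark
    public byte[] serializeFast() {
        return new byte[0]; // placeholder for the real serialization call
    }
}

Benchmarks like this are typically built and run through the JMH Maven archetype or a Gradle plugin rather than through the regular unit test runner, which may explain why integrating them into existing continuous integration pipelines takes extra effort.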

The main takeaway of our study

Our observations imply that developers are currently missing a “killer app” for performance testing, which would likely standardize how performance tests are conducted, in the same way that JUnit standardized unit testing for Java. A ubiquitous performance testing tool will need to support performance tests at different levels of abstraction (smoke tests versus detailed microbenchmarking), provide strong integration with existing build and CI tools, and support both extensive testing with rigorous methods and quick-and-dirty tests that pair reasonable expressiveness with being fast to write and maintain, even by developers who are not experts in software performance engineering.

References

[1] M. Aberdour. Achieving quality in open-source software. IEEE Software. 2007.
[2] C. Heger, J. Happe, and R. Farahbod. Automated Root Cause Isolation of Performance Regressions During Software Development. In Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering (ICPE). 2013.
[3] P. Leitner and C.-P. Bezemer. An exploratory study of the state of practice of performance testing in Java-based open source projects. In Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering (ICPE). 2017.
[4] P. Stefan, V. Horky, L. Bulej, and P. Tuma. Unit testing performance in Java projects: Are we there yet? In Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering (ICPE). 2017.

If you like this article, you might also enjoy reading:

[1] Jose Manuel Redondo, Francisco Ortin. A Comprehensive Evaluation of Common Python Implementations. IEEE Software. 2015.
[2] Yepang Liu, Chang Xu, Shing-Chi Cheung. Diagnosing Energy Efficiency and Performance for Mobile Internetware Applications. IEEE Software. 2015.
[3] Francisco Ortin, Patricia Conde, Daniel Fernández Lanvin, Raúl Izquierdo. The Runtime Performance of invokedynamic: An Evaluation with a Java Library. IEEE Software. 2014.

    Monday, July 10, 2017

    Crowdsourced Exploration of Mobile App Features: A Case Study of the Fort McMurray Wildfire

    By: Maleknaz Nayebi @MaleknazNayebi
    Associate editor: Federica Sarro @f_sarro


    Software products have an undeniable impact on people's daily life. However, software can only help if it matches users' needs. Often, up to 80% of software features are never or almost never used. To have a real impact on society, understanding the specific needs of users is critical. Social media provide such an opportunity to a good extent.

    This post summarizes the main idea of an ICSE 2017 SEIS track paper titled "Crowdsourced exploration of mobile app features: a case study of the Fort McMurray wildfire". The two interviews linked at the end of this post complement the description and highlight the results. 

    We gathered the online communications of Albertans about the Fort McMurray fire at the time of the crisis. People formed unofficial online support groups on Facebook and Twitter to distribute and respond to the needs of evacuees. For example, to share a car, fuel, or baby clothes, to get information about road traffic and gas station lineups, or to report incidents or criminal activity, they posted on Twitter or Facebook with the #yymfire or #FortMcMurray hashtags. Other members following these hashtags then offered help. In emergency situations (such as natural disasters or man-made attacks), a cell phone may suddenly become a victim's only resource.

    We developed a method called MAPFEAT to gather and analyze social media posts. With MAPFEAT, we elicit requirements from this unstructured communication and automatically map them to app features that already exist in apps across the whole app store. By evaluating these features through crowdsourcing, MAPFEAT ranks and prioritizes the app features that are expected to best match user needs and thus should be included in a software application. 
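
    As a toy illustration of the mapping step only (a simplification written for this post, not MAPFEAT's actual pipeline, which combines several analytical and AI techniques described in the paper), a post can be matched against a catalogue of known app-feature descriptions with a simple token-overlap score. The feature descriptions below are invented for the example.

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class FeatureMatcher {

        // Naive Jaccard similarity between two tokenized texts.
        static double similarity(String a, String b) {
            Set<String> tokensA = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\W+")));
            Set<String> tokensB = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\W+")));
            Set<String> intersection = new HashSet<>(tokensA);
            intersection.retainAll(tokensB);
            Set<String> union = new HashSet<>(tokensA);
            union.addAll(tokensB);
            return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            // Hypothetical feature descriptions mined from existing emergency apps.
            List<String> features = List.of(
                    "report road traffic and closures",
                    "find nearby gas stations and lineups",
                    "offer or request a ride for evacuees");

            String tweet = "Anyone know which gas stations still have fuel? Huge lineups near the highway #yymfire";

            features.stream()
                    .max(Comparator.comparingDouble((String f) -> similarity(tweet, f)))
                    .ifPresent(best -> System.out.println("Best matching feature: " + best));
        }
    }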

    In the case of the Fort McMurray fire, we analyzed almost 70,000 tweets and mapped them to app features using MAPFEAT. We compared the features we obtained with the features already existing in 26 emergency apps. The results showed that none of the top 10 most essential features for victims was available in any of the 26 apps. Among the top 40 essential features we gathered, only six were provided by some of the existing wildfire apps. In total, we mined 144 features, and 85% of them were evaluated as essential and worthwhile by the general public.

    The mismatch between users' requirements and software design is a well-known software engineering problem. With lightweight, high-capacity mobile devices, software engineering is now, more than ever, a way to help people solve problems. This will only be possible, however, if we find ways to involve the general public in the decision process.  

    Links to interviews:

    Fort McMurray wildfire tweets show key info missing during disasters: Alberta study

    Evacuees struggled for answers during Fort McMurray wildfire: U of C

    MAPFEAT combines a series of analytical techniques and AI methods. For more information, please refer to: Nayebi, M., Quapp, R., Ruhe, G., Marbouti, M., & Maurer, F. (2017, May). Crowdsourced exploration of mobile app features: a case study of the Fort McMurray wildfire. In Proceedings of the 39th International Conference on Software Engineering: Software Engineering in Society Track (pp. 57-66). IEEE Press.

    Monday, June 26, 2017

    On the rise of Casual Contributions in postmodern Software Development - How does it impact Software Quality

    By: Naveen N Kulkarni
    Associate Editor: Sridhar Chimalakonda (@ChimalakondaSri)

    Postmodernism, a term used in a variety of disciplines such as art, literature, fashion, music, film, and technology, describes philosophical movements and shifts that reject their predecessors. In software engineering, too, postmodern views have challenged the rigidity of traditional approaches: agile methods challenged the need for upfront requirements gathering, perpetual beta cycles treat users as co-developers, and heterogeneous application stacks are being replaced with homogeneous stacks such as Node.js and Vert.x. In the case of collaboration in software development, we see the norm of controlled contributions giving way to shared contributions and, further, to fragmented contributions combined with social underpinnings.

    In the context of Open Source Software (OSS), empirical studies have shown that developers aim to rapidly deliver 'credible promises' to keep their projects viable and active [Sojer, Haefliger]. With social interactions gaining popularity on the Internet, casual contributions (popularly known as pull requests on platforms such as GitHub and Bitbucket) to OSS are increasing. Empirical studies suggest that such contributions are not trivial, but involve bug fixes, refactoring, and new features. While an OSS project can deliver on its promises faster with such requests, a large volume of them can quickly overwhelm the developers who must evaluate whether or not to accept them. In order to deliver faster, it is intuitive that developers choose project relevance and code quality as the primary factors to evaluate pull requests (substantiated in a recent survey by Gousios et al.). To minimize the effort, many OSS projects use strict policies that include discussions, linting, clean merges, tests, and code reviews, most of which are automated through continuous integration tools. As casual contributions can be unplanned and opportunistic, the development team attempts to understand the context of a contribution through prior discussions. For example, in the case of Scala, pull requests are accepted only if they have been discussed on the 'scala-internals' mailing list. Also, mandatory discussion with more than one developer is required, depending on the complexity of the contribution. All of this is undertaken as part of the quality assessment of the casual contribution. However, with such extensive checks and large volumes, pull requests can quickly get stale, making it even harder for developers and contributors to understand the impact of composing multiple parallel contributions.

    Surveys of work practices in managing and integrating pull requests suggest that every contribution requires multiple code review sessions. This is substantiated by the fact that code reviews require "code reading" to find defects, which is greatly dependent on individual expertise. Unfortunately, though they are perceived as a best practice, they are often localized and informal. Today's code review sessions include interactive sessions, discussions, and in-line comments. A study of the modern code review process shows that 35% of the review suggestions are discarded and 23% of the changes are applied after review (Beller M et al.). It has been suggested that this process can be made more people-independent in postmodern software engineering. On the quality side, there is very little qualitative evidence relating to the code review process. A recent study by McIntosh et al. suggests that review coverage is an important factor in ensuring quality. However, quality criteria are subjective during code reviews, and are often restricted to statically analyzed source code using popular continuous integration tools (such as Sonar or Squale), coding style, and test cases.

    Despite the challenges, we observe that casual contributions are rising. For example, at the time of writing this post, the rails and redis projects have nearly 500 open pull requests. The bootstrap, npm, tensorflow, and docker projects have nearly 100 open pull requests. More detailed analyses of the growth of pull requests can be found at http://ghtorrent.org/pullreq-perf/ and https://octoverse.github.com/. As we adopt different approaches to composing software from frequent fragmented contributions, we are faced with challenges such as emergent behavior, constraint violations, and conflicting assumptions. These challenges arise due to the parallel and isolated nature of the contributions. Though these challenges are not new, we believe the current practices for quality assessment are inadequate to address them. The techniques used today help developers quickly spot local mismatches, but they are not effective for comprehending global mismatches that originate from the many complex relationships among software elements. In our initial study, we found similar evidence when an opportunistically reused external module not only violated existing constraints, but also added unanticipated behavior that made the software more restrictive.

    Attempts to address global mismatches using architectural reasoning and design decision approaches have so far met with limited success. Code reviews alone are also insufficient to highlight the ramifications of a contribution on global constraints and mismatches. To overcome this issue, the merge process should extend beyond syntactic merging to include techniques for identifying constraint mismatches. We believe that validating constraints synthesized as predicates from the hot paths (critical execution flows) can help in validating global mismatches. In the past, predicate abstraction has been used effectively for software verification (e.g., SLAM). While predicate abstraction can provide theoretical guarantees, synthesizing the predicates can be overwhelming due to state explosion. To overcome this, we can use descriptive statistical models (created from features available in the source code) to choose a subset of predicates sufficient for verification. Alternatively, mining software repositories can play a pivotal role in developing alternative approaches, where it not only provides factual data but also supports the decision process. Constraints can also be mined as sets of sequential patterns (e.g., using SPADE, a popular sequence mining algorithm), where comparing sequences from the source code for dissimilarities can suggest mismatches.
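
    As a loose illustration of the last idea only (a deliberate simplification for this post, not the full approach sketched above), a hot path can be represented as a sequence of call identifiers, and calls from the baseline path that disappear after a contribution is merged can be flagged as candidate constraint violations. Real constraint mining with SPADE-style sequential patterns is considerably more involved.

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    public class HotPathDiff {

        // Report baseline calls that no longer occur on the hot path after the change.
        static Set<String> droppedCalls(List<String> baselinePath, List<String> changedPath) {
            Set<String> dropped = new LinkedHashSet<>(baselinePath);
            dropped.removeAll(changedPath);
            return dropped;
        }

        public static void main(String[] args) {
            // Hypothetical call sequences, e.g. recovered from execution traces of the hot path.
            List<String> before = List.of("validateInput", "acquireLock", "writeRecord", "releaseLock");
            List<String> after = List.of("validateInput", "writeRecord");

            // The missing lock handling shows up as a candidate global mismatch.
            System.out.println("Potential mismatches: " + droppedCalls(before, after));
        }
    }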

    There is a need for lightweight processes to cope with the dramatic shifts in postmodern software engineering. In the case of casual contributions, defining code quality can be confounding due to their inherently isolated nature. Code reviews are therefore critical, and there is a need for alternative approaches that account for the various postmodern shifts. Without them, software quality in postmodern software development will remain elusive.

    References

    • Sojer M, Henkel J. Code reuse in open source software development: quantitative evidence, drivers, and impediments. Journal of the Association for Information Systems 2010; 11(12):868-901.
    • Stanciulescu S, Schulze S, Wasowski A. Forked and integrated variants in an open-source firmware project. Proceedings of the IEEE Int'l Conference on Software Maintenance and Evolution, ICSME 15, Bremen, Germany, 2015; 151-160.
    • Perry DE, Siy HP, Votta LG. Parallel changes in large-scale software development: an observational case study. ACM Transactions on Software Engineering and Methodology (TOSEM) 2001; 10(3):308-337.
    • Gousios G, Zaidman A, Storey MA, Deursen Av. Work Practices and Challenges in Pull-Based Development: The Integrator's Perspective. Proceedings of the 37th International Conference on Software Engineering, 2015, vol. 1, pp. 358-368.
    • Scala Pull Request Policy, http://docs.scala-lang.org/scala/pull-request-policy.html, accessed 15-Mar-2017.
    • Pinto G, Steinmacher I, Gerosa MA. More common than you think: An in-depth study of casual contributors. Proceedings of the 23rd IEEE Int'l Conference on Software Analysis, Evolution, and Reengineering, SANER 16, Suita, Osaka, Japan, 2016; pp. 112-123.
    • Beller M, Bacchelli A, Zaidman A, Juergens E. Modern Code Reviews in Open-Source Projects: Which Problems Do They Fix? In Proceedings of Mining Software Repositories, MSR'14, Hyderabad, India, 2014.

    Sunday, June 18, 2017

    When and Which Version to Adopt a Library: A Case Study on Apache Software Foundation Projects

    By: Akinori Ihara, Daiki Fujibayashi, Hirohiko Suwa, Raula Gaikovina Kula, and Kenichi Matsumoto (Nara Institute of Science and Technology, Japan)
    Associate editor: Stefano Zacchiroli (@zacchiro)

    Are you currently using a third-party library in your system? How did you decide which version to use and when to adopt the library? Is it the latest version or an older (reliable) version that you adopted? Do you plan to update, and would you trust the latest version? These are all tough questions with no easy answers.

    A software library is a collection of reusable programs, used by both industrial and open source client projects to help achieve shorter development cycles and higher quality software [1]. Most active libraries release newer and improved versions to fix bugs, keep up with the latest trends, and showcase new enhancements. Ideally, any client user of a library would adopt the latest version of that library immediately. It is therefore recommended that client projects upgrade their outdated versions as soon as a new release becomes available.

    Developers do not always adopt the latest version over previous versions

    As any practitioner is probably well aware, adopting the latest version is not as trivial as it sounds, and may require additional time and effort (i.e., adapting code to the new API and testing) to ensure successful integration into the existing client system. Developers of client projects are especially wary of library projects that follow a rapid-release style of development, since such library projects are known to delay bug fixes [2]. In a preliminary analysis, we identified two obstacles that potentially demotivate client users from updating:
    1. similar client users are shown not to adopt a new version shortly after it is released, and
    2. there is a delay between a library release and its adoption by similar clients.
    These insights may indicate that client users are likely to 'postpone' updating until a new release is deemed 'stable'. In this empirical study, we aim to investigate how libraries are selected in relation to their release cycles.

    Design: We analyze when and which library versions are being adopted by client users. Out of 4,815 libraries, our study focuses on the 23 most frequently used Apache Software Foundation (ASF) libraries, used by 415 client projects [3].

    Figure 1: Distribution of the periods between releases in each library

    When to adopt a library?: We find that not all 23 libraries were released yearly (see Figure 1). Some library projects (e.g., jetty-server, jackson-mapper-asl, mockito-all) often release new versions within a year (defined as quick-release libraries), while others (e.g., commons-cli, servlet.api, commons-logging) take over a year to come out with a new release (defined as late-release libraries). We found that the more traditional and well-established projects (i.e., older than 10 years) were late-release libraries, while newer projects tended to be quick-release libraries.


    Figure 2: Percentage of client users that selected the latest version (gray) and the previous version (black)

    Which version to adopt?: Software projects do not always adopt new library versions in their projects (see Figure 2). Interestingly, we found that some client users of a late-release library would first select the latest version as soon as it was released, only to later downgrade to a previous version (in Figure 2, the red and blue boxes show the percentage of client users that downgraded after adopting the latest version or the previous version, respectively).

    Lessons Learnt: From our study, we find that client users may postpone updates until a library release is deemed stable and reliable. Although the quality of most open source software often improves through minor and micro release changes, the study finds that client projects may wait, especially in the case of a late-release library. Our study validates the notion that updating libraries is not trivial. We find that practitioners are indeed careful when it comes to adopting the latest version, as new versions may introduce dependency problems and potentially untested bugs.

    We presented this study at the International Conference on Open Source Systems (OSS'17). For more details, please see the preprint and the presentation on our website: http://akinori-ihara.jpn.org/oss2017/

    [1] Frank McCarey, Mel Ó Cinnéide, and Nicholas Kushmerick, "Knowledge reuse for software reuse," Journal of Web Intelligence and Agent Systems, pp.59-81, Vol.6, Issue.1, 2008.
    [2] Daniel Alencar da Costa, Surafel Lemma Abebe, Shane McIntosh, Uira Kulesza, and Ahmed E. Hassan, "An Empirical Study of Delays in the Integration of Addressed Issues," In Proc. of the 30th IEEE International Conference on Software Maintenance and Evolution (ICSME'14), pp.281-290, 2014.
    [3] Akinori Ihara, Daiki Fujibayashi, Hirohiko Suwa, Raula Gaikovina Kula, and Kenichi Matsumoto, "Understanding When to Adapt a Library: a Case Study on ASF Projects," In Proc. of the International Conference on Open Software Systems (OSS'17), pp.128-138, 2017.

    Monday, June 12, 2017

    Supporting inclusiveness in diverse software engineering teams with brainstorming

    By: Anna Filippova, Carnegie Mellon University, USA (@anna_fil)

    Associate Editor: Bogdan Vasilescu, Carnegie Mellon University, USA (@b_vasilescu)


    Diversity continues to be one of the most talked about issues in software engineering. It is a paradox – we understand that diversity is important not just for equity and increasing the pool of available candidates, but because it improves the quality of engineering. However, in practice, diverse teams struggle with the very thing that makes them so important – voicing differing or dissenting opinions. Because the benefits of diversity depend on everyone speaking up, it is important to create supportive group processes that ensure all team members can voice their opinions without fear of judgement or being ignored.


    In this post, we describe one strategy that is likely already in an engineering manager’s toolkit – brainstorming.

    The diversity paradox
    It is well established that diverse teams are more creative and better at problem solving because they can leverage varied life experiences to make unexpected connections and avoid groupthink through constructive criticism. They are therefore particularly important in contexts where creative problem solving is required, such as in solving engineering challenges. The advantages of diversity come not only from inherent traits (such as someone’s gender, or race), but also through acquired experiences (like education or living in different places), and it is important to support both in teams.

    However, a large body of research has shown that diverse teams struggle to leverage their full potential: in unconstructive environments, team members who are in the minority struggle with feelings of intimidation or being ignored, while clashes in backgrounds between different factions in a team result in misunderstanding, suspicion, and conflict. In the short term, this impacts the effectiveness of diverse teams, while in the long term it can make minorities, especially in the early stages of their careers, more likely to leave the software engineering profession.

    While we have made significant strides in improving representation at different levels of the pipeline, representation alone does not guarantee an effective team. It is important to think beyond supporting diversity through numbers alone, towards inclusive group processes through which minority individuals and challenging opinions are not only welcomed, but systematically integrated into the bigger picture.  

    Brainstorming: an accessible strategy for diverse teams

    Though we can take several different approaches towards more inclusive group processes, it is helpful to consider strategies managers may already be familiar with. Brainstorming is one such well-known technique, designed to support innovation in teams, with four core principles:
    1)    Focusing on idea generation and discussion in a way that
    2)    withholds judgement, and
    3)    supports any ideas no matter how controversial, while
    4)    encouraging the integration of all the ideas proposed rather than discarding them.

    In other words, brainstorming supports exactly the kind of environment minority members of diverse teams need in order to feel comfortable voicing dissenting opinions without fear of judgement, criticism, or being ignored. Despite this promise, little empirical work has looked at the impact of brainstorming on teamwork in diverse groups to date.

    In a recent study, we observed the effects of brainstorming on satisfaction in a short-term, time intensive group work setting. Our study involved 144 participants across two non-competitive hackathons in the software engineering domain.  

    We found that brainstorming supported 1) better satisfaction with the process of working in the team and 2) a clearer vision of the team goals for all team members, regardless of their minority status, but the effect was significantly stronger for minority team members.

    Without brainstorming, team members who described feeling like a minority in their group (we did not distinguish between inherent and acquired traits) felt less satisfied with the process of working in their groups, and were less clear about what their group aimed to produce, compared to their teammates. However, as Figures 1 and 2 illustrate, in teams that did utilize brainstorming, minority team members matched their teammates in terms of satisfaction and alignment with group goals.
    Figure 1 The impact of brainstorming on satisfaction with working in the team by participant minority status
    Figure 2 The impact of brainstorming on goal clarity by participant minority status 

    Key takeaways

    Brainstorming is a readily available technique that managers are likely already familiar with, and, as our findings suggest, helps diverse teams work better together. In fact, because brainstorming supports satisfaction and a clearer vision of the team goals for all members of the team, there is reason to take a second look at the technique even if you are not yet managing a diverse team.

    References:

    Nigel Bassett-Jones (2005), The Paradox of Diversity Management, Creativity and Innovation. Creativity and Innovation Management, 14: 169–175.

    Anna Filippova, Erik Trainer, James D. Herbsleb (2017) From diversity by numbers to diversity as process: supporting inclusiveness in software development teams with brainstorming. In Proceedings of the 39th International Conference on Software Engineering, ACM, New York.

    Elizabeth Mannix, Margaret A. Neale, (2005). What differences make a difference? The promise and reality of diverse teams in organizations. Psychological science in the public interest, 6(2), 31-55.

    Alex Osborn (1957) Applied imagination: Principles and procedures of creative problem-solving. C. Scribner’s Sons; Revised second edition.

    Carroll Seron, Susan S. Silbey, Erin Cech, Brian Rubineau (2016) Persistence Is Cultural: Professional Socialization and the Reproduction of Sex Segregation. Work and Occupations, 43:2, pp. 178 – 214.

    William A. Wulf (2002), The Importance of Diversity in Engineering, in Diversity in Engineering: Managing the Workforce of the Future. The National Academy of Engineering (eds.), Washington, DC: The National Academies Press.