Thursday, December 17, 2015

Variability Management using Github fork-based development

by Stefan Stanciulescu, IT University of Copenhagen, Denmark (@scas_ITU)
Associate Editor: Sarah Nadi, Technische Universität Darmstadt, Germany (@sarahnadi)

It is often the case that software producers need to create different variants of their system to cater for different customer requirements or hardware specifications.
While there are systematic product line engineering methodologies and mechanisms that support variability (e.g., preprocessors, deltas, aspects, modules), software variants are often developed using clone-and-own (a.k.a. copy-paste), since it is a low-cost approach without a steep learning curve [1].
Recent collaboration tools such as Github and Bitbucket have made this clone-and-own process more systematic by introducing fork-based development. In this process, users fork a repository (which creates a traceability link between the two repositories), make changes on their own fork, and push changes back to the repository they forked from (the upstream) via pull requests.


Combined with a powerful version control system such as Git, this enables better and more efficient variant management. The question is: how is this done in practice, and what difficulties does the process entail? To answer this, we analyzed Marlin, a four-year-old Github-hosted project that combines clone-and-own with traditional variability mechanisms.


Marlin is a firmware for 3D printers, written in C++, that employs variability both through preprocessor annotations in its core code and through its clones. Started in August 2011, it has been forked by more than 2400 people, many of whom contribute changes. This is unusual for such a small, recent project in a relatively narrow and new domain.
We looked at Marlin to understand how forking supports the creation of product variants.


We sent a survey to 336 fork owners and received answers from 57 of them. Among other things, we asked about (1) the criteria they use for integrating changes from the main code base (i.e., upstream) and (2) how they deal with variability. We use these answers to gain a perspective from the user side on the development and maintenance of the forks.

To merge or not to merge?

Fork owners: Most fork owners indicated that it is difficult to merge upstream changes, because their firmware becomes unstable and produces undesired results. In addition, configuring the software is a meticulous task due to the large number of features and parameters that need to be set properly; a slight change in these parameters has consequences for the end user. Therefore, many of the fork owners rarely sync with the main Marlin branch. Another point they made is that only some of the upstream changes are interesting to them, and even though Git allows patches to be applied selectively from upstream (cherry-picking), it is still difficult to decide what should be merged from upstream and what should not.


Marlin Maintainers: Looking from the Marlin maintainers' perspective, more than 50% of the commits on the main Marlin branch came from forks, which suggests that merging changes in the opposite direction is more common. Forks allow users to innovate and bring new ideas to Marlin. However, integrating cloned variants is a difficult task for the maintainers. It is especially problematic when forks introduce new features and want to integrate them into the main codebase, as this may break other people's variants. To handle this issue, new feature and variant contributions are not integrated until they are guarded by preprocessor directives, and these directives have to be disabled by default in the main configuration of the firmware. This lowers the probability of affecting someone else's variant, with the added side benefit of increasing the stability of the main codebase.
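
As a rough illustration of this policy (a sketch only: the option name and routines below are hypothetical, not actual Marlin code), a contributed feature arrives guarded by a directive that ships commented out, so the default build, and therefore everyone else's variant, behaves as before unless a user opts in:

    // Configuration.h (sketch): the contributed option ships commented out,
    // so the default build (and everyone else's variant) is unaffected.
    //#define FILAMENT_RUNOUT_ALERT            // hypothetical feature name

    // Stand-ins for real firmware routines, defined only to keep the sketch compilable.
    static bool filament_present() { return true; }
    static void pause_print() {}

    void check_filament() {
    #ifdef FILAMENT_RUNOUT_ALERT
      // contributed behaviour: pause the print when the sensor reports no filament
      if (!filament_present()) pause_print();
    #endif
      // ...existing firmware behaviour continues here, unchanged when the option is off
    }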


Additionally, maintainers need to ensure that the quality of the clone meets their requirements (implementation, tests, documentation, style adherence), and that they can handle the maintenance and evolution of that variant. In Marlin this is important because many different hardware devices can be used, making it complicated for maintainers to test new variants (which very often means they need to run the printer and print an object). Here is where the community goes the extra mile, with many users employing their own hardware and printers to test different variants and report any issues.
An important aspect is that when changes from a fork get integrated upstream, the fork becomes more popular and visible. For example, in one variant (jcrocholl's deltabot) only one pull request had been accepted by the fork owner. Once the variant was integrated, many more issues and pull requests dealt with that variant, and many more changes to it were accepted. Finally, jcrocholl's maintenance effort was reduced, as he no longer had to keep his fork in sync with the main Marlin repository and push his changes back; any change related to deltabot was made directly in the main Marlin repository.


Forking vs Preprocessors

In embedded systems, computational resources are limited. Many survey respondents explained that memory limitations (Marlin runs on 8-bit Atmega microcontrollers, which have between 4 kB and 256 kB of flash memory) pushed them to use preprocessor directives, both to allow excluding code at compile time and to experiment with different ideas. On the other hand, forking is clearly the way to go when fast prototyping is needed. It is also useful when changes are not relevant to the other people involved, or simply to store the configurations of a variant. The latter is heavily used in Marlin (around 200 forks only store configurations of the firmware), and it is a very light and efficient mechanism.
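
To make the memory argument concrete, here is a minimal sketch (the option and names are invented, not taken from Marlin) of how a preprocessor guard excludes an optional subsystem from the binary altogether, so a build for a small 8-bit target pays for neither its code nor its static data:

    #include <stdio.h>   // snprintf

    // Invented option name: enable it by uncommenting the line below
    // or by passing -DLCD_STATUS_SCREEN to the compiler.
    //#define LCD_STATUS_SCREEN

    #ifdef LCD_STATUS_SCREEN
      static char status_line[64];                     // RAM reserved only when enabled

      void update_status_screen(int hotend_temp) {
        // format the hotend temperature for a (hypothetical) character display
        snprintf(status_line, sizeof(status_line), "Hotend: %d C", hotend_temp);
      }
    #else
      // Disabled: calls compile away to nothing, costing neither flash nor RAM.
      inline void update_status_screen(int) {}
    #endif

A fork that only stores configurations is, in effect, just a different set of such option lines, which is why that mechanism is so lightweight.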


Lessons learnt

Based on the above survey results as well as more detailed analysis of the Marlin project structure and development history, we derive the following guidelines for fork-based development.


  • Fork to create variants and to support new configurations. It is easy, efficient and lightweight, and using Github’s forking, we get traceability for free.
  • Use preprocessor annotations for flexibility and to tackle memory constraints when needed, both in a fork and in the main branch.
  • Keep track of variants by adding a description for each fork created and maintain that description.
  • Merge upstream often to reduce maintenance and evolution efforts of cloned variants.


Recent tools and techniques (Git, Github, forking) can deal to some degree with the complex task of variant development. With the wide adoption of Github, it seems that we are already heading in that direction. Adopting new tools and techniques is a long process, and many challenges still lie ahead, but we are one step closer to understanding how to offer better tool support for variant management.


References
[1] Yael Dubinsky, Julia Rubin, Thorsten Berger, Slawomir Duszynski, Martin Becker, Krzysztof Czarnecki. An Exploratory Study of Cloning in Industrial Software Product Lines. CSMR 2013: 25-34

More detailed information about Marlin, its variants and their evolution can be found online: http://itu.dk/people/scas/papers/ICSME2015-Marlin-preprint.pdf


Wednesday, December 9, 2015

Why software reference architectures in agile projects are more than “just” templates

by Matthias Galster (mgalster@ieee.org)
Associate Editor: Mehdi Mirakhorli (@MehdiMirakhorli)

In one of our research projects we looked at how reference architectures are used in agile projects. Software engineers often use existing reference architectures as “templates” when designing systems in particular contexts (such as web-based or mobile apps). Reference architectures (from a third party or designed in-house) provide architectural patterns (elements, relationships, documentation, etc.), sometimes partially or fully instantiated, and therefore allow us to reuse design decisions that worked well in the past. For instance, a web services reference architecture may describe how a web service is developed and deployed within an organization’s IT ecosystem. On the other hand, industry practice tends towards flexible and lightweight development approaches [1], and even though not all organizations are fully agile, many use hybrid approaches [2]. Since reference architectures shape the software architecture early on, they may constrain the design and development process from the very beginning and limit agility. Nevertheless, in case studies that we conducted with software companies that use Scrum as their agile process framework, engineers reported extra value when using reference architectures. That extra value goes beyond the typical reasons for using reference architectures (such as being able to use an architecture template, and supporting standardization and interoperability). This additional value comes from three things: architectural focus, less knowledge vaporization, and team flexibility.
  • Architectural focus. We found that reference architectures inject architectural thinking into the agile process. Architectural issues often get lost in agile projects and the architecture emerges implicitly. A reference architecture supports the idea of a system metaphor in agile development. The clear picture of core architectural issues helps communicate the shared architectural vision as a “reference point” within agile teams across sprints. Since reference architectures already confine the design space, they also help balance the effort spent on up front design. In fact, we have observed that this outweighs the effort required to learn about a reference architecture. Furthermore, reference architectures provide a “harness” for agile teams to try out different design solutions. This helps reduce the complexity of the design space and potentially limits the amount of architectural refactoring.
  • Less knowledge vaporization. Agile promotes working products over documentation. Reference architectures usually come with supporting artefacts and documentation, so large parts of the architecture don’t need to be documented and maintained separately. For example, if projects use NORA (a reference architecture for Dutch e-government), software engineers can focus on documenting product or organization-specific features rather than the whole architecture and all design decisions. In the example of NORA, this would include features and architecture artefacts implemented in individual municipalities.
  • Team flexibility. Reference architectures facilitate communication within and across Scrum teams since there is a shared understanding of common architectural ideas. We have found that this not only benefits individual teams, but also allows engineers to move across different projects and / or teams, and to work on more than one project at the same time (as long as the same reference architecture is used). This facilitates cross-functional teams, as promoted in agile practices.

The above list includes preliminary findings and there are certainly other benefits (and benefits related to software architecture in general), depending on a particular project situation. We also report more details in our paper “Understanding the Use of Reference Architectures in Agile Software Development Projects” published at the 2015 European Conference on Software Architecture.

References:


[2] L. Vijayasarathy and C. Butler. Choice of Software Development Methodologies: Do Project, Team and Organizational Characteristics Matter? IEEE Software, in press.




Wednesday, December 2, 2015

When software crosses a line


 by Les Hatton and Michiel van Genuchten 
Editor: Mei Nagappan (@MeiNagappan)


We will expand on this story in an upcoming column in our Software Impact Series, by which time we should know more details. But in the light of the rapidly unfolding story at Germany's giant automotive company, VW, we will add a new dimension to our original question “Software: what's in it and what's it in?” [1] by asking “What's hidden in it and how many people knew?”.

At the time of writing, it would appear that aside from the normal and burgeoning functionality in the tens of millions of lines of code embedded in modern vehicles (for example, Mossinger [2]), there may in some cases be code intended to 'deceive'.  The question, of course, is when does a feature cross the line from what lawyers refer to as harmless “advertiser's puff” all the way to deceit?

In the case of VW, the change appears to have been tiny: just a few lines of code out of what might be millions.  In essence, the software allegedly monitored steering movement whilst the car was running.  On a test harness, the car's wheels move but the steering wheel doesn't, in contrast to normal running, where both are continually in motion.  By doing this, the software could detect when the car was in test mode and therefore control the degree to which catalytic scrubbing was applied to the emissions.  Catalytic scrubbers inject a mixture of urea and water into the diesel engine emissions, converting harmful nitrogen oxides into the more benign molecules nitrogen, oxygen, water and small amounts of carbon dioxide.  The trade-off in a diesel engine is quite simply one of emission toxicity against car performance.  The software, now known as a 'defeat device', simply turned up the efficiency of the catalytic converters when it thought the car was under test.  To date, it is believed to have been embedded in around 11 million VW cars and some 2 million Audis.
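
To make the mechanism concrete, the behaviour described above amounts to logic of roughly the following shape. This is purely an illustrative sketch of that description, not VW's code; every name and threshold in it is invented:

    // Illustrative sketch only; not actual VW code. Names and thresholds are invented.
    // Described behaviour: wheels turning while the steering wheel stays still looks
    // like a test harness, so the emission controls are turned up.
    bool looks_like_test_cycle(float wheel_speed_kmh, float steering_movement_deg) {
      const float WHEELS_MOVING  = 10.0f;   // wheels clearly rotating
      const float STEERING_STILL = 1.0f;    // steering effectively untouched
      return wheel_speed_kmh > WHEELS_MOVING && steering_movement_deg < STEERING_STILL;
    }

    float urea_dosing_factor(float wheel_speed_kmh, float steering_movement_deg) {
      // full catalytic scrubbing when apparently under test, reduced dosing on the road
      return looks_like_test_cycle(wheel_speed_kmh, steering_movement_deg) ? 1.0f : 0.4f;
    }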

A cynical observer would claim that if somebody can get away with something, then they will.  But did the engineers responsible really believe that such a device would never be found?  After all, unless you knew what you were looking for, finding it by inspecting the code would be comparable to finding a needle in a haystack, and even if you did know, finding your way around a giant software system is not for the faint-hearted.  However, you cannot defeat the laws of physics, or in this case, chemistry.  The VW defeat device was essentially discovered by independent monitoring of exhaust emissions, which revealed glaring differences between what was observed in normal running and what was being claimed, so it seems naïve to think it would not be discovered eventually.  In which case, did the engineers responsible think that people wouldn't mind, or that the financial benefit of selling more cars would outweigh any potential downside?  If they did, they are likely to be in for an unpleasant surprise, with VW already setting aside several billion dollars to deal with potential claims.

As of 30 September 2015, when this part was written, it appears that over 1 million cars and vans could be affected in the UK, Europe's second biggest diesel user after Germany, but VW do not know.  In fact, they do not appear to know whether the software is present or, if so, whether it is activated, and nobody seems to have considered the possibility of breaking something else if the software is removed, or even simply deactivated, during a software recall, due to unintentional side-effects.  These can occur through, for example, shared global variables, or one of a number of other mechanisms that will be familiar to professional software engineers.  In short, its removal could introduce one or more defects.
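
A tiny, invented example of the kind of coupling meant here: if the defeat logic and an unrelated module happen to share a global variable, removing or deactivating one silently changes the behaviour of the other.

    // Invented example of coupling through a shared global; not real automotive code.
    static float scrub_level = 1.0f;   // written by the code slated for removal

    void defeat_logic_update(bool on_test_harness) {
      scrub_level = on_test_harness ? 1.0f : 0.4f;     // the 'defeat device' part
    }

    float estimated_emission_efficiency() {
      // An unrelated module quietly reads the same global. Delete the writer above
      // and this function's behaviour changes, even though it was never touched.
      return 100.0f * scrub_level;
    }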

Speaking of defects, let's raise an interesting question.  Is this better or worse than releasing inadequately tested automotive software in general?  One of the more recent examples is the Toyota unintended acceleration bug [3].  Toyota are not alone, as there have been numerous recalls in the automotive industry due to software defects.  When a car manufacturer releases such a bug whilst advertising how safe their cars are, are they not being similarly misleading?  For example, contrast the following two more factually appropriate sentences to cover these eventualities.

We have adjusted the catalytic converter to behave more efficiently if you drive at constant speed without moving the steering wheel, so your emissions will be much lower.  If you depart from this, you will get better performance but your emissions will be very considerably more noxious.

and

We believe that software innovation is vital in automotive development; however, the systems we release to you are so complicated that they will have defects in them which might sometimes prejudice your safety.  Most of the time, however, we believe they will not.

Would you still buy the car?  It could, of course, be argued that these questions arise from different ethical viewpoints but any software engineer worth their salt will know that the chances of releasing a complicated defect-free software system are effectively negligible.

We await the answers to several obvious questions.  Are any other companies doing this, or if we take a more cynical standpoint, how many?  If not, are they using software practices almost as dubious?  How do we decide what is reasonable given the extraordinary ability of software to give hardware its character?  The CEO has already been replaced but what will happen to the engineers and the managers who were responsible?

We look forward to revisiting the story as more information comes to light.

Contextual Note: The Impact series in IEEE Software describes the impact of software on various industries. Around 30 columns have been published since 2010 by senior technical and business managers from companies such as Oracle, Airbus (software in the A380), Hitachi, Microsoft and Vodafone. Other columns were provided by CERN (the software behind the Higgs boson discovery) and JPL (the software in the Mars lander). Michiel van Genuchten and Les Hatton are the editors of the Impact column. The Impact column of Jan-Feb 2016 will contain an updated and more extensive version of this blog post.

References

[1] M. van Genuchten and L. Hatton. “Software: What's In It and What's It In?”, IEEE Software, 27(1): 14-16, Jan/Feb 2010.

[2] J. Mossinger. “Software in Automotive Systems”, IEEE Software, 27(2), 2010.