Associate Editor: Brittany Johnson (@brittjaydlf)
The July/August 2016 issue of IEEE Software is packed with interesting papers, with a focus on software quality and the human element's role in improving it. Despite the many great papers and interesting topics, I wound up reading two papers from top to bottom:
- "The Weakest Link" by Gerard J. Holzmann, and
- "Test Better by Exploring: Harnessing Human Skills and Knowledge" by Itkonen and colleagues
Numerous papers caught my attention from their titles alone... "Obstanovka: Exploring Nearby Space," "Exploiting Big Data's Benefits," and "Examining the Relationship between FindBugs Warnings and App Ratings," to name a few. However, it wasn't until I spoke with a colleague of mine that I was able to decide on a couple of papers to focus on. After narrowing my selections down to a handful, I asked her, as someone with an academic and research background, which two she would be most interested in reading or hearing about.
These two papers make similar yet contradictory points about how we can improve software development and quality. While "Test Better by Exploring" proposes introducing more (specialized) humans into the software development and quality assessment mix, "The Weakest Link" posits that humans may be introducing more problems than we can solve (in the context of trusting a computer, rather than a human, with decision making).
The former reminds me of a study my advisor conducted comparing software and game development, "Cowboys, Ankle Sprains, and Keepers of Quality: How Is Video Game Development Different from Software Development?" [1]. Though it is not explicitly stated in this article, I think, as with "Cowboys, Ankle Sprains, and Keepers of Quality," there is an implicit argument that we as software developers can learn and improve from other groups or types of development, especially since an important part of game development is alpha and beta testing, which sounds quite similar to the arguments being made in "Test Better by Exploring." As a human factors researcher, I recognize the benefits of incorporating this type of evaluation into all software development. The lab I work in does software tools research, and we have found that one reason users have so many problems with their software is that the software doesn't do what they expect. To know what users expect, or, more importantly, to handle what they don't expect, it seems necessary to include them in the process of creating and evaluating the software. As with heuristic evaluations, it seems you can get the most out of this practice with a mix of different types of users, including non-user experts, to help shine a light on issues the developers themselves may not encounter.
The topic of the latter has been a subject of discussion for a long time, as the article mentions, and is pervasive in pop culture (Will Smith's character's unwillingness to trust robots in I, Robot; the first artificially intelligent child being trusted by humans and robots in A.I. Artificial Intelligence). Most often the takeaway is: trust computers when human insight can be added or provided. "The Weakest Link" challenges this notion, suggesting that in some situations computers could be effective at solving problems and making decisions without human interference. This becomes an even bigger debate when we think about how much of what we do on a day-to-day basis is being automated: self-driving cars, robot vacuum cleaners, airplane autopilot, and so on. Should we be putting more trust in our computers, slowly eliminating the human factor? Or is it possible that there is a plateau for the growth and dissemination of artificial intelligence as an everyday part of our lives?
Overall, whether or not I agreed with all the points made, I thought both papers were a good read. Check out the July/August 2016 issue to see for yourself!
[1] Murphy-Hill, Emerson, Thomas Zimmermann, and Nachiappan Nagappan. "Cowboys, ankle sprains, and keepers of quality: how is video game development different from software development?" Proceedings of the 36th International Conference on Software Engineering. ACM, 2014.