by Fabio Calefato, University of Bari, ITALY (@fcalefato), Filippo Lanubile, University of Bari, ITALY (@lanubile), and Nicole Novielli, University of Bari, ITALY (@NicoleNovielli)
Associate Editor: Christoph Treude (@ctreude)
Every day, thousands of developers head to Stack Overflow (SO) to ask technical questions, hoping to receive swift help with the issues they are facing. To increase the chances of getting help from others, the SO community provides members with detailed guidelines on how to write more effective questions (e.g., see [9]). These official recommendations also include those provided by Jon Skeet, the site's highest-reputation member, whose guidelines have over time become a de facto standard for the community [10].
For example, SO states that the site is “all about getting answers. It's not a discussion forum. There's no chit-chat.” Accordingly, askers are advised to avoid “greetings and sign-offs […], as they’re basically a distraction,” which other users are also expected to edit out [10]. Still, many askers end their questions by thanking potential helpers in advance. Why do they go against this explicit recommendation? Are they simply unaware of it, or do they feel that a positive attitude may attract more potential solutions?
In our work [5], we provide an evidence-based netiquette for writing effective questions by empirically validating several SO guidelines, drawn from both the community and previous empirical studies on Q&A sites. Specifically, we analyzed a dataset of 87K questions by combining a logistic regression analysis with a user survey, first to estimate the effect of these guidelines on the probability of receiving a successful answer, and then to compare their actual effectiveness with the effectiveness perceived by SO users.
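As a rough illustration of this kind of analysis, the sketch below fits a logistic regression that relates question-level features to a binary outcome (whether the question received a successful answer). It is a minimal sketch, not the actual analysis pipeline of the study: the input file and the column names are hypothetical.

```python
# Minimal sketch (not the study's pipeline): logistic regression relating
# question features to answering success. The file "questions.csv" and all
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per question: 'accepted' is 1 if the question received a successful
# answer, 0 otherwise; the predictors mirror the guidelines in Table 1.
questions = pd.read_csv("questions.csv")

model = smf.logit(
    "accepted ~ sentiment_score + has_code_snippet + uppercase_ratio"
    " + body_length + num_tags + has_url + C(posting_hour_gmt)",
    data=questions,
)
result = model.fit()

# The fitted coefficients (log-odds) estimate how much each factor shifts the
# probability of receiving a successful answer, holding the others constant.
print(result.summary())
print(np.exp(result.params))  # odds ratios are easier to interpret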
Actionable Factors for Good Questions
The guidelines we examined cover three actionable factors that askers can control when writing a question: the affect conveyed by the text, its presentation quality, and the time of posting (see the Success factor column in Table 1).
Findings: Evidence vs. User Perception
Table 1. The evidence-based netiquette for effective question writing on Stack Overflow. The “Empirical support” column indicates whether a guideline is backed by the evidence from our study.
| # | Guideline | Success factor | Empirical support | User perception | Source |
|---|---|---|---|---|---|
| 1 | Write questions using a neutral emotional style | Affect | Yes | Effective | Skeet [10], SO Help Center [9], Kucuktunc et al. [7], Bazelli et al. [3] |
| 2 | Provide sample code and data | Presentation quality | Yes | Effective | Skeet [10], Asaduzzaman et al. [2], Duijn et al. [6], Treude et al. [11] |
| 3 | Use capital letters where appropriate | Presentation quality | Yes | Effective | Skeet [10] |
| 4 | Be concise | Presentation quality | Yes | Ineffective | Skeet [10] |
| 5 | Use short, descriptive question titles | Presentation quality | No | Ineffective | Skeet [10] |
| 6 | Provide context through tags | Presentation quality | No | Effective | Skeet [10] |
| 7 | Provide context through URLs | Presentation quality | No | Effective | Ponzanelli et al. (partially) [8] |
| 8 | Be aware of low-efficiency hours | Time | Yes | Ineffective | Bosu et al. [4] |
Table 1 reports the main findings of our study, from which some interesting observations emerge. For 4 out of the 8 guidelines studied, we found that the perceived and actual effectiveness match. In particular, SO users are correct in thinking that minding their tone (#1), providing snippets with examples (#2), and avoiding the inappropriate use of capital letters (#3) all increase the probability of receiving a successful answer to their questions. The use of short titles (#5), instead, is neither perceived nor found to be effective.
Perhaps even more interesting considerations arise from the remaining cases of mismatch. First, SO users seem to be unaware that writing questions concisely (#4) and posting them during GMT evening hours (#8) increase the chances of a question being resolved. Regarding posting times, Bosu et al. [4] speculated that these time slots are the most successful because they correspond to working hours in the USA. Second, contrary to users’ perception, we found that providing context for questions by adding extra tags (#6) and including URLs to external resources in the text (#7) have no positive effect on the chances of getting a successful answer.
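To make these results more concrete, the sketch below shows one hypothetical way a draft question could be screened against the guidelines our evidence supports (#1-#4 and #8) before posting. It is only an illustration: the thresholds, the keyword-based tone check, and the GMT time window are illustrative assumptions, not measures or cut-offs taken from the study [5].

```python
# Illustrative only: rough pre-posting checks for the evidence-supported
# guidelines (#1-#4, #8). All thresholds are arbitrary assumptions, not
# values taken from the study.
import re
from datetime import datetime, timezone

def check_question(title: str, body: str) -> list:
    warnings = []
    text = (title + " " + body).lower()

    # #1: keep a neutral emotional style (a real check would use a sentiment analyzer)
    if "please help" in text or "urgent" in text or "!!" in text:
        warnings.append("Consider a more neutral tone (guideline #1).")

    # #2: provide sample code and data (SO renders fenced or 4-space-indented blocks as code)
    if "```" not in body and not re.search(r"(?m)^ {4}\S", body):
        warnings.append("Add a code snippet or sample data (guideline #2).")

    # #3: use capital letters where appropriate (avoid 'shouting')
    letters = [c for c in body if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.3:
        warnings.append("Avoid excessive capitalization (guideline #3).")

    # #4: be concise
    if len(body.split()) > 400:
        warnings.append("Try to shorten the question body (guideline #4).")

    # #8: be aware of low-efficiency posting hours (an assumed GMT window)
    hour = datetime.now(timezone.utc).hour
    if not 15 <= hour <= 22:
        warnings.append("Posting outside GMT afternoon/evening hours (guideline #8).")

    return warnings

if __name__ == "__main__":
    draft_body = "My parser fails on nested lists. Minimal example:\n\n    parse('[1, [2]]')\n"
    print(check_question("Why does my parser fail on nested lists?", draft_body))
```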
Final Remarks
One of the greatest issues with Stack Overflow is the sheer number of unresolved questions. Currently, the site hosts almost 15 million questions, of which about 7 million are still unresolved. Helping users ask “better” questions can increase the number of questions that get resolved. One way to do so is to raise awareness among community members of the existing question-writing guidelines while also trimming the list down to only those supported by evidence.
For more information about our study, please refer to our paper “How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow” [5].
References
[1] T. Althoff, C. Danescu-Niculescu-Mizil, and D. Jurafsky. How to Ask for a Favor: A Case Study on the Success of Altruistic Requests. In Proc. of the 8th Int’l Conf. on Weblogs and Social Media (ICWSM ’14), 2014.
[2] M. Asaduzzaman, A.S. Mashiyat, C.K. Roy, and K.A. Schneider. Answering questions about unanswered questions of Stack Overflow. In Proc. of the 10th IEEE Working Conf. on Mining Software Repositories (MSR ’13), 2013, pp. 97-100.
[3] B. Bazelli, A. Hindle, and E. Stroulia. On the Personality Traits of StackOverflow Users. In Proc. of Int’l Conference on Software Maintenance (ICSM '13), 2013, pp. 460-463.
[4] A. Bosu, C.S. Corley, D. Heaton, D. Chatterji, J.C. Carver, and N.A. Kraft. Building Reputation in StackOverflow: An Empirical Investigation. In Proc. of the 10th IEEE Working Conf. on Mining Software Repositories (MSR ’13), 2013, pp. 89-92.
[5] F. Calefato, F. Lanubile, and N. Novielli. How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow. Information and Software Technology, 2017.
[6] M. Duijn, A. Kučera, and A. Bacchelli. Quality questions need quality code: classifying code fragments on stack overflow. In Proc. of 12th Working Conf. on Mining Software Repositories (MSR '15), 2015, pp. 410-413.
[7] O. Kucuktunc, B.B. Cambazoglu, I. Weber, and H. Ferhatosmanoglu. A large-scale sentiment analysis for Yahoo! answers. In Proc. of 5th Int’l Conf. on Web Search and Data Mining (WSDM '12), 2012, pp. 633-642.
[8] L. Ponzanelli, A. Mocci, A. Bacchelli, and M. Lanza. Understanding and Classifying the Quality of Technical Forum Questions. In Proc. of 14th Int’l Conf. on Quality Software (QSIC’14), 2014, pp. 343-352.
[9] Stack Overflow Help Center, Be nice (last accessed: Nov. 2017).
[10] J. Skeet, 2010. Writing the Perfect Question (last accessed: Nov. 2017).
[11] C. Treude, O. Barzilay, and M.-A. Storey. How do programmers ask and answer questions on the web? (NIER track). In Proc. of the 33rd Int’l Conf. on Software Engineering (ICSE '11), 2011, pp. 804-807.