Sunday 8 July 2012

Amplifying Tests to Validate Exception Handling Code


P. Zhang, S. Elbaum, Amplifying Tests to Validate Exception Handling Code, in: International Conference on Software Engineering, IEEE, Zurich, Switzerland, 2012, pp. 606-616. http://bit.ly/MSY8EE

This is another Distinguished Paper from ICSE 2012. I am not a particular expert on testing and tend not to go out of my way to read testing papers, which to my mind all too often comprise small twists on established techniques. This paper tackles an important problem: testing the exception handling code that deals with external resources. The examples in the paper include a geo-tagging app, a media centre controller, a barcode scanner, a cloud-synced password keeper and a VoIP client. The difficulty is the variety and complexity of the error behaviours of these resources. The paper provides an automated approach that 'systematically amplifies' the test suite by exploring the space of exceptional behaviour. The target program is instrumented by adding a 'mocking device' in place of the external resources of interest; the approach then amplifies a base test by exposing it to all possible exceptions thrown by the resource while monitoring for program failures. The approach generates mocking patterns, which are pruned for length, and the resulting anomalies are filtered to produce a report that can be actioned. The approach is described in some detail, together with its architecture. It is evaluated, and the paper reports that it can detect 65% of the faults reported in bug reports related to this class of problem, and that it is precise enough that 77% of the anomalies detected correspond to faults fixed by the developers. Take Home: The problem of using unreliable external resources, or incompletely understood APIs, can be partially addressed by treating it as a test coverage problem over the space of potential exceptional behaviour. A minimal sketch of the idea follows below.
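
To make the mechanics concrete, here is a minimal sketch of the amplification idea in Python using unittest.mock. It is not the authors' tool, which targets Java programs, and all of the names (sync_notes, fetch_remote, NETWORK_FAULTS) are hypothetical. The principle is the one the paper describes: stand a mock in for the external resource, replay a passing base test under each exception the resource can declare, and flag any run that ends in an unhandled failure.

import socket
import unittest
from unittest import mock

# The space of exceptional behaviours declared by the resource's API.
NETWORK_FAULTS = [socket.timeout(), ConnectionResetError(), OSError("down")]

def sync_notes(client):
    """Code under test: should degrade gracefully when the network fails."""
    try:
        return client.fetch_remote()
    except (socket.timeout, ConnectionResetError):
        return None  # handled: fall back to the local copy
    # A plain OSError is NOT handled -- the kind of gap amplification exposes.

class AmplifiedSyncTest(unittest.TestCase):
    def test_base(self):
        # The base test: the resource behaves normally.
        client = mock.Mock(fetch_remote=mock.Mock(return_value="ok"))
        self.assertEqual(sync_notes(client), "ok")

    def test_amplified(self):
        # Amplify the base test: force each declared fault in turn.
        for fault in NETWORK_FAULTS:
            with self.subTest(fault=fault):
                client = mock.Mock(fetch_remote=mock.Mock(side_effect=fault))
                try:
                    sync_notes(client)  # must survive every declared fault
                except Exception as anomaly:
                    self.fail(f"unhandled exception leaked: {anomaly!r}")

if __name__ == "__main__":
    unittest.main()

Running this, the base test passes and the timeout and connection-reset faults are absorbed, while the unhandled OSError surfaces as exactly the kind of anomaly the amplified suite is meant to report.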

Sunday 1 July 2012

Characterizing and Predicting Which Bugs Get Reopened

T. Zimmermann, N. Nagappan, P. Guo, B. Murphy, Characterizing and Predicting Which Bugs Get Reopened, in: International Conference on Software Engineering, IEEE, Zurich, Switzerland, 2012, pp. 1074-1083. http://bit.ly/N5gXTz

This paper was deservedly awarded an ACM Distinguished Paper award at ICSE 2012. From a team at Microsoft Research, it looks at the effectiveness of bug fixes by examining bug reports that are 'reopened' after having been closed. The study presents the results of a large survey of Microsoft engineers and an analysis of bug reports, combining quantitative and qualitative approaches. It also includes a very thorough account of related work on the lifecycle of bugs that is worth reading in itself. Many of the causes of bug reopens are what you might expect: difficulties in reproducing and understanding the bug, or process errors such as fixing the bug in the wrong version (these appear more frequently than you might expect). More interesting is that failures in prioritising bugs frequently result in 'bad closes'. The associated analysis of the factors that influence bug reopens makes clear that the better the initial prioritisation, specifically if it involves a customer or an experienced developer, or if the bug is handled by a co-located team, the less likely the bug is to be reopened. Take Home: If you want to avoid costly bug reopens, ensure you have a good initial prioritisation process and keep the branching structure of the development as tight as possible to avoid process errors. A sketch of the prediction side follows below.
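
For the prediction side, a hedged sketch of what such a model might look like: a logistic regression over features of the initial report. The feature names and data below are invented for illustration; the paper derives its factors (who reported the bug, how it was initially prioritised, whether the team was co-located, and so on) from Microsoft's actual bug databases and survey responses.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features of the initial report:
# [reported_by_customer, triaged_by_experienced_dev, co_located_team]
X = np.array([
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
])
y = np.array([0, 1, 0, 0, 1, 0])  # 1 = bug was later reopened

model = LogisticRegression().fit(X, y)

# Negative coefficients indicate factors that reduce reopen risk,
# in line with the paper's finding that good initial prioritisation helps.
print(dict(zip(
    ["customer", "experienced_dev", "co_located"], model.coef_[0])))

new_bug = np.array([[1, 1, 1]])  # well-prioritised, co-located report
print("reopen probability:", model.predict_proba(new_bug)[0, 1])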

Manifesto

softeng.info will publish snappy summaries of interesting software engineering research. The aim is to support practice, though researchers may find it useful too. The selection of papers is entirely personal. We welcome suggestions or comments. softeng.info is a collective. It was set up by Anthony Finkelstein (@profserious) who blogs at http://prof.so and whose research is at http://bit.ly/MKS514.