Publication
Shin Hwei Tan, Hiroaki Yoshida, Mukul Prasad and Abhik Roychoudhury
Anti-patterns in Search-Based Program Repair. Foundations of Software Engineering (FSE), 2016. [FSE Paper] [FSE Slide]
Prevalence of anti-patterns
Although various search-based program repair techniques show promising results in generating a large number of patches, prior studies show that most of these patches are merely plausible but incorrect. Specifically, on the GenProg benchmarks, SPR generates 28 out of 40 (i.e., 70%) plausible but incorrect patches, while GenProg generates 50 out of 53 (i.e., 94.3%) plausible but incorrect patches.
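For concreteness, the minimal C sketch below contrasts a buggy function, its developer-provided fix, and a functionality-deleting "repair" that still passes a weak test suite. It is a hypothetical illustration: the function names, the bug, and the test oracle are invented, not taken from the GenProg benchmarks.

```c
/* Hypothetical illustration of a plausible but incorrect patch.
 * The weak test oracle only checks that the result is non-negative,
 * so deleting the computation also "passes" the suite. */
#include <stdio.h>
#include <assert.h>

/* Original (buggy) version: the loop bound is off by one and may read
 * past the end of the array. */
int sum_positive_buggy(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i <= n; i++)   /* bug: should be i < n */
        if (a[i] > 0)
            sum += a[i];
    return sum;
}

/* Developer-provided (correct) fix: tighten the loop bound. */
int sum_positive_fixed(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > 0)
            sum += a[i];
    return sum;
}

/* Plausible but incorrect machine-generated patch: the faulty loop is
 * deleted entirely, so the crashing test passes, but all functionality
 * is lost. */
int sum_positive_plausible(const int *a, int n) {
    int sum = 0;
    (void)a; (void)n;              /* patched: loop removed */
    return sum;
}

int main(void) {
    int a[3] = {1, -2, 3};
    /* Weak oracle: only non-negativity is checked, so the
     * functionality-deleting patch is accepted as "plausible". */
    assert(sum_positive_fixed(a, 3) == 4);
    assert(sum_positive_plausible(a, 3) >= 0);
    printf("fixed: %d, plausible: %d\n",
           sum_positive_fixed(a, 3), sum_positive_plausible(a, 3));
    return 0;
}
```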
To better understand the nature of the plausible patches, we performed a manual inspection of all the machine-generated patches produced by SPR and GenProg (both plausible and correct patches), as well as of the correct developer-provided patches for these bugs. Specifically, we manually analyzed each patch and attempted to answer two questions:
- Q1: What makes a given patch plausible, and why is it incorrect (i.e., why does it not capture the semantics of the developer-provided patch)?
- Q2: Do the plausible patches, as a whole, share any common syntactic features that explain their "plausibility" and also distinguish them from the pool of correct patches (human- as well as machine-generated)? One recurring feature of this kind is sketched after this list.
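As a hypothetical illustration of the kind of recurring syntactic feature Q2 asks about (the function names and test scenario below are invented, not drawn from the actual patch pool), the C sketch below shows a patch that merely inserts an early exit before the faulty code: the failing test now passes, but most of the original behavior is silently discarded.

```c
/* Hypothetical sketch of a recurring syntactic pattern in plausible but
 * incorrect patches: an early exit is inserted so the faulty code is
 * never reached on the failing test input. */
#include <stdio.h>

/* Buggy: crashes when s is NULL (the failing test passes NULL). */
size_t count_chars_buggy(const char *s, char c) {
    size_t n = 0;
    for (const char *p = s; *p; p++)   /* bug: no NULL check */
        if (*p == c)
            n++;
    return n;
}

/* Machine-generated patch: an early return is inserted before the loop.
 * The NULL-input test passes, but every input now yields 0. */
size_t count_chars_patched(const char *s, char c) {
    size_t n = 0;
    return n;                          /* patched: early exit inserted */
    for (const char *p = s; *p; p++)   /* original code, now unreachable */
        if (*p == c)
            n++;
    return n;
}

int main(void) {
    /* The failing test only checks that a NULL input does not crash. */
    printf("NULL input   -> %zu\n", count_chars_patched(NULL, 'a'));
    /* Behavior the weak suite never exercises: the correct answer is 2. */
    printf("\"banana\",'n' -> %zu (original returns %zu)\n",
           count_chars_patched("banana", 'n'),
           count_chars_buggy("banana", 'n'));
    return 0;
}
```

Edits of this flavor are syntactically tiny yet semantically destructive, which is what makes them both easy for search-based tools to generate and easy to characterize as common features of plausible but incorrect patches.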
Table: Anti-patterns found in patches generated by GenProg and AE
Table: Anti-patterns found in patches generated by SPR
Experimental Data
Data for GenProg and mGenProg on the CoREBench subjects
Data for SPR and mSPR on the CoREBench subjects
Contact
Contact Shin Hwei Tan (@anti-patterns) for more information about anti-patterns.