Anti-patterns in Search-based Program Repair

This repository contains the data for the paper "Anti-patterns in Search-based Program Repair".



  • Anti-patterns in Search-based Program Repair
    Shin Hwei Tan, Hiroaki Yoshida, Mukul Prasad and Abhik Roychoudhury
    Foundations of Software Engineering (FSE), 2016 [FSE Paper] [FSE Slide]

  • Prevalence of anti-patterns

    Although various search-based program repair techniques show promising results in generating a large number of patches, prior studies show that most of these patches are merely plausible but incorrect. Specifically, 28 out of 40 (i.e., 70%) of the plausible patches generated by SPR are incorrect, while 50 out of 53 (i.e., 94.34%) of the plausible patches generated by GenProg are incorrect, on the GenProg benchmarks.

    To better understand the nature of the plausible patches, we performed a manual inspection of all the machine-generated patches produced by SPR and GenProg (both plausible and correct ones), as well as of the correct developer-provided patches for these bugs. Specifically, we manually analyzed each patch and attempted to answer two questions:

    1. What makes a given patch plausible, and why is it incorrect (i.e., why does it not capture the semantics of the developer-provided patch)?
    2. Do the plausible patches, as a whole, share any common syntactic features that explain their "plausibility" and distinguish them from the pool of correct patches (human- as well as machine-generated)?

    The aim was to find a compact set of syntactic features that are independent of the repair templates used by the tool.
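    To make the plausible-vs-correct distinction concrete, below is a minimal, hypothetical C sketch (not taken from the paper's dataset; all function names are illustrative). A functionality-deleting edit can pass a weak test suite that only exercises the failing input, while a developer fix preserves behavior on all inputs:

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Buggy original: crashes when p is NULL. */
    static int count_buggy(const char *p) {
        int n = 0;
        while (*p != '\0') { n++; p++; }
        return n;
    }

    /* Correct developer patch: handle NULL explicitly, preserve behavior
       on all other inputs. */
    static int count_fixed(const char *p) {
        if (p == NULL) return 0;
        int n = 0;
        while (*p != '\0') { n++; p++; }
        return n;
    }

    /* Plausible but incorrect machine patch: the crashing loop is simply
       deleted, which also makes the failing test pass but discards the
       function's intended behavior. */
    static int count_plausible(const char *p) {
        (void)p;
        return 0;
    }

    int main(void) {
        /* Both patches pass the single failing test (NULL input)... */
        assert(count_fixed(NULL) == 0);
        assert(count_plausible(NULL) == 0);
        /* ...but only the developer patch preserves behavior elsewhere. */
        assert(count_buggy("abc") == 3);
        assert(count_fixed("abc") == 3);
        assert(count_plausible("abc") == 0);  /* functionality deleted */
        return 0;
    }
    ```

    A test suite that checks only the NULL case cannot tell these two patches apart, which is exactly why syntactic anti-patterns (such as deleting functionality outright) are useful as an additional filter during the repair search.
    
    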

    Table that shows the anti-patterns found in patches generated by GenProg AE

    Table that shows the anti-patterns found in patches generated by SPR

    Experimental Data

    Data for GenProg and mGenProg on the CoREBench subjects

    Data for SPR and mSPR on the CoREBench subjects


    Contact Shin Hwei Tan (@anti-patterns) for information about anti-patterns.