An empirical comparison of automated generation and classification techniques for object-oriented unit testing

Download: PDF, slides (PDF), slides (PowerPoint).

“An empirical comparison of automated generation and classification techniques for object-oriented unit testing” by Marcelo d'Amorim, Carlos Pacheco, Darko Marinov, Tao Xie, and Michael D. Ernst. In ASE 2006: Proceedings of the 21st Annual International Conference on Automated Software Engineering (Tokyo, Japan), Sep. 2006, pp. 59–68.

Abstract

Testing involves two major activities: generating test inputs and determining whether they reveal faults. Automated test generation techniques include random generation and symbolic execution. Automated test classification techniques include those based on uncaught exceptions and on violations of operational models inferred from manually provided tests. Previous research on unit testing for object-oriented programs developed three pairs of these techniques: model-based random testing, exception-based random testing, and exception-based symbolic testing. We develop a novel pair, model-based symbolic testing. We also empirically compare all four pairs of these generation and classification techniques. The results show that the pairs are complementary (i.e., they reveal faults differently), each with its own strengths and weaknesses.
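
To make the "exception-based random testing" pair concrete, here is a minimal sketch, not taken from the paper or its tools: it randomly generates method-call sequences on a toy class under test (generation) and flags any sequence that raises an uncaught runtime exception as potentially fault-revealing (classification). All class and method names here (RandomExceptionTester, BoundedStack) are illustrative assumptions, not artifacts from the paper.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Illustrative sketch of exception-based random testing:
    // generate random call sequences, classify by uncaught exceptions.
    public class RandomExceptionTester {
        private static final Random RNG = new Random(42);

        public static void main(String[] args) {
            List<String> failing = new ArrayList<>();
            for (int trial = 0; trial < 1000; trial++) {
                BoundedStack stack = new BoundedStack(4); // toy class under test
                StringBuilder trace = new StringBuilder();
                try {
                    // Generation step: a random sequence of push/pop calls.
                    for (int op = 0; op < 10; op++) {
                        if (RNG.nextBoolean()) {
                            int v = RNG.nextInt(100);
                            trace.append("push(").append(v).append("); ");
                            stack.push(v);
                        } else {
                            trace.append("pop(); ");
                            stack.pop();
                        }
                    }
                } catch (RuntimeException e) {
                    // Classification step: an uncaught exception marks the
                    // generated sequence as potentially fault-revealing.
                    failing.add(trace + " -> " + e);
                }
            }
            System.out.println(failing.size() + " of 1000 sequences raised exceptions");
            failing.stream().limit(3).forEach(System.out::println);
        }
    }

    // Toy class under test: both overflow and underflow escape as
    // ArrayIndexOutOfBoundsException rather than a documented error.
    class BoundedStack {
        private final int[] data;
        private int size;

        BoundedStack(int capacity) { this.data = new int[capacity]; }

        void push(int v) { data[size++] = v; } // fails past capacity

        int pop() { return data[--size]; }     // fails on empty stack
    }

In practice, the tools compared in the paper also filter out exceptions that merely signal illegal inputs; this sketch treats every uncaught RuntimeException as a candidate fault.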

BibTeX entry:

@inproceedings{dAmorimPMXE2006,
   author = {Marcelo d'Amorim and Carlos Pacheco and Darko Marinov and Tao
	Xie and Michael D. Ernst},
   title = {An empirical comparison of automated generation and
	classification techniques for object-oriented unit testing},
   booktitle = {ASE 2006: Proceedings of the 21st Annual International
	Conference on Automated Software Engineering},
   pages = {59--68},
   address = {Tokyo, Japan},
   month = sep,
   year = {2006}
}
