SBST Tool Competition 2021

Riccio V.
Co-first author
2021-01-01

Abstract

We report on the organization, challenges, and results of the ninth edition of the Java Unit Testing Competition as well as the first edition of the Cyber-Physical Systems Testing Tool Competition.

Java Unit Testing Competition. This year, five tools, Randoop, UtBot, Kex, Evosuite, and EvosuiteDSE, were executed on a benchmark with (i) new classes under test, selected from three open-source software projects, and (ii) the set of classes from three projects considered in the eighth edition. We relied on an improved Docker infrastructure to execute the tools and the subsequent coverage and mutation analysis. Given the high number of participants, we considered only two time budgets for test case generation: thirty seconds and two minutes.

Cyber-Physical Systems Testing Tool Competition. Five tools, Deeper, Frenetic, GABExplore, GABExploit, and Swat, competed on testing self-driving car software by generating simulation-based tests using our new testing infrastructure. We considered two experimental settings to study the test generators' transitory and asymptotic behaviors, and evaluated the tools' test generation effectiveness and the diversity of the exposed failures.

This paper describes our methodology, the statistical analysis of the results, the contestant tools, and the challenges faced while running the competition experiments.
Year: 2021
ISBN: 978-1-6654-4571-9
Files in this record:
riccio_SBSTtool2021.pdf (not available for download; a copy can be requested via the repository)
Type: Publisher's version (PDF)
License: Non-public
Size: 454.38 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1240905
Citations
  • PMC: not available
  • Scopus: 57
  • Web of Science (ISI): 42