SBST Tool Competition 2021
Riccio V. (co-first author)
2021-01-01
Abstract
We report on the organization, challenges, and results of the ninth edition of the Java Unit Testing Competition as well as the first edition of the Cyber-Physical Systems Testing Tool Competition.

Java Unit Testing Competition. This year, five tools, Randoop, UtBot, Kex, Evosuite, and EvosuiteDSE, were executed on a benchmark with (i) new classes under test, selected from three open-source software projects, and (ii) the set of classes from three projects considered in the eighth edition. We relied on an improved Docker infrastructure to execute the tools and the subsequent coverage and mutation analysis. Given the high number of participants, we considered only two time budgets for test case generation: thirty seconds and two minutes.

Cyber-Physical Systems Testing Tool Competition. Five tools, Deeper, Frenetic, GABExplore, GABExploit, and Swat, competed on testing self-driving car software by generating simulation-based tests using our new testing infrastructure. We considered two experimental settings to study test generators' transitory and asymptotic behaviors, and evaluated the tools' test generation effectiveness and the diversity of the exposed failures.

This paper describes our methodology, the statistical analysis of the results together with the contestant tools, and the challenges faced while running the competition experiments.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.