Driven by new software development processes and testing on clouds, system and integration testing nowadays tends to produce an enormous number of alarms. Such test alarms place an almost unbearable burden on software testing engineers, who have to manually analyze the causes of these alarms. The causes are critical because they determine which stakeholders are responsible for fixing the bugs detected during testing. In this paper, we present a novel approach that aims to relieve this burden by automating the procedure. Our approach, called Cause Analysis Model, exploits information retrieval techniques to efficiently infer test alarm causes based on test logs. We have developed a prototype and evaluated our tool on two industrial datasets with more than 14,000 test alarms. Experiments on the two datasets show that our tool achieves an accuracy of 58.3% and 65.8%, respectively, outperforming the baseline algorithms by up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1 s per cause analysis. Owing to these attractive experimental results, our industrial partner, a world-leading information and communication technology company, has deployed the tool, and it has achieved an average accuracy of 72% after two months of running.
Our technique provides a practical direction for companies that need to analyze large volumes of test alarms.
Labeling root causes for test alarms requires professional knowledge about the products under test, so it may be hard for researchers to label causes for open-source software. We would therefore like to share our practice and datasets to contribute to the rising area of test log analysis and to help further improve the results on this problem.
You can download the datasets from the link below: Datasets. The download contains an example of the test logs, a readme file, and the test logs of the two datasets.
If you have any questions about the datasets, please contact "li1989(at)mail.dlut.edu.cn".
All the experiments are conducted on these datasets. The 2-shingling model is constructed from the neighbouring IDs in the datasets.
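For readers unfamiliar with 2-shingling, the sketch below shows one way to build 2-shingles from a log's sequence of term IDs and to compare two logs with Jaccard similarity. The function names, the example IDs, and the use of Jaccard as the similarity measure are illustrative assumptions, not the released implementation.

    def two_shingles(term_ids):
        """Return the set of 2-shingles, i.e. pairs of neighbouring
        term IDs, for one tokenized test log."""
        return {(term_ids[i], term_ids[i + 1])
                for i in range(len(term_ids) - 1)}

    def jaccard(a, b):
        """Jaccard similarity between two shingle sets, a common IR
        measure for comparing logs (the paper's exact metric may differ)."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Hypothetical example: two short logs tokenized into term IDs.
    log1 = [7, 42, 42, 3]
    log2 = [7, 42, 3]
    s1, s2 = two_shingles(log1), two_shingles(log2)
    print(s1)               # {(7, 42), (42, 42), (42, 3)}
    print(jaccard(s1, s2))  # 0.666..., the overlap between the two logs

Shingling over neighbouring IDs, rather than individual tokens, preserves some local ordering of the log lines, which is why it is a common choice for log similarity in information retrieval settings.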