DeepSafe: A Data-driven Approach for Assessing Robustness of Neural Networks

“DeepSafe: A Data-driven Approach for Assessing Robustness of Neural Networks” by Divya Gopinath, Guy Katz, Corina S. Păsăreanu, and Clark Barrett. In Proceedings of the 16th International Symposium on Automated Technology for Verification and Analysis (ATVA '18) (Shuvendu Lahiri and Chao Wang, eds.), Oct. 2018, pp. 3-19. Los Angeles, California.

Abstract

Deep neural networks have achieved impressive results in many complex applications, including classification tasks in image and speech recognition, pattern analysis, and perception in self-driving vehicles. However, it has been observed that even highly trained networks are very vulnerable to adversarial perturbations: minimal changes to correctly classified inputs can lead to wrong predictions, raising serious security and safety concerns. Existing techniques for checking robustness against such perturbations only search locally around a few individual inputs, providing limited guarantees. We propose DeepSafe, a novel approach for automatically assessing the overall robustness of a neural network. DeepSafe applies clustering over known labeled data and leverages off-the-shelf constraint solvers to automatically identify and check safe regions in which the network is robust, i.e., regions in which all inputs are guaranteed to be classified correctly. We also introduce the concept of targeted robustness, which ensures that the network is guaranteed not to misclassify inputs within a region to a specific target (adversarial) label. We evaluate DeepSafe on a neural-network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu) and on the well-known MNIST network. For these networks, DeepSafe identified many regions that were safe, and also found adversarial perturbations of interest.
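The following is a minimal illustrative sketch of the workflow described in the abstract, not the authors' implementation: cluster the labeled inputs of each class, derive a candidate region (a ball around each cluster), and then check whether the network assigns the expected label everywhere in the region. The function names, the use of k-means, and the sampling-based check are assumptions for illustration; DeepSafe itself hands each region and target label to a constraint solver for a sound verdict.

```python
# Illustrative sketch only -- not the DeepSafe implementation.
import numpy as np
from sklearn.cluster import KMeans

def candidate_regions(inputs, labels, clusters_per_label=3):
    """Cluster the inputs of each label and return (centroid, radius, label)
    triples; the radius is the distance to the farthest cluster member, so
    the ball covers the whole cluster."""
    regions = []
    for lbl in np.unique(labels):
        pts = inputs[labels == lbl]
        k = min(clusters_per_label, len(pts))
        km = KMeans(n_clusters=k, n_init=10).fit(pts)
        for c in range(k):
            members = pts[km.labels_ == c]
            centroid = km.cluster_centers_[c]
            radius = np.max(np.linalg.norm(members - centroid, axis=1))
            regions.append((centroid, radius, lbl))
    return regions

def region_appears_robust(network, centroid, radius, label, n_samples=1000):
    """Cheap sampling check standing in for the solver call: DeepSafe would
    instead encode the region and (target) label as a constraint query and
    obtain a guarantee, not just an empirical estimate."""
    dirs = np.random.randn(n_samples, centroid.size)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = centroid + dirs * np.random.uniform(0.0, radius, (n_samples, 1))
    return all(network(p) == label for p in pts)
```

Here `network` is assumed to be any callable mapping an input vector to a predicted label; a region that fails even this sampled check already yields concrete adversarial perturbations of interest.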

BibTeX entry:

@inproceedings{GKP+18,
   author = {Divya Gopinath and Guy Katz and Corina S. P\u{a}s\u{a}reanu
	and Clark Barrett},
   editor = {Shuvendu Lahiri and Chao Wang},
   title = {DeepSafe: A Data-driven Approach for Assessing Robustness of
	Neural Networks},
   booktitle = {Proceedings of the 16th International Symposium
	on Automated Technology for Verification and Analysis (ATVA '18)},
   series = {Lecture Notes in Computer Science},
   volume = {11138},
   pages = {3--19},
   publisher = {Springer},
   month = oct,
   year = {2018},
   isbn = {978-3-030-01090-4},
   doi = {10.1007/978-3-030-01090-4_1},
   note = {Los Angeles, California},
   url = {http://theory.stanford.edu/~barrett/pubs/GKP+18.pdf}
}
