Verifying Recurrent Neural Networks Using Invariant Inference

"Verifying Recurrent Neural Networks Using Invariant Inference" by Yuval Jacoby, Clark Barrett, and Guy Katz. In Proceedings of the 18th International Symposium on Automated Technology for Verification and Analysis (ATVA '20) (Dang Van Hung and Oleg Sokolsky, eds.), Oct. 2020, pp. 57–74.

Abstract

Deep neural networks are revolutionizing the way complex systems are developed. However, these automatically-generated networks are opaque to humans, making it difficult to reason about them and guarantee their correctness. Here, we propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks. Recurrent neural networks play a key role in, e.g., speech recognition, and their verification is crucial for guaranteeing the reliability of many critical systems. Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks to simpler, non-recurrent problems. Experiments with a proof-of-concept implementation of our approach demonstrate that it performs orders of magnitude better than the state of the art.
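
To illustrate the core idea in miniature, here is a hypothetical Python sketch (not the authors' algorithm or tool): it hypothesizes a linear invariant h_t <= alpha * t on a single ReLU memory unit h_t = ReLU(w_x * x_t + w_h * h_{t-1}), checks that the invariant is inductive via a non-recurrent step condition, and then answers a bounded-output query with one final check. The one-neuron network, its weights, the input bound, and the query threshold are all assumptions made for this example.

# Illustrative sketch only: infer a linear invariant on one RNN memory
# unit, then answer a verification query with non-recurrent checks.
# All weights and bounds below are hypothetical.

def relu(v):
    return max(0.0, v)

def check_inductive(alpha, w_x, w_h, x_max, t_max):
    """Check that h_t <= alpha * t is inductive for
    h_t = ReLU(w_x * x_t + w_h * h_{t-1}), |x_t| <= x_max, h_0 = 0.
    Assumes w_h >= 0, so the worst previous state is the invariant's
    own upper bound. Each step condition is a *non-recurrent* check:
    assume the invariant at t-1, prove it at t."""
    for t in range(1, t_max + 1):
        worst_prev = alpha * (t - 1)           # invariant hypothesis at t-1
        worst_now = relu(w_x * x_max + w_h * worst_prev)
        if worst_now > alpha * t:
            return False
    return True

def infer_invariant(w_x, w_h, x_max, t_max, alpha_hi=1e6, iters=60):
    """Binary-search the smallest slope alpha whose invariant is inductive."""
    lo, hi = 0.0, alpha_hi
    if not check_inductive(hi, w_x, w_h, x_max, t_max):
        return None                            # no linear invariant found
    for _ in range(iters):
        mid = (lo + hi) / 2
        if check_inductive(mid, w_x, w_h, x_max, t_max):
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical query: is the memory unit's value at step T below 12.0?
w_x, w_h, x_max, T = 0.5, 0.9, 1.0, 20
alpha = infer_invariant(w_x, w_h, x_max, T)
if alpha is not None and alpha * T <= 12.0:
    print(f"verified: h_T <= {alpha * T:.3f} <= 12.0")
else:
    print("inconclusive: invariant too weak or not found")

The point of the sketch is the shape of the argument: once the invariant is established, every remaining check is a one-step, non-recurrent condition, which is what lets the recurrent verification problem be handed off to simpler, non-recurrent reasoning.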

BibTeX entry:

@inproceedings{JBK20,
   author = {Yuval Jacoby and Clark Barrett and Guy Katz},
   editor = {Dang Van Hung and Oleg Sokolsky},
   title = {Verifying Recurrent Neural Networks Using Invariant Inference},
   booktitle = {Proceedings of the 18th International Symposium
	on Automated Technology for Verification and Analysis ({ATVA} '20)},
   series = {Lecture Notes in Computer Science},
   volume = {12302},
   pages = {57--74},
   publisher = {Springer International Publishing},
   month = oct,
   year = {2020},
   doi = {10.1007/978-3-030-59152-6_3},
   url = {http://theory.stanford.edu/~barrett/pubs/JBK20.pdf}
}