“Toward Certified Robustness Against Real-World Distribution Shifts” by Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George Pappas, Hamed Hassani, Corina Păsăreanu, and Clark Barrett. In Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (Patrick McDaniel and Nicolas Papernot, eds.), Feb. 2023, pp. 537–553. Raleigh, NC.
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by considering a neural-symbolic verification framework in which generative models are trained to learn perturbations from data and specifications are defined with respect to the output of these learned models. A pervasive challenge in this setting is that although S-shaped activations (e.g., sigmoid, tanh) are common in the last layer of deep generative models, existing verifiers cannot tightly approximate them. To address this challenge, we propose a general meta-algorithm for handling S-shaped activations that leverages classical notions of counterexample-guided abstraction refinement (CEGAR). The key idea is to “lazily” refine the abstraction of S-shaped functions to exclude spurious counterexamples found under the previous abstraction, thus guaranteeing progress in the verification process while keeping the state space small. For networks with sigmoid activations, we show that our technique outperforms state-of-the-art verifiers in certifying robustness against both canonical adversarial perturbations and numerous real-world distribution shifts. Furthermore, experiments on the MNIST and CIFAR-10 datasets show that models trained with distribution-shift-aware algorithms achieve significantly higher certified robustness against distribution shifts.
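To make the lazy-refinement idea concrete, below is a minimal Python sketch written for this page; it is not the paper's implementation. It checks a deliberately simple property, sigmoid(x) <= threshold on an interval [lo, hi] with lo >= 0 (the concave region of the sigmoid, where tangent lines are sound linear upper bounds), and the grid search standing in for an off-the-shelf linear verifier is a hypothetical simplification. The names verify_with_cegar, tangent_cut, and max_admitted are illustrative, not from the paper.

import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def tangent_cut(x0: float):
    # Tangent to the sigmoid at x0. For x0 >= 0 the sigmoid is concave,
    # so the tangent is a sound linear upper bound: sigmoid(x) <= a*x + b.
    s = sigmoid(x0)
    a = s * (1.0 - s)
    return (a, s - a * x0)

def verify_with_cegar(lo, hi, threshold, grid=401, max_iters=50):
    # Try to certify sigmoid(x) <= threshold for all x in [lo, hi],
    # refining the abstraction only when a counterexample is spurious.
    assert lo >= 0.0, "sketch assumes the concave region of the sigmoid"
    cuts = []  # linear upper bounds (a, b), i.e., y <= a*x + b

    def max_admitted(x):
        # Largest output the current abstraction admits at input x
        # (sigmoid(hi) is the initial interval bound; sigmoid is monotone).
        return min([sigmoid(hi)] + [a * x + b for (a, b) in cuts])

    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    for _ in range(max_iters):
        # Hypothetical stand-in for a real verifier: grid-search the
        # abstraction for points that violate the property.
        violating = [x for x in xs if max_admitted(x) > threshold]
        if not violating:
            return True  # property certified under the current abstraction
        x = violating[len(violating) // 2]  # candidate counterexample
        y = max_admitted(x)
        if abs(y - sigmoid(x)) < 1e-6:
            return False  # the counterexample lies on the sigmoid: genuine
        # Spurious: the abstraction admits (x, y) but the sigmoid does not.
        # Lazily add one tangent cut at x; its value there equals sigmoid(x),
        # so (x, y) is excluded while the refined abstraction still
        # over-approximates the sigmoid (the guaranteed progress noted above).
        cuts.append(tangent_cut(x))
    return None  # inconclusive within the iteration budget

For example, verify_with_cegar(0.0, 4.0, threshold=0.99) returns True immediately, since the initial interval bound sigmoid(4) ≈ 0.982 already certifies the property, while verify_with_cegar(0.0, 4.0, threshold=0.90) adds two tangent cuts before uncovering a genuine counterexample near x ≈ 3.1 and returning False.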
BibTeX entry:
@inproceedings{WTR+23,
  author    = {Haoze Wu and Teruhiro Tagomori and Alexander Robey and Fengjun Yang and Nikolai Matni and George Pappas and Hamed Hassani and Corina P{\u{a}}s{\u{a}}reanu and Clark Barrett},
  editor    = {Patrick McDaniel and Nicolas Papernot},
  title     = {Toward Certified Robustness Against Real-World Distribution Shifts},
  booktitle = {Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)},
  pages     = {537--553},
  publisher = {IEEE},
  month     = feb,
  year      = {2023},
  doi       = {10.1109/SaTML54575.2023.00042},
  note      = {Raleigh, NC},
  url       = {http://theory.stanford.edu/~barrett/pubs/WTR+23.pdf}
}