Title information
Rambau, Jörg; Richter, Rónán R.C.:
Using MILPs for creating robust Adversarial Examples.
2023
Event: BayLDS-Tag, 10.02.2023, Bayreuth.
(Event contribution: Workshop, Poster)
Abstract
With Deep Neural Networks (DNNs) being used in more and more fields, including applications with high security requirements or wide social implications, there is an increasing need to study the limitations and vulnerabilities of such networks. One way to mislead a DNN, and thereby potentially cause harm, is the use of Adversarial Examples. These are inputs for the DNN that are close to ordinary instances of one category but carry small changes, sometimes even invisible to the human eye, such that the DNN erroneously assigns a totally different category. Such Adversarial Examples may be generated artificially to actively fool a network, or they may simply appear at random in applications, for example due to noise.
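As a point of reference, the usual minimal-perturbation formalization of an Adversarial Example reads as follows (this formula is assumed for illustration; the abstract itself does not state one). Here $C$ denotes the category predicted by the DNN and $\bar{x}$ a correctly classified reference input:

\begin{align*}
  \min_{x}\; & \lVert x - \bar{x} \rVert \\
  \text{s.t.}\; & C(x) \neq C(\bar{x})
\end{align*}

That is, one seeks the smallest change to $\bar{x}$ that flips the predicted category.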
One way of systematically generating Adversarial Examples for DNNs consisting of multiple layers of rectified linear units is a mixed-integer linear programming (MILP) model, as proposed by Fischetti and Jo (2018). Using such global optimization techniques has the advantage that, on top of generating the best Adversarial Example, one can prove that no better one exists. Thus, one can guarantee that the DNN cannot be fooled by inputs with fewer manipulations.
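The following is a minimal, self-contained sketch of such a MILP in the spirit of Fischetti and Jo (2018), using the open-source PuLP modeling library. The toy network, its random weights, the big-M constant M, the margin EPS, and the chosen classes are illustrative assumptions, not data from the poster. Each ReLU unit is linearized by splitting its pre-activation into a positive part h and a slack s, with a binary variable z selecting the active branch.

import numpy as np
import pulp

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 5, 3             # toy layer sizes (assumed)
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
W2, b2 = rng.normal(size=(n_out, n_hid)), rng.normal(size=n_out)
x_bar = rng.uniform(size=n_in)           # correctly classified reference input
true_cls, target_cls = 0, 1              # push the net from class 0 to class 1
M, EPS = 1e3, 1e-3                       # big-M bound and decision margin (assumed)

prob = pulp.LpProblem("adversarial_example", pulp.LpMinimize)

# Perturbed input, restricted to the valid input box [0, 1]^n.
x = [pulp.LpVariable(f"x_{i}", lowBound=0, upBound=1) for i in range(n_in)]
# Auxiliary variables linearizing the L1 distance |x_i - x_bar_i|.
d = [pulp.LpVariable(f"d_{i}", lowBound=0) for i in range(n_in)]
for i in range(n_in):
    prob += d[i] >= x[i] - float(x_bar[i])
    prob += d[i] >= float(x_bar[i]) - x[i]
prob += pulp.lpSum(d)                    # objective: minimal L1 perturbation

# ReLU hidden layer: pre-activation = h - s; the binary z switches between
# the active branch (z = 1 forces s = 0) and the inactive one (z = 0 forces h = 0).
h = [pulp.LpVariable(f"h_{j}", lowBound=0) for j in range(n_hid)]
s = [pulp.LpVariable(f"s_{j}", lowBound=0) for j in range(n_hid)]
z = [pulp.LpVariable(f"z_{j}", cat=pulp.LpBinary) for j in range(n_hid)]
for j in range(n_hid):
    pre = pulp.lpSum(float(W1[j, i]) * x[i] for i in range(n_in)) + float(b1[j])
    prob += pre == h[j] - s[j]
    prob += h[j] <= M * z[j]
    prob += s[j] <= M * (1 - z[j])

# Output logits are affine in h; forcing the target logit above the true
# logit by EPS makes any feasible x a guaranteed misclassification.
logit = [pulp.lpSum(float(W2[k, j]) * h[j] for j in range(n_hid)) + float(b2[k])
         for k in range(n_out)]
prob += logit[target_cls] >= logit[true_cls] + EPS

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print("adversarial input:", [round(pulp.value(v), 4) for v in x])

Because the solver explores the full branch-and-bound tree, an optimal solution comes with a certificate that no input with a smaller L1 perturbation achieves this misclassification, which is exactly the guarantee mentioned above.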
The goal of our research is to go one step further by developing mathematical models for generating robust Adversarial Examples. These examples cannot be ruled out by minor modifications of the DNN, e.g., by slightly more training data. This allows us to investigate the general limitations of DNNs to a greater extent, since robust Adversarial Examples are valid for a whole class of DNNs.
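One conceivable way to make this precise, stated here only as an illustrative reading and not necessarily the authors' exact model, is to require the adversarial constraint to hold simultaneously for every network $N$ in a family $\mathcal{N}$, e.g. all networks obtained from the original one by minor retraining:

\begin{align*}
  \min_{x}\; & \lVert x - \bar{x} \rVert \\
  \text{s.t.}\; & C_N(x) \neq C_N(\bar{x}) \quad \text{for all } N \in \mathcal{N}
\end{align*}

Any feasible $x$ then remains adversarial for the whole class of DNNs at once.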
