1 Introduction

State-of-the-art deep networks have recently been shown to be surprisingly unstable to adversarial perturbations (Szegedy et al. 2014). Unlike random noise, adversarial perturbations are minimal perturbations that are specifically crafted to switch the estimated label of the classifier. On vision tasks, the results of Szegedy et al. (2014) show that perturbations that are hardly perceptible to the human eye are sufficient to change the decision of a deep network, even when the classifier performs close to the level of the human visual system. This surprising instability raises interesting theoretical questions that we begin to address in this paper. What causes classifiers to be unstable to adversarial perturbations? Are deep networks the only classifiers that exhibit such unstable behaviour? Is it at all possible to design training algorithms that build robust deep networks, or is the instability to adversarial noise an inherent feature of all deep networks? Can we quantify the difference between random noise and adversarial noise? Providing theoretical answers to these questions is crucial for building classifiers that are robust to hostile adversarial perturbations.

In this paper, we introduce a framework to formally study the robustness of classifiers to adversarial perturbations in the binary setting. We provide a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate and specialize the obtained upper bound for the families of linear and quadratic classifiers. In both cases, our results show the existence of a fundamental limit on the robustness to adversarial perturbations. This limit is expressed in terms of a distinguishability measure between the classes, which depends on the considered family of classifiers. Specifically, for linear classifiers, the distinguishability is defined as the distance between the means of the two classes, while for quadratic classifiers, it is defined as the distance between the matrices of second order moments of the two classes. For both families, our upper bound on the robustness holds for all classifiers in the family and for any training procedure; we regard this independence from the training procedure as a strength of the result. It has the following important implication: in difficult classification tasks involving a small value of distinguishability, any classifier in the family with a low misclassification rate is vulnerable to adversarial perturbations. Importantly, the distinguishability parameter associated with quadratic classifiers is much larger than that of linear classifiers for many datasets of interest, which suggests the existence of robust classifiers in more flexible classification families, even for tasks where, provably, no linear classifier is both robust and accurate. We further compare the robustness of linear classifiers to adversarial perturbations with the more traditional notion of robustness to random uniform noise, where perturbation vectors are sampled uniformly at random from a sphere. The latter robustness is shown to be larger than the former by a factor of \(\sqrt{d}\) (with d the dimension of the input signals), which shows that in high dimensional classification tasks, linear classifiers can be robust to random noise even for small values of the distinguishability. We illustrate the newly introduced concepts and our theoretical results on a running example used throughout the paper. We complement our theoretical analysis with experimental results, and show that the intuition obtained from the theoretical analysis also holds for more complex classifiers.

The phenomenon of adversarial instability has recently attracted a lot of attention from the deep network community. In Szegedy et al. (2014), the adversarial robustness of different classifiers is measured as the magnitude of the perturbation required to misclassify a data point, and state-of-the-art classifiers are shown to have small robustness in this sense. Several attempts have since been made to make deep networks robust to adversarial perturbations (Chalupka et al. 2014; Gu and Rigazio 2014; Bendale and Boult 2016), while more advanced techniques have been proposed to defeat the classifiers (Carlini and Wagner 2016). Moreover, a distinct but related phenomenon has been explored in Nguyen et al. (2014). Closer to our work, the authors of Goodfellow et al. (2015) provided an empirical explanation of the phenomenon of adversarial instability, and designed an efficient method to find adversarial examples. Specifically, contrary to the original explanation provided in Szegedy et al. (2014), the authors argue that it is the “linear” nature of deep nets that causes the adversarial instability. Our paper instead adopts a rigorous mathematical perspective on the problem of adversarial instability and shows that adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task.

Our work should not be confused with works on the security of machine learning algorithms under adversarial attacks (Biggio et al. 2012; Barreno et al. 2006; Dalvi et al. 2004). These works specifically study attacks that manipulate the learning system (e.g., change the decision function by injecting malicious training points), as well as defense strategies to counter these attacks. This setting significantly differs from ours, as we examine the robustness of a fixed classifier to adversarial perturbations (that is, the classifier cannot be manipulated). The stability of learning algorithms has also been defined and extensively studied in Bousquet and Elisseeff (2002), Lugosi and Pawlak (1994). Again, this notion of stability differs from the one studied here, as we are interested in the robustness of fixed classifiers, and not of learning algorithms. The security of machine learning algorithms at test time has also been examined in different scenarios, in particular when the adversary has only limited knowledge about the classifier (Biggio et al. 2013; Dekel et al. 2010; Srndic and Laskov 2014). Unlike these papers, which provide an empirical assessment (and improvement) of the robustness of classifiers to different types of attacks, our work establishes fundamental upper bounds on the robustness of classifiers, which cannot be violated by any learning algorithm.

The construction of learning algorithms that achieve robustness of classifiers to data corruption has been an active area of research in machine learning and robust optimization (see e.g., Caramanis et al. 2012 and references therein). For a specific disturbance model on the data samples, the robust optimization approach for constructing robust classifiers seeks to minimize the worst possible empirical error under such disturbances (Lanckriet et al. 2003; Xu et al. 2009). It is shown that, for many disturbance models, the desired objective function can be written as a tractable convex optimization problem. Our work studies the robustness of classifiers from a different perspective; we establish upper bounds on the robustness of classifiers independently of the learning algorithms. That is, using our bounds, we can certify the instability of a class of classifiers to adversarial perturbations, independently of the learning mechanism. In other words, while algorithmic and optimization aspects of robust classifiers have been studied in the above works, we focus on fundamental limits on the adversarial robustness of classifiers that are independent of the learning scheme.

The paper is structured as follows: Sect. 2 introduces the problem setting. In Sect. 3, we introduce a running example that is used throughout the paper. We introduce in Sect. 4 a theoretical framework for studying the robustness to adversarial perturbations. In the following two sections, two case studies are analyzed in detail. The robustness of linear classifiers (to adversarial and random noise) is studied in Sect. 5. In Sect. 6, we study the adversarial robustness of quadratic classifiers. Experimental results illustrating our theoretical analysis are given in Sect. 7. Proofs and additional discussion on the choice of the norms to measure perturbations are finally deferred to the “Appendix”.

2 Problem setting

We first introduce the framework and notations that are used for analyzing the robustness of classifiers to adversarial and uniform random noise. We restrict our analysis to the binary classification task, for simplicity. We expect similar conclusions for the multi-class case, but we leave that for future work. Let \(\mu \) denote the probability measure on \(\mathbb {R}^d\) of the data points that we wish to classify, and \(y(x) \in \{ -1, 1\}\) be the label of a point \(x \in \mathbb {R}^d\).Footnote 1 The distribution \(\mu \) is assumed to be of bounded support. That is, \(\mathbb {P}_{\mu } (x \in \mathcal {B}) = 1\), with \(\mathcal {B} = \{ x \in \mathbb {R}^d: \Vert x \Vert _2 \le M \}\) for some \(M > 0\). We further denote by \(\mu _{1}\) and \(\mu _{-1}\) the distributions of class 1 and class −1 in \(\mathbb {R}^d\), respectively. Let \(f: \mathbb {R}^d \rightarrow \mathbb {R}\) be an arbitrary classification function. The classification rule associated to f is simply obtained by taking the sign of f(x). The performance of a classifier f is usually measured by its risk, defined as the probability of misclassification according to \(\mu \):

$$\begin{aligned} R(f)&= \mathbb {P}_{\mu } (\text {sign} (f(x)) \ne y(x)) \\&= p_1 \mathbb {P}_{\mu _1} (f(x) < 0) + p_{-1} \mathbb {P}_{\mu _{-1}} (f(x) \ge 0), \end{aligned}$$

where \(p_{\pm 1} = \mathbb {P}_{\mu } (y(x) = \pm 1)\), and \(\text {sign} (a) = 1\) if \(a \ge 0\), \(\text {sign} (a) = -1\) if \(a < 0\).

The focus of this paper is to study the robustness of classifiers to adversarial perturbations in the input space \(\mathbb {R}^d\). Given a datapoint \(x \in \mathbb {R}^d\) sampled from \(\mu \), we denote by \(\varDelta _{\text {adv}} (x;f)\) the norm of the smallest perturbation that switches the signFootnote 2 of f:

$$\begin{aligned} \varDelta _{\text {adv}} (x; f) = \min _{r \in \mathbb {R}^d} \Vert r \Vert _2 \text { subject to } f(x) f(x+r) \le 0. \end{aligned}$$
(1)

Here, we use the \(\ell _2\) norm to quantify the perturbation; we refer the reader to “Appendix C” for a discussion of the norm choice. Unlike random noise, the above definition corresponds to a minimal noise, where the perturbation r is sought to flip the estimated label of x. In other words, it corresponds to the minimal distance from x to the decision boundary of the classifier \(\{x: f(x) = 0\}\). It is important to note that, while x is a datapoint sampled according to \(\mu \), the perturbed point \(x+r\) is not required to belong to the dataset (i.e., \(x+r\) can be outside the support of \(\mu \)). The robustness to adversarial perturbation of f is defined as the average of \(\varDelta _{\text {adv}} (x;f)\) over all x:

$$\begin{aligned} \rho _{\text {adv}}(f) = \mathbb {E}_\mu (\varDelta _{\text {adv}}(x; f)). \end{aligned}$$
(2)

In words, \(\rho _{\text {adv}} (f)\) is defined as the average norm of the minimal perturbations required to flip the estimated labels of the datapoints. Note that \(\rho _{\text {adv}} (f)\) is a property of both the classifier f and the distribution \(\mu \), but it is independent of the true labels of the datapoints y.Footnote 3 Moreover, it should be noted that \(\rho _{\text {adv}}\) is different from the margin considered by SVMs. In fact, SVM margins are traditionally defined as the minimal distance to the (linear) boundary over all training points, while \(\rho _{\text {adv}}\) is defined as the average distance to the boundary over all training points. In addition, distances in our case are measured in the input space, while the margin is defined in the feature space for kernel SVMs.

Fig. 1 Illustration of \(\varDelta _{\text {adv}} (x;f)\) and \(\varDelta _{\text {unif}, \epsilon } (x;f)\). The red line represents the classifier boundary. In this case, the quantity \(\varDelta _{\text {adv}} (x;f)\) is equal to the distance from x to this line. The radius of the sphere drawn around x is \(\varDelta _{\text {unif}, \epsilon } (x;f)\). Assuming \(f(x) > 0\), observe that the spherical cap in the region below the line has measure \(\epsilon \), which means that the probability that a random point sampled on the sphere has label \(+1\) is \(1-\epsilon \) (Color figure online)

In this paper, we also study the robustness of classifiers to random uniform noise, which we define as follows. For a given \(\epsilon \in [0,1]\), let

$$\begin{aligned} \varDelta _{\text {unif}, \epsilon }(x;f) =&\min _{\eta \ge 0} \eta \\&\text { s.t. } \mathbb {P}_{n \sim \eta \mathbb {S}} (f(x) f(x+n) \le 0) \ge \epsilon , \nonumber \end{aligned}$$
(3)

where \(\eta \mathbb {S}\) denotes the uniform measure on the sphere centered at 0 and of radius \(\eta \) in \(\mathbb {R}^d\). In words, \(\varDelta _{\text {unif}, \epsilon }(x;f)\) denotes the minimal radius of the sphere centered at x, such that perturbed points sampled uniformly at random from this sphere are misclassified with probability larger than \(\epsilon \). An illustration of \(\varDelta _{\text {unif}, \epsilon } (x;f)\) and \(\varDelta _{\text {adv}} (x;f)\) is given in Fig. 1. Similarly to adversarial perturbations, the point \(x+n\) can lie outside the support of \(\mu \), in general. Note moreover that \(\varDelta _{\text {unif}, \epsilon } (x;f)\) provides an upper bound on \(\varDelta _{\text {adv}} (x;f)\), for all \(\epsilon \). The \(\epsilon \)-robustness of f to random uniform noise is defined by:

$$\begin{aligned} \rho _{\text {unif}, \epsilon } (f) = \mathbb {E}_{\mu } (\varDelta _{\text {unif}, \epsilon } (x; f)). \end{aligned}$$
(4)

We summarize the quantities of interest in Table 1.

Table 1 Quantities of interest in the paper and their dependencies

3 Running example

We introduce in this section a running example used throughout the paper to illustrate the notion of adversarial robustness, and highlight its difference with the notion of risk. We consider a binary classification task on square images of size \(\sqrt{d} \times \sqrt{d}\). Images of class 1 (resp. class −1) contain exactly one vertical line (resp. horizontal line), and a small constant positive number a (resp. negative number \({-}a\)) is added to all the pixels of the images. That is, for class 1 (resp. −1) images, background pixels are set to a (resp. \({-}a\)), and pixels belonging to the line are equal to \(1+a\) (resp. \(1-a\)). Figure 2 illustrates the classification problem for \(d = 25\). The number of datapoints to classify is \(N = 2 \sqrt{d}\).

Clearly, the most relevant concept (in terms of visual appearance) that separates the two classes is the orientation of the line (i.e., horizontal vs. vertical). The bias of the image (i.e., the sum of all its pixels) is also a valid concept for this task, as it separates the two classes, despite being much more difficult to detect visually. The class of an image can therefore be correctly estimated from its orientation or from its bias. Let us first consider the linear classifier defined by

$$\begin{aligned} f_{\text {lin}} (x) = \frac{1}{\sqrt{d}} \mathbf {1}^T x - 1, \end{aligned}$$
(5)

where \(\mathbf {1}\) is the column vector of size d whose entries are all equal to 1, and x is the vectorized image. This classifier exploits the difference of bias between the two classes and achieves a perfect classification accuracy for all \(a > 0\). Indeed, a simple computation gives \(f_{\text {lin}} (x) = \sqrt{d} a\) (resp. \(f_{\text {lin}} (x) = -\sqrt{d} a\)) for class 1 (resp. class −1) images. Therefore, the risk of \(f_{\text {lin}}\) is \(R(f_{\text {lin}}) = 0\). It is important to note that \(f_{\text {lin}}\) only achieves zero risk because it captures the bias, but fails to distinguish between the images based on the orientation of the line. Indeed, when \(a = 0\), the datapoints are not linearly separable.Footnote 4 Despite its perfect accuracy for any \(a > 0\), \(f_{\text {lin}}\) is not robust to small adversarial perturbations when a is small, as a minor perturbation of the bias a switches the estimated label. Indeed, a simple computation gives \(\rho _{\text {adv}} (f_{\text {lin}}) = \sqrt{d} a\); therefore, the adversarial robustness of \(f_{\text {lin}}\) can be made arbitrarily small by choosing a to be small enough. More than that, among all linear classifiers that satisfy \(R(f) = 0\), \(f_{\text {lin}}\) is the one that maximizes \(\rho _{\text {adv}} (f)\) (as we show later in Sect. 5). Therefore, all zero-risk linear classifiers are non-robust to adversarial perturbations, for this classification task.
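
To make the running example concrete, the following minimal NumPy sketch (ours, not code from the paper) generates the \(N = 2\sqrt{d}\) images and checks the claims above: \(f_{\text {lin}}\) takes the values \(\pm \sqrt{d} a\), achieves zero risk, and has \(\rho _{\text {adv}} (f_{\text {lin}}) = \sqrt{d} a\), using the fact, derived in Sect. 5, that the minimal perturbation of a linear classifier equals its distance to the separating hyperplane.

```python
import numpy as np

def line_images(d, a):
    """The N = 2*sqrt(d) images of the running example: class +1 contains one
    vertical line on a background of value a, class -1 one horizontal line on
    a background of value -a (line pixels at 1+a, resp. 1-a)."""
    k = int(round(np.sqrt(d)))
    X, y = [], []
    for j in range(k):
        v = a * np.ones((k, k)); v[:, j] = 1 + a        # vertical line   -> class +1
        h = -a * np.ones((k, k)); h[j, :] = 1 - a       # horizontal line -> class -1
        X += [v.ravel(), h.ravel()]; y += [1, -1]
    return np.array(X), np.array(y)

d = 25
a = 0.1 / np.sqrt(d)
X, y = line_images(d, a)

w = np.ones(d) / np.sqrt(d)                             # f_lin(x) = w^T x - 1
vals = X @ w - 1
print(np.allclose(np.abs(vals), np.sqrt(d) * a))        # f_lin(x) = +/- sqrt(d)*a
print(np.all(np.sign(vals) == y))                       # zero risk
# Minimal perturbation of a linear classifier = distance to its hyperplane (Sect. 5),
# so rho_adv(f_lin) is the average of |f_lin(x)| / ||w||_2 = sqrt(d)*a = 0.1 here.
print(np.mean(np.abs(vals)) / np.linalg.norm(w))
```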

Unlike linear classifiers, a more flexible classifier that correctly captures the orientation of the lines in the images will be robust to adversarial perturbations, unless the perturbation significantly alters the image and modifies the direction of the line. To illustrate this point, we compare in Fig. 3 the adversarial robustness of \(f_{\text {lin}}\) to that of a second order polynomial classifier \(f_{\text {quad}}\) that achieves zero risk, for \(d = 4\).Footnote 5 While a hardly perceptible change of the image is sufficient to switch the estimated label for the linear classifier, the minimal perturbation for \(f_{\text {quad}}\) is one that substantially modifies the direction of the line.

Fig. 2 a–e: Class 1 images. f–j: Class −1 images

Fig. 3 Robustness to adversarial noise of linear and quadratic classifiers. a Original image (\(d = 4\), and \(a = 0.1/\sqrt{d}\)), b, c minimally perturbed image that switches the estimated label of b \(f_{\text {lin}}\), c \(f_{\text {quad}}\). Note that the difference between b and a is hardly perceptible, which demonstrates that \(f_{\text {lin}}\) is not robust to adversarial noise. On the other hand, images c and a are clearly different, which indicates that \(f_{\text {quad}}\) is more robust to adversarial noise. a Original, b \(f_\mathrm{lin}\), c \(f_{\mathrm{quad}}\)

The above example highlights several important facts, which are summarized as follows:

  • Risk and adversarial robustness are two distinct properties of a classifier. While \(R(f_{\text {lin}}) = 0\), \(f_{\text {lin}}\) is definitely not robust to small adversarial perturbations.Footnote 6 This is due to the fact that \(f_{\text {lin}}\) only captures the bias in the images and ignores the orientation of the line.

  • To capture the orientation (i.e., the most visually meaningful concept), one has to use a classifier that is flexible enough for the task. Unlike the class of linear classifiers, the class of polynomial classifiers of degree 2 correctly captures the line orientation, for \(d = 4\).

  • The robustness to adversarial perturbations provides a quantitative measure of the strength of a concept. Since \(\rho _{\text {adv}}(f_{\text {lin}}) \ll \rho _{\text {adv}}(f_{\text {quad}})\), one can confidently say that the concept captured by \(f_{\text {quad}}\) is stronger than that of \(f_{\text {lin}}\), in the sense that the essence of the classification task is captured by \(f_{\text {quad}}\), but not by \(f_{\text {lin}}\) (while they are equal in terms of misclassification rate). In general classification problems, the quantity \(\rho _{\text {adv}} (f)\) provides a natural way to evaluate and compare the learned concept; larger values of \(\rho _{\text {adv}} (f)\) indicate that stronger concepts are learned, for comparable values of the risk.

As illustrated in the above toy example, the robustness to adversarial perturbations is key to assess the strength of a concept. In real-world classification tasks, weak concepts correspond to partial information about the classification task (which are possibly sufficient to achieve a good accuracy), while strong concepts capture the essence of the classification task.

In the next sections, our goal is to quantify how large the robustness to adversarial perturbations can be, for fixed classification families (e.g., the family of linear classifiers). To do so, we establish upper bounds on the adversarial robustness \(\rho _{\text {adv}} (f)\) in terms of the classifier risk R(f) for all classifiers in the family. These learning-independent limits show that, for many classification tasks of interest, it is not possible to achieve both a large robustness and a small risk, independently of the training algorithm used to choose f.

4 Upper limit on the adversarial robustness

We now introduce our theoretical framework for analyzing the robustness to adversarial perturbations. We first present a key assumption on the classifier f for the analysis of adversarial robustness.

Assumption (A). There exist \(\tau > 0\) and \(0 < \gamma \le 1\) such that, for all \(x \in \mathcal {B}\),

$$\begin{aligned} \begin{aligned} \text {dist} (x, S_{-})&\le \tau \max (0, f(x))^\gamma , \\ \text {dist} (x, S_{+})&\le \tau \max (0, -f(x))^\gamma , \end{aligned} \end{aligned}$$
(6)

where \({{\mathrm{dist}}}(x, S) = \min _{y} \{ \Vert x - y \Vert _2: y \in S \}\) and \(S_+\) (resp. \(S_{-}\)) is the set of points x such that \(f(x) \ge 0\) (resp. \(f(x) \le 0\)):

$$\begin{aligned} S_+&= \{ x: f(x) \ge 0\}, \\ S_-&= \{ x: f(x) \le 0 \}. \end{aligned}$$

In words, the assumption (A) states that for any datapoint x, the residual \(\max (0, f(x))\) (resp. \(\max (0, -f(x))\)) can be used to bound the distance from x to a datapoint y classified −1 (resp. 1), or that is exactly on the decision boundary of the classifier (i.e., \(f(y) = 0\)).

Bounds of the form Eq. (6) have been established for various classes of functions since the early work of Łojasiewicz (1961) in algebraic geometry, and have found applications in areas such as mathematical optimization (Pang 1997; Lewis and Pang 1998). For example, Łojasiewicz (1961) and later Luo and Pang (1994) have shown that, quite remarkably, assumption (A) holds for the general class of analytic functions.Footnote 7 In Ng and Zheng (2003), (A) is shown to hold with \(\gamma =1\) for piecewise linear functions. In Luo and Luo (1994), error bounds on polynomial systems are studied. Proving inequality (6) with explicit constants \(\tau \) and \(\gamma \) for different classes of functions is still an active area of research (Li et al. 2014). In Sects. 5 and 6, we provide examples of function classes for which (A) holds, together with explicit formulas for the parameters \(\tau \) and \(\gamma \).

The following result establishes a general upper bound on the robustness to adversarial perturbations:

Lemma 1

Let f be an arbitrary classifier that satisfies (A) with parameters \((\tau , \gamma )\). Then,

$$\begin{aligned} \rho _{\text {adv}} (f) \le 4^{1-\gamma } \tau \left( p_1 \mathbb {E}_{\mu _1} (f(x)) - p_{-1} \mathbb {E}_{\mu _{-1}} (f(x)) + 2 \Vert f \Vert _{\infty } R(f) \right) ^{\gamma }. \end{aligned}$$

The proof can be found in “Appendix A.1”. The above result provides an upper bound on the adversarial robustness that depends on the risk of the classifier, as well as on a weighted difference between the expectations of the classifier values computed on the distributions \(\mu _1\) and \(\mu _{-1}\). This result is general, as we only assume that f satisfies assumption (A). In the next two sections, we apply Lemma 1 to two classes of classifiers, and derive interpretable upper bounds in terms of a distinguishability measure (that depends only on the dataset) which quantifies the difficulty of a classification task. Studying the general result in Lemma 1 through two practical classes of classifiers shows the implications of such a fundamental limit on the adversarial robustness, and illustrates the methodology for deriving class-specific and practical upper bounds on adversarial robustness from the general upper bound.
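
For concreteness, the bound of Lemma 1 can be evaluated numerically once \(\tau \), \(\gamma \), \(\Vert f \Vert _{\infty }\) and the risk are known. The sketch below is our own illustration (not code from the paper): it replaces the expectations with empirical averages over samples of each class, and clips the inner term at zero to avoid taking a fractional power of a negative number, which is an assumption of this sketch.

```python
import numpy as np

def lemma1_bound(tau, gamma, f_on_class1, f_on_class_minus1, f_inf, risk, p1=0.5):
    """Empirical evaluation of the right-hand side of Lemma 1.

    f_on_class1, f_on_class_minus1: arrays of values f(x) on samples drawn from
    mu_1 and mu_{-1}, used as surrogates for the expectations in the bound."""
    p_m1 = 1.0 - p1
    inner = (p1 * np.mean(f_on_class1)
             - p_m1 * np.mean(f_on_class_minus1)
             + 2.0 * f_inf * risk)
    return 4.0 ** (1.0 - gamma) * tau * max(inner, 0.0) ** gamma
```

Sects. 5 and 6 instantiate the triplet \((\tau , \gamma , \Vert f \Vert _{\infty })\) for linear and quadratic classifiers, which is what turns this generic bound into the interpretable bounds of Theorems 1 and 3.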

5 Robustness of linear classifiers to adversarial and random perturbations

The goal of this section is twofold: first, we specialize Lemma 1 to the class of linear functions, and derive interpretable upper bounds on the robustness of classifiers to adversarial perturbations (Sect. 5.1). Then, we derive a formal relation between the robustness of linear classifiers to adversarial perturbations and their robustness to random uniform noise (Sect. 5.2).

5.1 Adversarial perturbations

We define the classification function \(f(x) = w^T x + b\). Note that any linear classifier for which \(|b| > M \Vert w \Vert _2\) is a trivial classifier that assigns the same label to all points, where we recall that M is defined such that \(\mathbb {P}_{\mu } (\Vert x \Vert _2 \le M) = 1\). We therefore assume that \(|b| \le M \Vert w \Vert _2\).

We first show that the family of linear classifiers satisfies assumption (A), with explicit parameters \(\tau \) and \(\gamma \).

Lemma 2

Assumption (A) holds for linear classifiers \(f(x) = w^T x + b\) with \(\tau = 1/\Vert w \Vert _2\) and \(\gamma = 1\).

Proof

Let x be such that \(f(x) \ge 0\), and the goal is to prove that \(\text {dist}(x, S_{-}) \le \tau f(x)^{\gamma }\) (the other inequality can be handled in a similar way). We have \(f(x) = w^T x + b\). Observe that \(\text {dist}(x, S_-) = \min _{z} \{ \Vert x - z \Vert _2: w^T z + b \le 0 \}\), which corresponds to the distance between x and its projection onto the affine plane \(\{z: w^T z + b = 0\}\). Hence, \(\text {dist}(x, S_-) = f(x) / \Vert w \Vert _2 \implies \tau = 1/\Vert w\Vert _2, \gamma =1\). \(\square \)
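
A quick numerical check of this proof (our own sketch): the projection of x onto the hyperplane \(\{z: w^T z + b = 0\}\) is \(x - f(x) w / \Vert w \Vert _2^2\), which lies exactly on the decision boundary and is at distance \(|f(x)|/\Vert w \Vert _2\) from x.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.standard_normal(10), 0.7
x = rng.standard_normal(10)
f = lambda z: z @ w + b

# Projection onto the hyperplane {z : w^T z + b = 0}: r* = -f(x) w / ||w||_2^2.
r_star = -f(x) * w / np.linalg.norm(w) ** 2
print(np.isclose(f(x + r_star), 0.0))                                     # on the boundary
print(np.isclose(np.linalg.norm(r_star), abs(f(x)) / np.linalg.norm(w)))  # = |f(x)| / ||w||_2
```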

Using Lemma 1, we now derive an interpretable upper bound on the robustness to adversarial perturbations. In particular, the following theorem bounds \(\rho _{\text {adv}} (f)\) from above in terms of the first moments of the distributions \(\mu _1\) and \(\mu _{-1}\), and the classifier’s risk:

Theorem 1

Let \(f(x) = w^T x + b\) such that \(|b| \le M \Vert w \Vert _2\). Then,

$$\begin{aligned} \rho _{\text {adv}} (f) \le \Vert p_1 \mathbb {E}_{\mu _1} (x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x) \Vert _2 + M (|p_1 - p_{-1}| + 4 R(f)). \end{aligned}$$
(7)

In the balanced setting where \(p_{1} = p_{-1} = 1/2\), and if the intercept \(b = 0\), the following inequality holds:

$$\begin{aligned} \rho _{\text {adv}} (f) \le \frac{1}{2} \Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2 + 2 M R(f). \end{aligned}$$
(8)

Proof

Using Lemma 1 with \(\tau = 1/\Vert w\Vert _2\) and \(\gamma = 1\), we have

$$\begin{aligned} \rho _{\text {adv}} (f) \le \frac{1}{\Vert w \Vert _2} \left( w^T \left( p_1 \mathbb {E}_{\mu _1} (x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x) \right) + b (p_1 - p_{-1}) + 2 \Vert f \Vert _{\infty } R(f) \right) \end{aligned}$$
(9)

Observe that

  i. \(w^T \left( p_1 \mathbb {E}_{\mu _1} (x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x) \right) \le \Vert w \Vert _2 \Vert p_1 \mathbb {E}_{\mu _1} (x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x) \Vert _2\), using the Cauchy-Schwarz inequality;

  ii. \(b (p_1 - p_{-1}) \le M \Vert w \Vert _2 |p_{1} - p_{-1}|\), using the assumption \(|b| \le M \Vert w\Vert _2\);

  iii. \(\Vert f \Vert _{\infty } = \max _{x: \Vert x \Vert _2 \le M} \{ |w^T x + b| \} \le 2 M \Vert w \Vert _2\).

By plugging the three inequalities in Eq. (9), we obtain the desired result in Eq. (7).

When \(p_1 = p_{-1} = 1/2\) and the intercept \(b=0\), inequality (iii) can be tightened to \(\Vert f \Vert _{\infty } \le M \Vert w \Vert _2\), which directly leads to the stated result Eq. (8). \(\square \)

Our upper bound on \(\rho _{\text {adv}} (f)\) depends on the difference of means \(\Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2\), which measures the distinguishability between the classes. Note that this term is classifier-independent, and is only a property of the classification task. The only dependence on f in the upper bound is through the risk R(f). Thus, in classification tasks where the means of the two distributions are close (i.e., \(\Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2\) is small), any linear classifier with small risk will necessarily have a small robustness to adversarial perturbations. Note that the upper bound logically increases with the risk, as there clearly exist robust linear classifiers that achieve high risk (e.g., constant classifier). Figure 4a pictorially represents the \(\rho _{\text {adv}}\) versus R diagram as predicted by Theorem 1. Each linear classifier is represented by a point on the \(\rho _{\text {adv}}\)R tradeoff diagram, and our result shows the existence of a region that linear classifiers cannot attain.

Quite importantly, in many interesting classification problems, the quantity \(\Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2\) is small due to large intra-class variability (e.g., due to complex intra-class geometric transformations in computer vision applications). Therefore, even if a linear classifier can achieve a good classification performance on such a task, it will not be robust to small adversarial perturbations. In simple tasks involving distributions with significantly different averages, it is likely that there exists a linear classifier that correctly separates the classes and has a large robustness to adversarial perturbations.

Fig. 4 Adversarial robustness \(\rho _{\text {adv}}\) versus risk diagram for linear classifiers. Each point in the plane represents a linear classifier f. a Illustrative diagram, with the non-achievable zone (Theorem 1). b The exact \(\rho _{\text {adv}}\) versus risk achievable curve, and our upper bound estimate on the running example (details in Sect. 5.3)

5.2 Random uniform noise

We now examine the robustness of linear classifiers to random uniform noise. The following theorem compares the robustness of linear classifiers to random uniform noise with their robustness to adversarial perturbations.

Theorem 2

Let \(f(x) = w^T x + b\). For any \(\epsilon \in [0, 1/12)\), we have the following bounds on \(\rho _{\text {unif}, \epsilon } (f)\):

$$\begin{aligned} \rho _{\text {unif}, \epsilon } (f)&\ge \max \left( C_1(\epsilon ) \sqrt{d}, 1\right) \rho _{\text {adv}} (f), \end{aligned}$$
(10)
$$\begin{aligned} \rho _{\text {unif}, \epsilon } (f)&\le \widetilde{C_2}(\epsilon ,d) \rho _{\text {adv}} (f) \le C_2(\epsilon ) \sqrt{d} \rho _{\text {adv}} (f) , \end{aligned}$$
(11)

with \(C_1(\epsilon ) = (2 \ln (2/\epsilon ))^{-1/2}\), \(\widetilde{C_2}(\epsilon ,d) = (1 - (12 \epsilon )^{1/d})^{-1/2}\) and \(C_2(\epsilon ) = (1 - 12 \epsilon )^{-1/2}\).

The proof can be found in “Appendix A.2”. In words, \(\rho _{\text {unif}, \epsilon } (f)\) behaves as \(\sqrt{d} \rho _{\text {adv}} (f)\) for linear classifiers (for constant \(\epsilon \)). Linear classifiers are therefore more robust to random noise than adversarial perturbations, by a factor of \(\sqrt{d}\). In typical high dimensional classification problems, this shows that a linear classifier can be robust to random noise even if \(\Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2\) is small. Note moreover that our result is tight for \(\epsilon = 0\), as we get \(\rho _{\text {unif}, 0} (f)= \rho _{\text {adv}} (f)\).

Our results can be put in perspective with the empirical results of Szegedy et al. (2014), that showed a large gap between the two notions of robustness on neural networks. Our analysis provides a confirmation of this high dimensional phenomenon on linear classifiers.

5.3 Illustration of the results on the running example

We now illustrate our theoretical results on the example of Sect. 3. In this case, we have \(\Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2 = 2 \sqrt{d} a\). By using Theorem 1, any zero-risk linear classifier satisfies \(\rho _{\text {adv}} (f) \le \sqrt{d} a\). As we choose \(a \ll 1/\sqrt{d}\), accurate linear classifiers are therefore not robust to adversarial perturbations for this task. We note that \(f_{\text {lin}}\) [defined in Eq. (5)] achieves the upper bound and is therefore the most robust accurate linear classifier one can get, as it can easily be checked that \(\rho _{\text {adv}} (f_{\text {lin}}) = \sqrt{d} a\). In Fig. 4b the exact \(\rho _{\text {adv}}\) versus R curve is compared to our theoretical upper bound,Footnote 8 for \(d = 25\), \(N = 10\) and a bias \(a = 0.1/\sqrt{d}\). Besides the zero-risk case where our upper bound is tight, the upper bound is reasonably close to the exact curve for other values of the risk (despite not being tight).
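
The computation behind this statement can be reproduced in a few lines (a sketch of ours, using the fact that each pixel lies on the line in exactly one of the \(\sqrt{d}\) images of each class, so the class means are available in closed form):

```python
import numpy as np

d = 25
a = 0.1 / np.sqrt(d)
# Pixel-wise class means of the running example.
mean_p1 = (a + 1 / np.sqrt(d)) * np.ones(d)                 # class +1
mean_m1 = (-a + 1 / np.sqrt(d)) * np.ones(d)                # class -1
bound_zero_risk = 0.5 * np.linalg.norm(mean_p1 - mean_m1)   # Eq. (8) with R(f) = 0
print(bound_zero_risk, np.sqrt(d) * a)                      # both equal 0.1: the bound is attained by f_lin
```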

Fig. 5 Adversarial robustness and robustness to random uniform noise of \(f_{\text {lin}}\) versus the dimension d. We used \(\epsilon = 0.01\), and \(a = 0.1/\sqrt{d}\). The lower bound is given in Eq. (10), and the upper bound is the first inequality in Eq. (11)

We now focus on the robustness to uniform random noise of \(f_{\text {lin}}\). For various values of d, we compute the upper and lower bounds on the robustness to random uniform noise (Theorem 2) of \(f_{\text {lin}}\), where we fix \(\epsilon \) to 0.01. In addition, we compute a simple empirical estimate \(\widehat{\rho }_{\text {unif}, \epsilon }\) of the robustness to random uniform noise of \(f_{\text {lin}}\) (see Sect. 7 for details on the computation of this estimate). The results are illustrated in Fig. 5. While the adversarial noise robustness is constant with the dimension (equal to 0.1, as \(\rho _{\text {adv}} (f_{\text {lin}}) = \sqrt{d} a\) and \(a = 0.1/\sqrt{d}\)), the robustness to random uniform noise increases with d. For example, for \(d = 2500\), the value of \(\rho _{\text {unif}, \epsilon }\) is at least 15 times larger than the adversarial robustness \(\rho _{\text {adv}}\). In high dimensions, a linear classifier is therefore much more robust to random uniform noise than adversarial noise.
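
The factors in Theorem 2 are explicit, so the gap reported above can be checked directly (a small sketch of ours, for \(\epsilon = 0.01\) and \(\rho _{\text {adv}} (f_{\text {lin}}) = \sqrt{d} a = 0.1\)):

```python
import numpy as np

eps = 0.01
rho_adv = 0.1                                      # = sqrt(d) * a, with a = 0.1/sqrt(d)
C1 = (2 * np.log(2 / eps)) ** -0.5                 # constant of Eq. (10)

for d in [25, 100, 400, 2500]:
    lower = max(C1 * np.sqrt(d), 1.0) * rho_adv                   # Eq. (10)
    C2_tilde = (1 - (12 * eps) ** (1.0 / d)) ** -0.5
    upper = C2_tilde * rho_adv                                    # first inequality of Eq. (11)
    print(d, round(lower, 2), round(upper, 2))
# For d = 2500 the lower bound is about 1.5, i.e. roughly 15 times rho_adv = 0.1,
# which matches the gap visible in Fig. 5.
```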

6 Adversarial robustness of quadratic classifiers

In this section, we derive specialized upper bounds on the robustness to adversarial perturbations of quadratic classifiers, using Lemma 1.

6.1 Analysis of adversarial perturbations

We study the robustness to adversarial perturbations of quadratic classifiers of the form \(f(x) = x^T A x\), where A is a symmetric matrix. Besides the practical use of quadratic classifiers in some applications (Goldberg and Elhadad 2008; Chang et al. 2010), they represent a natural extension of linear classifiers. The study of linear versus quadratic classifiers provides insights into how adversarial robustness depends on the family of considered classifiers. Similarly to the linear setting, we exclude the case where f is a trivial classifier that assigns a constant label to all datapoints. That is, we assume that A satisfies

$$\begin{aligned} \lambda _{\min } (A) < 0, \quad \lambda _{\max } (A) > 0, \end{aligned}$$
(12)

where \(\lambda _{\min } (A)\) and \(\lambda _{\max } (A)\) are the smallest and largest eigenvalues of A. We moreover impose that the eigenvalues of A satisfy

$$\begin{aligned} \max \left( \left| \frac{\lambda _{\min } (A)}{\lambda _{\max } (A)} \right| , \left| \frac{\lambda _{\max } (A)}{\lambda _{\min } (A)} \right| \right) \le K, \end{aligned}$$
(13)

for some constant value \(K \ge 1\). This assumption imposes an approximate symmetry around 0 of the extremal eigenvalues of A, thereby disallowing a large bias towards either of the two classes.

We first show that the assumption (A) is satisfied for quadratic classifiers, and derive explicit formulas for \(\tau \) and \(\gamma \).

Lemma 3

Assumption (A) holds for the class of quadratic classifiers \(f(x) = x^T A x\) where \(\lambda _{\min } (A) < 0\) and \(\lambda _{\max } (A) > 0\), with \(\tau = \max (|\lambda _{\min } (A)|^{-1/2}, |\lambda _{\max } (A)|^{-1/2})\) and \(\gamma = 1/2\).

Proof

Let x be such that \(f(x) \ge 0\), and the goal is to prove that \(\text {dist}(x, S_{-}) \le \tau f(x)^{\gamma }\) (the other inequality can be handled in a similar way). Assume without loss of generality that A is diagonal (this can be done using an appropriate change of basis). Let \(\nu = -\lambda _{\min } (A)\). We have \(f(x) = \sum _{i=1}^{d-1} \lambda _i x_i^2 - \nu x_d^2\). By setting \(r_i = 0\) for all \(i \in \{1, \ldots , d-1\}\) and \(r_d = \text {sign} (x_d) \sqrt{f(x)/\nu }\) (where \(\text {sign}(x) = 1\) if \(x \ge 0\) and −1 otherwise), we have

$$\begin{aligned} f(x+r)&= \sum _{i=1}^{d-1} \lambda _i x_i^2 - \nu (x_d + \text {sign} (x_d) \sqrt{f(x) / \nu } )^2 \\&= f(x) - 2 \nu x_d \text {sign} (x_d) \sqrt{f(x) / \nu } - f(x) \\&= - 2 \nu | x_d | \sqrt{f(x) / \nu } \le 0. \end{aligned}$$

Hence, \(\text {dist} (x, S_{-}) \le \Vert r \Vert _2 = \nu ^{-1/2} \sqrt{f(x)} \implies \tau = \nu ^{-1/2}, \gamma =1/2\). \(\square \)
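
The perturbation constructed in this proof can be checked numerically (a sketch of ours, with an arbitrary symmetric matrix that has eigenvalues of both signs):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, -2.0]])                       # lambda_max > 0 > lambda_min
lam, V = np.linalg.eigh(A)                        # eigenvalues in increasing order
x = np.array([1.5, 0.2])
f = lambda z: z @ A @ z                           # here f(x) > 0

# Perturbation from the proof: move only along the eigenvector of lambda_min,
# by sign(<v_min, x>) * sqrt(f(x)/nu), with nu = -lambda_min.
nu = -lam[0]
c0 = V[:, 0] @ x
r = V[:, 0] * np.sign(c0) * np.sqrt(f(x) / nu)
print(f(x + r) <= 1e-12)                          # the sign of f is switched (or f = 0)
print(np.linalg.norm(r) <= np.sqrt(f(x) / nu))    # dist(x, S_-) <= nu^{-1/2} f(x)^{1/2}
```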

The following result builds on Lemma 1 and bounds the adversarial robustness of quadratic classifiers as a function of the second order moments of the distribution and the risk.

Theorem 3

Let \(f(x) = x^T A x\), where A satisfies Eqs. (12) and (13). Then,

$$\begin{aligned} \rho _{\text {adv}} (f) \le 2 \sqrt{K \Vert p_1 C_1 - p_{-1} C_{-1} \Vert _{*} + 2 M^2 K R(f)}, \end{aligned}$$

where \(C_{\pm 1} = (\mathbb {E}_{\mu _{\pm 1}} (x_i x_j))_{1 \le i,j \le d}\) are the matrices of second order moments of the two classes, and \(\Vert \cdot \Vert _{*}\) denotes the nuclear norm, defined as the sum of the singular values of the matrix.

Proof

The class of classifiers under study satisfies assumption (A) with \(\tau = \max (|\lambda _{\min } (A)|^{-1/2}, |\lambda _{\max } (A)|^{-1/2})\), and \(\gamma = 1/2\) (see Lemma 3). By applying Lemma 1, we have

$$\begin{aligned} \rho _{\text {adv}} (f) \le 2 \tau \left( p_1 \mathbb {E}_{\mu _1} (x^T A x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x^T A x) + 2 \Vert f \Vert _{\infty } R(f) \right) ^{1/2}. \end{aligned}$$

We then use three inequalities:

  i. Note first that

    $$\begin{aligned} p_1 \mathbb {E}_{\mu _1} (x^T A x) - p_{-1} \mathbb {E}_{\mu _{-1}} (x^T A x)&= \sum _{i,j} a_{i,j} p_1 \mathbb {E}_{\mu _1} (x_i x_j) - \sum _{i,j} a_{i,j} p_{-1} \mathbb {E}_{\mu _{-1}} (x_i x_j) \\&= p_1 \text {Trace} (A^T C_1) - p_{-1} \text {Trace} (A^T C_{-1}) \\&= \text {Trace} (A^T (p_1 C_1 - p_{-1} C_{-1})) \\&= \langle A, p_1 C_1 - p_{-1} C_{-1} \rangle , \end{aligned}$$

    where we have used the canonical inner product for matrices \(\langle Y, Z \rangle = \text {Trace} (Y^T Z)\). Using Hölder’s inequality for matrices (Bhatia 2013), we have \(\langle A, p_1 C_1 - p_{-1} C_{-1} \rangle \le \Vert A \Vert \Vert p_1 C_1 - p_{-1} C_{-1} \Vert _{*}\), where \(\Vert \cdot \Vert \) and \(\Vert \cdot \Vert _{*}\) denote respectively the spectral and nuclear matrix norms.

  ii. \(|f(x)| = |x^T A x| \le \Vert A \Vert \Vert x \Vert _2^2 \le \Vert A \Vert M^2\).

  iii. \(\Vert A \Vert ^{1/2} \tau = \max (|\lambda _{\min }(A)|, |\lambda _{\max }(A)|)^{1/2} \max (|\lambda _{\min } (A)|^{-1/2}, |\lambda _{\max } (A)|^{-1/2}) \le \sqrt{K}\).

Applying these three inequalities, we obtain

$$\begin{aligned} \rho _{\text {adv}} (f)&\le 2 \Vert A \Vert ^{1/2} \tau \left( \Vert p_1 C_1 - p_{-1} C_{-1} \Vert _{*} + 2 M^2 R(f) \right) ^{1/2} \\&\le 2 \sqrt{K} \left( \Vert p_1 C_1 - p_{-1} C_{-1} \Vert _{*} + 2 M^2 R(f) \right) ^{1/2}. \end{aligned}$$

\(\square \)

In words, the upper bound on the adversarial robustness depends on a distinguishability measure, defined by \(\Vert C_1 - C_{-1} \Vert _{*}\), and the classifier’s risk. In difficult classification tasks, where \(\Vert C_1 - C_{-1} \Vert _{*}\) is small, all quadratic classifiers with low risk that satisfy our assumptions in Eqs. (12, 13) are non-robust to adversarial perturbations.

It should be noted that, while the distinguishability is measured with the distance between the means of the two distributions in the linear case, it is defined here as the difference between the second order moments matrices \(\Vert C_1 - C_{-1} \Vert _{*}\). Therefore, in classification tasks involving two distributions with close means, and different second order moments, any zero-risk linear classifier will not be robust to adversarial noise, while zero-risk and robust quadratic classifiers are a priori possible according to our upper bound in Theorem 3. This suggests that robustness to adversarial perturbations can be larger for more flexible classifiers, for comparable values of the risk.

Finally, it is important to emphasize that the above result does not show that any linear classifier is always less robust than any quadratic classifier, for a fixed problem. In contrast, we show that for a fixed problem, the upper bound on \(\rho _{\text {adv}} (f)\) obtained for the family of linear classifiers is usually much smaller than that of quadratic classifiers (for similar accuracy). This therefore suggests that, while for many problems of interest, it is not possible to find robust (and accurate) linear classifiers, we can find higher-order classifiers that achieve large robustness (and accuracy).

6.2 Illustration of the results on the running example

We now illustrate our results on the running example of Sect. 3, with \(d = 4\). In this case, a simple computation gives \(\Vert C_1 - C_{-1} \Vert _{*} = 2 + 8a \ge 2\). This term is significantly larger than the difference of means (equal to 4a), and there is therefore hope to have a quadratic classifier that is accurate and robust to small adversarial perturbations, according to Theorem 3. In fact, the following quadratic classifier

$$\begin{aligned} f_{\text {quad}} (x) = x_1 x_2 + x_3 x_4 - x_1 x_3 - x_2 x_4, \end{aligned}$$

outputs 1 for vertical images, and −1 for horizontal images (independently of the bias a). Therefore, \(f_{\text {quad}}\) achieves zero risk on this classification task, similarly to \(f_{\text {lin}}\). The two classifiers however have different robustness properties to adversarial perturbations. Using straightforward calculations, it can be shown that \(\rho _{\text {adv}} (f_{\text {quad}}) = 1/\sqrt{2}\), for any value of a (see “Appendix B” for more details). For small values of a, we therefore get \(\rho _{\text {adv}} (f_{\text {lin}}) \ll \rho _{\text {adv}} (f_{\text {quad}})\). This result is intuitive, as \(f_{\text {quad}}\) differentiates the images from their orientation, unlike \(f_{\text {lin}}\) that uses the bias to distinguish them. The minimal perturbation required to switch the estimated label of \(f_{\text {quad}}\) is therefore one that modifies the direction of the line, while a hardly perceptible perturbation that modifies the bias is enough to flip the label for \(f_{\text {lin}}\). This explains the result originally illustrated in Fig. 3.
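
These claims are easy to verify numerically (a sketch of ours; the sign convention of \(f_{\text {quad}}\) assumes that the \(2 \times 2\) images are vectorized column-wise, i.e., \(x = (p_{11}, p_{21}, p_{12}, p_{22})\)):

```python
import numpy as np

a = 0.1 / 2.0                                     # d = 4, a = 0.1/sqrt(d)
# The four images of the running example, vectorized column-wise.
verticals   = [np.array([1+a, 1+a,   a,   a]), np.array([  a,   a, 1+a, 1+a])]
horizontals = [np.array([1-a,  -a, 1-a,  -a]), np.array([ -a, 1-a,  -a, 1-a])]

f_quad = lambda x: x[0]*x[1] + x[2]*x[3] - x[0]*x[2] - x[1]*x[3]
print([f_quad(x) for x in verticals], [f_quad(x) for x in horizontals])  # approx. [1, 1] and [-1, -1]

# Distinguishability measures used in Sects. 5 and 6.
m_p1 = np.mean(verticals, axis=0); m_m1 = np.mean(horizontals, axis=0)
C_p1 = np.mean([np.outer(x, x) for x in verticals], axis=0)
C_m1 = np.mean([np.outer(x, x) for x in horizontals], axis=0)
print(np.linalg.norm(m_p1 - m_m1), 4 * a)                     # difference of means: 4a
print(np.linalg.norm(C_p1 - C_m1, ord="nuc"), 2 + 8 * a)      # nuclear norm: 2 + 8a

# Since f_quad(x) = (x_1 - x_4)(x_2 - x_3), the cheapest way to switch its sign on a
# vertical image is to drive one factor to zero, e.g. r = (-1/2, 0, 0, 1/2), of norm 1/sqrt(2).
x, r = verticals[0], np.array([-0.5, 0.0, 0.0, 0.5])
print(f_quad(x) * f_quad(x + r) <= 0, np.linalg.norm(r), 1 / np.sqrt(2))
```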

7 Experimental results

7.1 Setting

In this section, we illustrate our results on practical classification examples. Specifically, through experiments on real data, we seek to confirm the identified limit on the robustness of classifiers, and we show the large gap between adversarial and random robustness on real data. We also study more general classifiers to suggest that the trends obtained with our theoretical results are not limited to linear and quadratic classifiers.

Given a binary classifier f, and a datapoint x, we use an approach close to that of Szegedy et al. (2014) to approximate \(\varDelta _{\text {adv}} (x;f)\). Specifically, we perform a line search to find the maximum \(c > 0\) for which the minimizer of the following problem satisfies \(f(x) f(x+r) \le ~0\):

$$\begin{aligned} \min _{r} c \Vert r \Vert _2 + L(f(x+r) \text {sign} (f(x))), \end{aligned}$$

where we set \(L(x) = \max (0, x)\). The first term in the above optimization problem favors perturbation vectors with small norm, \(\Vert r \Vert _2\), while the second term favors perturbations that lead to a misclassification; the parameter c controls the tradeoff between these goals. The above problem (for c fixed) is solved with a subgradient procedure, and we denote by \(\widehat{\varDelta }_{\text {adv}} (x;f)\) the obtained solution.Footnote 9 The empirical robustness to adversarial perturbations is then defined by \(\widehat{\rho }_{\text {adv}} (f) = \frac{1}{m} \sum _{i=1}^m \widehat{\varDelta }_{\text {adv}} (x_i;f)\), where \(x_1, \ldots , x_m\) denote the training points. To evaluate the robustness of f, we compare \(\widehat{\rho }_{\text {adv}} (f)\) to the following quantity:

$$\begin{aligned} \kappa = \frac{1}{m} \sum _{i=1}^m \min _{j: y(x_j) \ne y(x_i)} \Vert x_i - x_j \Vert _2. \end{aligned}$$
(14)

It represents the average norm of the minimal perturbation required to “transform” a training point to a training point of the opposite class, and can be seen as a distance measure between the two classes. The quantity \(\kappa \) therefore provides a baseline for comparing the robustness to adversarial perturbations, and we say that f is not robust to adversarial perturbations when \(\widehat{\rho }_{\text {adv}} (f) \ll \kappa \). We also compare the adversarial robustness of the classifiers with their robustness to random uniform noise. We estimate \(\varDelta _{\text {unif}, \epsilon } (x;f)\) using a line search procedure that finds the largest \(\eta \) for which the condition

$$\begin{aligned} \frac{1}{J} \# \{1 \le j \le J: f(x+n_j) f(x) \le 0 \} \le \epsilon , \end{aligned}$$

is satisfied, where \(n_1, \ldots , n_J\) are iid samples from the sphere \(\eta \mathbb {S}\). By calling this estimate \(\widehat{\varDelta }_{\text {unif}, \epsilon } (x;f)\), the robustness of f to uniform random noise is the empirical average over all training points, i.e., \(\widehat{\rho }_{\text {unif}, \epsilon } (f) = \frac{1}{m} \sum _{i=1}^m \widehat{\varDelta }_{\text {unif}, \epsilon } (x_i;f)\). In the experiments, we set \(J = 500\), and \(\epsilon = 0.01\).Footnote 10
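
The two estimation procedures described above can be sketched as follows. This is our own simplified implementation, not the authors' code: it keeps, among all iterates of the penalized subgradient descent that switch the sign of f, the one of smallest norm, and it estimates \(\varDelta _{\text {unif}, \epsilon }\) by scanning a grid of radii; the grid, step sizes and trade-off values c are illustrative choices.

```python
import numpy as np

def delta_adv_hat(f, grad_f, x, cs=np.geomspace(1e-2, 10.0, 20), n_iter=300, lr=0.05):
    """Approximate Delta_adv(x; f): for each trade-off c, run subgradient descent on
    c*||r||_2 + max(0, sign(f(x)) * f(x+r)) and record the smallest-norm iterate
    that satisfies f(x) f(x+r) <= 0."""
    s = 1.0 if f(x) >= 0 else -1.0
    best = np.inf
    for c in cs:
        r = np.zeros_like(x)
        for _ in range(n_iter):
            g = c * r / max(np.linalg.norm(r), 1e-12)
            if s * f(x + r) > 0:                       # hinge term is active
                g = g + s * grad_f(x + r)
            r = r - lr * g
            if f(x) * f(x + r) <= 0:                   # sign flipped: candidate perturbation
                best = min(best, np.linalg.norm(r))
    return best

def delta_unif_hat(f, x, eps=0.01, J=500, etas=np.linspace(0.01, 2.0, 200), seed=0):
    """Approximate Delta_unif,eps(x; f): the largest radius eta such that at most a
    fraction eps of J points drawn uniformly on the sphere eta*S around x are
    misclassified (cf. Eq. (3))."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for eta in etas:
        n = rng.standard_normal((J, x.size))
        n = eta * n / np.linalg.norm(n, axis=1, keepdims=True)   # uniform on eta*S
        if np.mean(f(x + n) * f(x) <= 0) <= eps:
            best = eta
        else:
            break
    return best

# Sanity check on a linear classifier in d = 100, where Delta_adv(x; f) = |f(x)| / ||w||_2.
d = 100
w, b = np.ones(d), 3.0
f = lambda z: z @ w + b                  # works for a single point or a batch of rows
grad_f = lambda z: w
x = np.zeros(d)
print(delta_adv_hat(f, grad_f, x), abs(f(x)) / np.linalg.norm(w))   # both close to 0.3
print(delta_unif_hat(f, x))              # noticeably larger, as predicted by Theorem 2
```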

7.2 Binary classification using SVM

We perform experiments on several classifiers: linear SVM (denoted L-SVM), SVM with polynomial kernels of degree q (denoted poly-SVM (q)), and SVM with RBF kernel with a width parameter \(\sigma ^2\) (RBF-SVM(\(\sigma ^2\))). To train the classifiers, we use the efficient Liblinear (Fan et al. 2008) and LibSVM (Chang and Lin 2011) implementations, and we fix the regularization parameters using a cross-validation procedure.

We first consider a classification task on the MNIST handwritten digits dataset (LeCun et al. 1998). We consider a digit “4” versus digit “5” binary classification task, with 2000 and 1000 randomly chosen images for training and testing, respectively. In addition, a small random translation (of at most 3 pixels horizontally and vertically) is applied to all images, and the images are normalized to be of unit Euclidean norm. Table 2 reports the accuracy of the different classifiers, and their robustness to adversarial and random perturbations. Despite the fact that L-SVM performs fairly well on this classification task (both on training and testing), it is highly non robust to small adversarial perturbations. Indeed, \(\widehat{\rho }_{\text {adv}} (f)\) is one order of magnitude smaller than \(\kappa = 0.72\). Visually, this translates to an adversarial perturbation that is hardly perceptible (see Fig. 6 for illustrative examples). The instability of the linear classifier to adversarial perturbations is not surprising in the light of Theorem 1, as the distinguishability term \(\frac{1}{2} \Vert \mathbb {E}_{\mu _1} (x) - \mathbb {E}_{\mu _{-1}} (x) \Vert _2\) is small (see Table 4). In addition to improving the accuracy, the more flexible classifiers are also more robust to adversarial perturbations, as predicted by our theoretical analysis. That is, the third order classifier is slightly more robust than the second order one, and RBF-SVM with small width \(\sigma ^2 = 0.1\) is more robust than with \(\sigma ^2 = 1\). Note that \(\sigma \) controls the flexibility of the classifier in a similar way to the degree in the polynomial kernel. Interestingly, in this relatively easy classification task, RBF-SVM(0.1) achieves both a good performance, and a high robustness to adversarial perturbations. Concerning the robustness to random uniform noise, the results in Table 2 confirm the large gap between adversarial and random robustness for the linear classifier, as predicted by Theorem 2. Moreover, the results suggest that this gap is maintained for polynomial SVM. Figure 6 illustrates the robustness of the different classifiers on an example image.
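
The following sketch (ours, using scikit-learn, which wraps the Liblinear and LibSVM implementations cited above) reproduces the spirit of this setup; the exact preprocessing of the paper (random translations, cross-validated regularization parameters) is not reproduced here, and the mapping gamma = 1/(2σ²) between scikit-learn's RBF parameter and the width σ² is an assumption of this sketch.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.svm import LinearSVC, SVC

# Digit "4" versus digit "5", images normalized to unit Euclidean norm.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
mask = (y == "4") | (y == "5")
X, y = X[mask].astype(float), np.where(y[mask] == "4", 1, -1)
X = X / np.linalg.norm(X, axis=1, keepdims=True)
idx = np.random.default_rng(0).permutation(len(y))
train, test = idx[:2000], idx[2000:3000]

models = {
    "L-SVM": LinearSVC(C=1.0, max_iter=10000),
    "poly-SVM(2)": SVC(kernel="poly", degree=2, C=1.0),
    "poly-SVM(3)": SVC(kernel="poly", degree=3, C=1.0),
    "RBF-SVM(1)": SVC(kernel="rbf", gamma=1.0 / (2 * 1.0), C=1.0),
    "RBF-SVM(0.1)": SVC(kernel="rbf", gamma=1.0 / (2 * 0.1), C=1.0),
}
for name, clf in models.items():
    clf.fit(X[train], y[train])
    print(name, clf.score(X[train], y[train]), clf.score(X[test], y[test]))
    # clf.decision_function plays the role of f in the robustness estimates of Sect. 7.1.
```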

Table 2 Training and testing accuracy of different models, and robustness to adversarial noise for the MNIST task
Fig. 6 Original image a and minimally perturbed images (b)–(f) that switch the estimated label of linear (b), quadratic (c), cubic (d), RBF(1) (e), RBF(0.1) (f) classifiers. The image in g corresponds to the original image perturbed with a random uniform noise of norm \(\varDelta _{\text {unif}, \epsilon } (x; f)\), where f is the learned linear classifier. That is, the linear classifier gives the same label to a and g, with high probability. The norms of the perturbations are reported in each case. b \(\varDelta _{\mathrm{adv}}=0.08\), c \(\varDelta _{\mathrm{adv}}=0.19\), d \(\varDelta _{\mathrm{adv}}=0.21\), e \(\varDelta _{\mathrm{adv}}=0.15\), f \(\varDelta _{\mathrm{adv}}=0.41\), g \(\varDelta _{\mathrm{unif},\epsilon }=0.8\)

We now turn to a natural image classification task, with images taken from the CIFAR-10 database (Krizhevsky and Hinton 2009). The database contains 10 classes of \(32 \times 32\) RGB images. We restrict the dataset to the first two classes (“airplane” and “automobile”), and consider a subset of the original data, with 1000 images for training, and 1000 for testing. Moreover, all images are normalized to be of unit Euclidean norm. Compared to the first dataset, this task is more difficult, as the variability of the images is much larger than for digits. We report the results in Table 3. It can be seen that none of the classifiers is robust to adversarial perturbations in this experiment, as \(\widehat{\rho }_{\text {adv}} (f) \ll \kappa = 0.39\). Despite that, all classifiers (except L-SVM) achieve an accuracy around 85%, a training accuracy above 92%, and are robust to uniform random noise. Figure 7 illustrates the robustness to adversarial and random noise of the learned classifiers, on an example image of the dataset. Compared to the digits dataset, the distinguishability measures for this task are smaller (see Table 4). Our theoretical analysis therefore predicts a lower limit on the adversarial robustness of linear and quadratic classifiers for this task (even though the bound for quadratic classifiers is far from the robustness achieved by poly-SVM(2) in this example).

Table 3 Training and testing accuracy of different models, and robustness to adversarial noise for the CIFAR task
Fig. 7 Same as Fig. 6, but for the “airplane” versus “automobile” classification task. a Original image, b \(\varDelta _{\mathrm{adv}}=0.04\), c \(\varDelta _{\mathrm{adv}}=0.02\), d \(\varDelta _{\mathrm{adv}}=0.03\), e \(\varDelta _{\mathrm{adv}}=0.03\), f \(\varDelta _{\mathrm{adv}}=0.05\), g \(\varDelta _{\mathrm{unif},\epsilon }=0.8\)

Table 4 The parameter \(\kappa \), and distinguishability measures for the two classification tasks

The instability of all classifiers to adversarial perturbations on this task suggests that the essence of the classification task was not correctly captured by these classifiers, even if a fairly good test accuracy is reached. To reach better robustness, two possibilities exist: use a more flexible family of classifiers (as our theoretical results suggest that more flexible families of classifiers achieve better robustness), or use a better training algorithm for the tested nonlinear classifiers. The latter solution seems possible, as the theoretical limit for quadratic classifiers suggests that there is still room to improve the robustness of these classifiers.

7.3 Multiclass classification using CNN

Since our theoretical results suggest that more flexible classifiers achieve better robustness to adversarial perturbations in the binary case, we now explore empirically whether the same intuitions hold in scenarios that depart from the theory in two different ways: (i) we consider multiclass classification problems, and (ii) we consider convolutional neural network architectures. While classifiers’ flexibility is relatively well quantified for polynomial classifiers by the degree of the polynomials, this is not straightforward to do for neural network architectures. In this section, we examine the effect of breadth and depth on the robustness to adversarial perturbations of classifiers.

We perform experiments on the multiclass CIFAR-10 classification task, and use the recent method in Moosavi-Dezfooli et al. (2016) to compute adversarial examples in the multiclass case. We focus on baseline CNN classifiers, and learn architectures with 1, 2 and 3 hidden layers. Specifically, each hidden layer consists of a successive combination of convolutional, rectified linear unit (ReLU) and pooling operations. The convolutional layers consist of 5 \(\times \) 5 filters with 50 feature maps for each layer, and the pooling operations are done on a window of size \(3\times 3\) with a stride parameter of 2. We build the three architectures gradually, by successively stacking a new hidden layer on top of the previous architecture (kept fixed). The last hidden layer is then connected to a fully connected layer, and the softmax loss is used. All architectures are trained with stochastic gradient descent. To provide a fair comparison, all three classifiers are trained to a similar classification error (approximately 35%); to ensure similar accuracies, we perform an early stop of the training procedure when necessary. The empirical normalized robustness to adversarial perturbations of the three networks is compared in Fig. 8a.Footnote 11
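
A sketch of the three baseline architectures follows (ours, in PyTorch; the padding, learning rate and momentum are illustrative assumptions, and the training loop, data loading and early stopping used in the paper are omitted):

```python
import torch
import torch.nn as nn

def make_cnn(n_hidden_layers, n_maps=50, n_classes=10, in_channels=3, in_size=32):
    """Each hidden layer: 5x5 convolution (n_maps feature maps) -> ReLU ->
    3x3 max-pooling with stride 2; a single fully connected layer on top."""
    layers, c = [], in_channels
    for _ in range(n_hidden_layers):
        layers += [nn.Conv2d(c, n_maps, kernel_size=5, padding=2),
                   nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2)]
        c = n_maps
    features = nn.Sequential(*layers)
    with torch.no_grad():                         # infer the flattened feature size
        n_feat = features(torch.zeros(1, in_channels, in_size, in_size)).numel()
    return nn.Sequential(features, nn.Flatten(), nn.Linear(n_feat, n_classes))

# The three networks compared in Fig. 8a, trained with SGD and the softmax
# (cross-entropy) loss.
nets = {k: make_cnn(k) for k in (1, 2, 3)}
criterion = nn.CrossEntropyLoss()
optimizers = {k: torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
              for k, net in nets.items()}
```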

Fig. 8 Evolution of the normalized robustness of classifiers with respect to a the depth of the CNN for the CIFAR-10 task, and b the number of feature maps

We observe first that increasing the depth of the network leads to a significant increase in the robustness to adversarial perturbations, especially from 1 to 2 layers. The depth of a neural network has an important impact on the robustness of the classifier, just as the degree of a polynomial classifier is an important factor for its robustness. Going from 2 to 3 layers, however, seems to have only a marginal effect on the robustness. It should be noted that, despite the increase of the robustness with the depth, the normalized robustness computed for all classifiers is relatively small, which suggests that none of these classifiers is really robust to adversarial perturbations. Note also that the results in Fig. 8a showing an increase of the robustness with the depth are in line with recent results showing that depth provides robustness to adversarial geometric transformations (Fawzi and Frossard 2015). In Fig. 8b, we show the effect of the number of feature maps in the CNN (for a one layer CNN) on the estimated normalized robustness to adversarial perturbations. Unlike the effect of depth, we observe that the number of feature maps has barely any effect on the robustness to adversarial perturbations. Finally, a comparison of the normalized robustness measures of the very deep networks VGG-16 and VGG-19 (Simonyan and Zisserman 2014) on ImageNet shows that these two networks behave very similarly in terms of robustness (both achieve a normalized robustness of \(3 \cdot 10^{-3}\)). This experiment, along with the experiment in Fig. 8a, empirically suggests that adding layers on top of a shallow network helps in terms of adversarial robustness, but that once the depth of the network is sufficiently large, adding further layers only moderately changes the robustness.

8 Discussion and perspectives

In this paper, we provided a quantitative analysis of the robustness of classifiers to adversarial perturbations, and showed the existence of upper limits on the adversarial robustness of classifiers. We showed that, for the family of linear classifiers, the established limit is very small for most problems of interest. Hence, linear classifiers are usually not robust to adversarial noise (even though robustness to random noise might be achieved). Linear classifiers are, however, seldom used directly on the input/pixel space. Instead, features of the image (e.g., SIFT features (Lowe 2004) or features resulting from the first layers of a convolutional neural network) are first computed, and only then fed to a linear classifier. While our bounds (in Sect. 5) can be directly applied in the feature space, such results would be difficult to interpret as they do not translate easily to the input space. In fact, the feature mapping is usually non-bijective (and non-smooth), which implies that the robustness of the linear classifier might significantly differ from the robustness of the overall classification system. Besides, using the \(\ell _2\) metric in the feature space might not be appropriate for measuring the robustness of the system.

Towards the goal of studying more realistic classifiers, we studied the robustness of quadratic classifiers, and provided a general result that is (in theory) applicable to a large set of classification functions (Lemma 1). Our results for quadratic classifiers show that the limit on the robustness for the family of quadratic classifiers is usually larger than for linear classifiers, which gives hope that classifiers robust to adversarial perturbations exist. In fact, by using an appropriate training procedure, it might be possible to get closer to the theoretical bound. For general nonlinear classifiers (e.g., neural networks), designing training procedures that specifically take the robustness into account during learning is an important direction for future work. We also believe that applying our general upper bound in Lemma 1 to derive explicit upper bounds that are specific to, e.g., deep neural networks is an important direction for future work. To do so, it is important to derive explicitly the parameters \((\tau , \gamma )\) of assumption (A) for the class of functions under consideration. Even though this problem is still open, results from algebraic geometry suggest that establishing such bounds might be possible for general classes of functions (e.g., piecewise linear functions). In addition, our experimental results suggest that, unlike the breadth of the neural network, the depth plays a crucial role in the adversarial robustness. Identifying an upper bound on the adversarial robustness of deep neural networks in terms of the depth of the network would be a great step towards a better understanding of such systems.