robustness.attack_steps module¶
This module contains the abstract class AttackerStep as well as a few subclasses.
AttackerStep is a generic way to implement optimizers specifically for use with
robustness.attacker.AttackerModel. In general, unless you want to create a custom
optimization method, you do not need to import or edit this module and can just
consider it internal.
- class robustness.attack_steps.AttackerStep(orig_input, eps, step_size, use_grad=True)¶
  Bases: object
  Generic class for attacker steps, under perturbation constraints specified by an “origin input” and a perturbation magnitude. Must implement project, step, and random_perturb.
  Initialize the attacker step with a given perturbation magnitude.
  Parameters:
  - eps (float) – the perturbation magnitude
  - orig_input (ch.tensor) – the original input
- project(x)¶
  Given an input x, project it back into the feasible set S.
  Parameters: x (ch.tensor) – the input to project back into the feasible set.
  Returns: A ch.tensor that is the projection of x onto the feasible set, that is, \[\arg\min_{x' \in S} \|x' - x\|_2\]
- step(x, g)¶
  Given a gradient, make the appropriate step according to the perturbation constraint (e.g. dual norm maximization for \(\ell_p\) norms).
  Parameters: g (ch.tensor) – the raw gradient.
  Returns: The new input, a ch.tensor, for the next step.
- random_perturb(x)¶
  Given a starting input, take a random step within the feasible set.
- to_image(x)¶
  Given an input (which may be in an alternative parameterization), convert it to a valid image. This is implemented as the identity function by default, since most of the time we use the pixel parameterization, but for alternative parameterizations this function must be overridden.
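The interface above can be sketched in plain Python (using NumPy in place of torch, which these docs abbreviate as ch). This is an illustrative restatement of the documented contract, not the library's actual implementation:

```python
import numpy as np

class AttackerStep:
    """Illustrative sketch of the AttackerStep interface described above.

    Subclasses must implement project, step, and random_perturb; to_image
    defaults to the identity for the pixel parameterization.
    """
    def __init__(self, orig_input, eps, step_size, use_grad=True):
        self.orig_input = orig_input  # the original input (an image)
        self.eps = eps                # the perturbation magnitude
        self.step_size = step_size    # per-iteration step size
        self.use_grad = use_grad      # whether the step consumes gradients

    def project(self, x):
        """Project x back into the feasible set S."""
        raise NotImplementedError

    def step(self, x, g):
        """Take a step from x using the raw gradient g."""
        raise NotImplementedError

    def random_perturb(self, x):
        """Take a random step from x within the feasible set."""
        raise NotImplementedError

    def to_image(self, x):
        # Identity by default: the pixel parameterization is already an image.
        return x
```

Concrete subclasses (such as the LinfStep and L2Step below) then only need to supply the three constraint-specific methods.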
- class robustness.attack_steps.LinfStep(orig_input, eps, step_size, use_grad=True)¶
  Bases: robustness.attack_steps.AttackerStep
  Attack step for \(\ell_\infty\) threat model. Given \(x_0\) and \(\epsilon\), the constraint set is given by:
  \[S = \{x | \|x - x_0\|_\infty \leq \epsilon\}\]
  Initialize the attacker step with a given perturbation magnitude.
  Parameters:
  - eps (float) – the perturbation magnitude
  - orig_input (ch.tensor) – the original input
- project(x)¶
- step(x, g)¶
- random_perturb(x)¶
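For the \(\ell_\infty\) constraint set above, the standard realizations of these methods are signed-gradient ascent and coordinate-wise clipping. A minimal NumPy sketch (illustrative only; the library itself operates on ch.tensor):

```python
import numpy as np

class LinfStepSketch:
    """Sketch of an ell_inf attacker step: signed-gradient ascent with
    projection onto the eps-ball around the original input."""
    def __init__(self, orig_input, eps, step_size):
        self.orig_input = orig_input
        self.eps = eps
        self.step_size = step_size

    def project(self, x):
        # Clip the perturbation coordinate-wise to [-eps, eps], then keep
        # the result a valid image in [0, 1].
        diff = np.clip(x - self.orig_input, -self.eps, self.eps)
        return np.clip(self.orig_input + diff, 0.0, 1.0)

    def step(self, x, g):
        # Dual-norm maximization for ell_inf reduces to the gradient's sign.
        return x + np.sign(g) * self.step_size

    def random_perturb(self, x):
        # Uniform noise inside the eps-ball, then project to stay feasible.
        noise = np.random.uniform(-self.eps, self.eps, size=x.shape)
        return self.project(x + noise)
```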
- class robustness.attack_steps.L2Step(orig_input, eps, step_size, use_grad=True)¶
  Bases: robustness.attack_steps.AttackerStep
  Attack step for \(\ell_2\) threat model. Given \(x_0\) and \(\epsilon\), the constraint set is given by:
  \[S = \{x | \|x - x_0\|_2 \leq \epsilon\}\]
  Initialize the attacker step with a given perturbation magnitude.
  Parameters:
  - eps (float) – the perturbation magnitude
  - orig_input (ch.tensor) – the original input
- project(x)¶
- step(x, g)¶
- random_perturb(x)¶
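Under the \(\ell_2\) constraint, the dual-norm maximizer is the normalized gradient, and projection rescales the perturbation onto the \(\epsilon\)-ball. A hedged NumPy sketch of the two core operations (the library's tensor implementation may differ in details such as batching):

```python
import numpy as np

def l2_step(x, g, step_size):
    """Sketch of an ell_2 ascent step: move along the normalized gradient,
    which maximizes the inner product over the unit ell_2 ball."""
    g_norm = np.linalg.norm(g)
    return x + step_size * g / (g_norm + 1e-10)  # epsilon avoids div by zero

def l2_project(x, orig, eps):
    """Project x onto the ell_2 ball of radius eps around orig, then clip
    to the valid image range [0, 1]."""
    diff = x - orig
    norm = np.linalg.norm(diff)
    if norm > eps:
        diff = diff * (eps / norm)  # rescale onto the ball's surface
    return np.clip(orig + diff, 0.0, 1.0)
```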
- class robustness.attack_steps.UnconstrainedStep(orig_input, eps, step_size, use_grad=True)¶
  Bases: robustness.attack_steps.AttackerStep
  Unconstrained threat model, \(S = [0, 1]^n\).
  Initialize the attacker step with a given perturbation magnitude.
  Parameters:
  - eps (float) – the perturbation magnitude
  - orig_input (ch.tensor) – the original input
- project(x)¶
- step(x, g)¶
- random_perturb(x)¶
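With \(S = [0, 1]^n\) there is no norm ball around the original input, so projection reduces to clipping into the valid image range, and the step is plain gradient ascent. A minimal sketch (illustrative, not the library's code):

```python
import numpy as np

def unconstrained_project(x):
    """With S = [0, 1]^n, projection is just clipping to the image range."""
    return np.clip(x, 0.0, 1.0)

def unconstrained_step(x, g, step_size):
    """No norm constraint: take a plain gradient-ascent step."""
    return x + step_size * g
```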
- class robustness.attack_steps.FourierStep(orig_input, eps, step_size, use_grad=True)¶
  Bases: robustness.attack_steps.AttackerStep
  Step under the Fourier (decorrelated) parameterization of an image.
  See https://distill.pub/2017/feature-visualization/#preconditioning for more information.
  Initialize the attacker step with a given perturbation magnitude.
  Parameters:
  - eps (float) – the perturbation magnitude
  - orig_input (ch.tensor) – the original input
- project(x)¶
- step(x, g)¶
- random_perturb(x)¶
- to_image(x)¶
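This is the one subclass where to_image is not the identity: the optimization variable lives in frequency space and must be mapped back to pixel space. The sketch below assumes a real-FFT parameterization with a sigmoid squashing function; the library's actual transform (frequency scaling, channel handling) may differ, so treat this purely as an illustration of the idea:

```python
import numpy as np

def fourier_to_image(z):
    """Sketch of to_image for a Fourier-parameterized input: apply an
    inverse real FFT to move from frequency space to pixel space, then
    squash into [0, 1] so the result is a valid image.

    The specific transform here (np.fft.irfft2 + sigmoid) is an assumed
    illustration, not the library's exact decorrelated parameterization.
    """
    spatial = np.fft.irfft2(z, norm="ortho")
    return 1.0 / (1.0 + np.exp(-spatial))  # sigmoid squash into (0, 1)
```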
- class robustness.attack_steps.RandomStep(*args, **kwargs)¶
  Bases: robustness.attack_steps.AttackerStep
  Step for Randomized Smoothing.
- project(x)¶
- step(x, g)¶
- random_perturb(x)¶
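Randomized smoothing evaluates a classifier under additive Gaussian noise rather than following an adversarial gradient, so the characteristic operation of such a step is drawing noise around the input. A hedged sketch (the sigma noise-scale parameter is an assumption for illustration, not taken from the signature above):

```python
import numpy as np

def gaussian_random_perturb(x, sigma, rng=None):
    """Sketch of the randomized-smoothing perturbation: draw i.i.d.
    Gaussian noise of scale sigma around the input, with no projection
    back toward the original point."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)
```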