robustness.loaders module

robustness.loaders.make_loaders(workers, batch_size, transforms, data_path, data_aug=True, custom_class=None, dataset='', label_mapping=None, subset=None, subset_type='rand', subset_start=0, val_batch_size=None, only_val=False, shuffle_train=True, shuffle_val=True, seed=1)

INTERNAL FUNCTION

This is an internal function that makes a loader for any dataset. You probably want to call dataset.make_loaders for a specific dataset, which only requires workers and batch_size. For example:

>>> cifar_dataset = CIFAR10('/path/to/cifar')
>>> train_loader, val_loader = cifar_dataset.make_loaders(workers=10, batch_size=128)
>>> # train_loader and val_loader are just PyTorch dataloaders
class robustness.loaders.PerEpochLoader(loader, func, do_tqdm=True)

Bases: object

A blend between TransformedLoader and LambdaLoader: stores the whole loader in memory, but recomputes it from scratch every epoch, instead of just once at initialization.

compute_loader()
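The recompute-each-epoch idea can be illustrated with a minimal stand-in (this is a sketch of the documented behavior, not the library's implementation; plain lists stand in for PyTorch batches):

```python
class PerEpochLoaderSketch:
    """Stand-in mimicking PerEpochLoader: holds the transformed
    batches in memory, but recomputes them after every full pass
    (epoch) instead of only once at initialization."""

    def __init__(self, loader, func):
        self.loader = loader
        self.func = func
        self.epochs = 0  # counts calls to compute_loader
        self.data = self.compute_loader()

    def compute_loader(self):
        # Materialize the whole transformed loader in memory.
        self.epochs += 1
        return [self.func(images, labels) for images, labels in self.loader]

    def __iter__(self):
        yield from self.data
        # Recompute from scratch for the next epoch.
        self.data = self.compute_loader()


base = [([1.0, 2.0], [0, 1])]  # one toy (images, labels) batch
pel = PerEpochLoaderSketch(base, lambda ims, labs: (ims, labs))
for _ in pel:  # epoch 1
    pass
for _ in pel:  # epoch 2
    pass
# pel.epochs == 3: the initial compute plus one recompute per epoch
```

The key design point is that `func` is re-run each epoch, so stochastic transformations (e.g. fresh noise) give different batches every pass, unlike TransformedLoader.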
class robustness.loaders.LambdaLoader(loader, func)

Bases: object

This is a class that allows one to apply any given (fixed) transformation to the output from the loader in real-time.

For instance, you could use it for applications such as custom data augmentation or adding image/label noise.

Note that the LambdaLoader is the final transformation that is applied to image-label pairs from the dataset as part of the loading process—i.e., other (standard) transformations such as data augmentation can only be applied before passing the data through the LambdaLoader.

For more information see our detailed walkthrough

Parameters:
  • loader (PyTorch dataloader) – loader for dataset (required).
  • func (function) – fixed transformation to be applied to every batch in real-time (required). It takes in (images, labels) and returns (images, labels) of the same shape.
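To make the `func` contract concrete, here is a minimal stand-in re-implementation (not the library's code) with a hypothetical label-noise transform; plain lists stand in for tensor batches:

```python
class LambdaLoaderSketch:
    """Stand-in mimicking LambdaLoader: wraps a loader and applies a
    fixed function to every (images, labels) batch at iteration time."""

    def __init__(self, loader, func):
        self.loader = loader
        self.func = func

    def __iter__(self):
        for images, labels in self.loader:
            # Apply the transformation in real-time, per batch.
            yield self.func(images, labels)


def zero_out_labels(images, labels):
    """Hypothetical func: a crude form of label noise that replaces
    every label with class 0, leaving images unchanged."""
    return images, [0 for _ in labels]


batches = [([1.0, 2.0], [3, 4]), ([5.0, 6.0], [7, 8])]
noisy = LambdaLoaderSketch(batches, zero_out_labels)
out_labels = [labels for _images, labels in noisy]
# out_labels == [[0, 0], [0, 0]]
```

Because the wrapper only changes `__iter__`, any transformation applied inside the underlying loader (e.g. standard augmentation) necessarily runs before `func`, matching the note above.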
robustness.loaders.TransformedLoader(loader, func, transforms, workers=None, batch_size=None, do_tqdm=False, augment=False, fraction=1.0)

This is a function that allows one to apply any given (fixed) transformation to the output from the loader once.

For instance, you could use it for applications such as assigning random labels to all the images (before training).

The TransformedLoader also supports the application of additional transformations (such as standard data augmentation) after the fixed function.

For more information see our detailed walkthrough

Parameters:
  • loader (PyTorch dataloader) – loader for dataset
  • func (function) – fixed transformation to be applied once. It takes in (images, labels) and returns (images, labels), where every dimension except for the first (batch) dimension can be any length.
  • transforms (torchvision.transforms) – transforms to apply to the training images from the dataset (after func) (required).
  • workers (int) – number of workers for data fetching (required).
  • batch_size (int) – batch size for the data loaders (required).
  • do_tqdm (bool) – if True, show a tqdm progress bar while the transformation is applied.
  • augment (bool) – if True, the output loader contains both the original (untransformed), and new transformed image-label pairs.
  • fraction (float) – fraction of image-label pairs in the output loader which are transformed. The remainder is just original image-label pairs from loader.
Returns:

A loader and validation loader according to the parameters given. These are standard PyTorch data loaders, and thus can just be used via:

>>> output_loader = TransformedLoader(loader,
                                      assign_random_labels,
                                      transforms=None,
                                      workers=8,
                                      batch_size=128)
>>> for im, lab in output_loader:
>>>     # Do stuff...
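The `assign_random_labels` function in the example above is not defined by the library; a sketch of such a `func` in plain Python (the name, `num_classes`, and `seed` parameters are illustrative assumptions):

```python
import random


def assign_random_labels(images, labels, num_classes=10, seed=0):
    """Hypothetical func for TransformedLoader: keeps the images
    unchanged and replaces each label with a uniformly random class.
    Seeded so the one-time transformation is reproducible."""
    rng = random.Random(seed)
    return images, [rng.randrange(num_classes) for _ in labels]


# Toy batch: strings stand in for image tensors.
ims, labs = assign_random_labels(["img0", "img1"], [3, 7], seed=42)
# ims is returned unchanged; labs holds two labels drawn from [0, 10)
```

Since TransformedLoader applies `func` only once (at construction), random labels assigned this way stay fixed for the rest of training, which is the intended use case here.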