trw.arch

Submodules

Package Contents

Classes

ReLUConvBN2d

Stack of ReLU, Conv2d and BatchNorm2d layers

ReduceChannels2d

Reduce the number of channels of a 2D feature map

Identity

Identity module

Zero2d

Zero operation: outputs zeros, spatially subsampled by the given stride

DilConv2d

Stack of ReLU, dilated Conv2d and BatchNorm2d layers

SepConv2d

Separable convolution implemented via the PyTorch Conv2d groups parameter

Cell

A searchable DARTS cell built from primitive operations

SpecialParameter

Tag a parameter as special, such as a DARTS parameter; these should be handled differently depending on the phase

Functions

default_cell_output(node_outputs, nb_outputs_to_use=4)

create_darts_optimizers_fn(datasets, model, optimizer_fn, darts_weight_dataset_name, scheduler_fn=None)

Create an optimizer and scheduler for DARTS architecture search.

create_darts_adam_optimizers_fn(datasets, model, darts_weight_dataset_name, learning_rate, scheduler_fn=None)

Create an ADAM optimizer and scheduler for DARTS architecture search.

Attributes

DARTS_PRIMITIVES_2D

trw.arch.DARTS_PRIMITIVES_2D
class trw.arch.ReLUConvBN2d(C_in, C_out, kernel_size, stride, padding, affine=True)

Bases: torch.nn.Module

Stack of ReLU, Conv2d and BatchNorm2d layers

forward(self, x)
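
A minimal usage sketch, assuming the arguments follow the usual Conv2d conventions; the channel counts and input shape below are arbitrary placeholders:

import torch
import trw.arch

# Apply the ReLU -> Conv2d -> BatchNorm2d stack to a (N, C, H, W) tensor.
op = trw.arch.ReLUConvBN2d(C_in=16, C_out=32, kernel_size=3, stride=1, padding=1)
x = torch.randn(2, 16, 28, 28)
y = op(x)  # expected shape: (2, 32, 28, 28)
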
class trw.arch.ReduceChannels2d(C_in, C_out, affine=True)

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Variables

training (bool) – Boolean representing whether this module is in training or evaluation mode.

forward(self, x)
class trw.arch.Identity

Bases: torch.nn.Module

Identity module

forward(self, x)
class trw.arch.Zero2d(stride)

Bases: torch.nn.Module

Zero operation: outputs zeros, spatially subsampled by the given stride

forward(self, x)
class trw.arch.DilConv2d(C_in, C_out, kernel_size, stride, padding, dilation, affine=True)

Bases: torch.nn.Module

Stack of ReLU, dilated Conv2d and BatchNorm2d layers

forward(self, x)
class trw.arch.SepConv2d(C_in, C_out, kernel_size, stride, padding, affine=True)

Bases: torch.nn.Module

Separable convolution implemented via the PyTorch Conv2d groups parameter

forward(self, x)
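
The remaining 2D primitives can be exercised the same way. A hedged sketch, assuming Conv2d-style argument semantics, that ReduceChannels2d behaves like a 1x1 projection, and that Zero2d subsamples spatially by its stride:

import torch
import trw.arch

x = torch.randn(2, 16, 32, 32)

# ReLU -> dilated Conv2d -> BatchNorm2d (kernel 3, dilation 2 preserves size with padding 2).
dil = trw.arch.DilConv2d(C_in=16, C_out=16, kernel_size=3, stride=1, padding=2, dilation=2)

# Separable convolution built on the Conv2d groups parameter.
sep = trw.arch.SepConv2d(C_in=16, C_out=16, kernel_size=3, stride=1, padding=1)

# Reduce the number of channels of the feature map (assumed behavior).
red = trw.arch.ReduceChannels2d(C_in=16, C_out=8)

# Zero primitive: outputs zeros, here halving the spatial resolution (assumed).
zero = trw.arch.Zero2d(stride=2)

for op in (dil, sep, red, zero):
    print(op(x).shape)
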
class trw.arch.Cell(primitives, cpp, cp, c, is_reduction, is_reduction_prev, internal_nodes=4, cell_merge_output_fn=default_cell_output, weights=None, with_preprocessing=True, genotype=None)

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Variables

training (bool) – Boolean representing whether this module is in training or evaluation mode.

_create_weights(self, primitives, weights)

Create the weights. Do not store them directly in the model parameters, or they will be optimized as well!

forward(self, parents)
get_weights(self)
Returns

The primitive weights of this cell. This is useful when sharing the weights among multiple cells.

get_genotype(self)
Returns

The genotype of the cell given the current primitive weighting
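
A hypothetical construction sketch, assuming cpp, cp and c are the channel counts of the two parent cells and of the current cell (as in the DARTS paper), and that forward expects the two parent feature maps:

import torch
import trw.arch

# Build a single searchable (non-reduction) cell from the 2D primitives.
cell = trw.arch.Cell(
    primitives=trw.arch.DARTS_PRIMITIVES_2D,
    cpp=16,   # channels of the previous-previous cell (assumed meaning)
    cp=16,    # channels of the previous cell (assumed meaning)
    c=16,     # channels of the current cell (assumed meaning)
    is_reduction=False,
    is_reduction_prev=False,
)

parent_0 = torch.randn(2, 16, 32, 32)
parent_1 = torch.randn(2, 16, 32, 32)
output = cell([parent_0, parent_1])

weights = cell.get_weights()      # primitive weights, shareable across cells
genotype = cell.get_genotype()    # discrete architecture implied by the weights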

trw.arch.default_cell_output(node_outputs, nb_outputs_to_use=4)
class trw.arch.SpecialParameter

Bases: torch.nn.Parameter

Tag a parameter as special, such as a DARTS parameter. These should be handled differently depending on the phase: training the DARTS cell parameters or training the weight parameters.
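
A minimal sketch of tagging such a parameter, assuming SpecialParameter is constructed like a regular torch.nn.Parameter; the shape is an arbitrary placeholder:

import torch
import trw.arch

# Architecture-mixing weights tagged as "special" so that the DARTS optimizer
# factories below can separate them from the regular model weights.
alpha = trw.arch.SpecialParameter(torch.zeros(14, 8))
assert isinstance(alpha, torch.nn.Parameter)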

trw.arch.create_darts_optimizers_fn(datasets, model, optimizer_fn, darts_weight_dataset_name, scheduler_fn=None)

Create an optimizer and scheduler for DARTS architecture search.

In particular, parameters that are derived from trw.arch.SpecialParameter will be handled differently:

  • for each dataset that is not equal to darts_weight_dataset_name, optimize all the parameters not derived from trw.arch.SpecialParameter

  • on the dataset darts_weight_dataset_name, ONLY the parameters derived from trw.arch.SpecialParameter will be optimized

Note

If model is an instance of ModuleDict, the optimizer will only consider the parameters model[dataset_name].parameters(); otherwise it will use model.parameters().

Parameters
  • datasets – a dictionary of datasets

  • model – the model. Should be a Module or a ModuleDict

  • optimizer_fn – the functor to instantiate the optimizer

  • scheduler_fn – the functor to instantiate the scheduler. May be None, in which case no scheduler is used

  • darts_weight_dataset_name – this specifies the dataset to be used to train the DARTS cell weights. Only the parameters of the model derived from trw.arch.SpecialParameter will be optimized on the dataset darts_weight_dataset_name

Returns

a dict of optimizers, one per dataset
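
A hedged usage sketch; the dataset name, learning rate and SGD factory below are placeholders, and it assumes optimizer_fn receives the parameters to optimize and that the surrounding trainer calls the functor with (datasets, model):

import functools
import torch
import trw.arch

# 'mnist_darts' would train only the SpecialParameter (architecture) weights;
# every other dataset would train the remaining model weights.
optimizers_fn = functools.partial(
    trw.arch.create_darts_optimizers_fn,
    optimizer_fn=lambda params: torch.optim.SGD(params, lr=0.025, momentum=0.9),
    darts_weight_dataset_name='mnist_darts',
)
# optimizers_fn(datasets, model) then returns one optimizer per dataset.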

trw.arch.create_darts_adam_optimizers_fn(datasets, model, darts_weight_dataset_name, learning_rate, scheduler_fn=None)

Create an ADAM optimizer and scheduler for DARTS architecture search.

Parameters
  • datasets – a dictionary of datasets

  • model – a model to optimize

  • learning_rate – the initial learning rate

  • scheduler_fn – a scheduler, or None

  • darts_weight_dataset_name – this specifies the dataset to be used to train the DARTS cell weights. Only the parameters of the model derived from trw.arch.SpecialParameter will be optimized on the dataset darts_weight_dataset_name

Returns

An optimizer
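
A similar hedged sketch for the ADAM variant; the dataset name and learning rate are placeholders:

import functools
import trw.arch

# Pre-bind everything except (datasets, model) so the result can be handed
# to a trainer as an optimizers_fn-style callable.
adam_optimizers_fn = functools.partial(
    trw.arch.create_darts_adam_optimizers_fn,
    darts_weight_dataset_name='mnist_darts',
    learning_rate=0.001,
)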