trw.layers

Package Contents

Classes

Flatten

Flatten a tensor

ShiftScale

Normalize a tensor with a mean and standard deviation

UNet_2d

2D U-Net implementation for image segmentation

Functions

div_shape(shape, div=2)

Divide the shape by a constant

flatten

Flatten a tensor

denses(sizes, dropout_probability=None, with_batchnorm=False, batchnorm_momentum=0.1, activation=nn.ReLU, last_layer_is_output=False, with_flatten=True)

Construct a stack of fully connected (dense) layers

convs_2d(channels, convolution_kernels=(5, 5), strides=(1, 1), pooling_size=(2, 2), convolution_repeats=None, activation=nn.ReLU, with_flatten=False, dropout_probability=None, with_batchnorm=False, with_lrn=False, lrn_size=2, batchnorm_momentum=0.1, padding='same')

Construct a stack of 2D convolutional layers

convs_3d

Construct a stack of 3D convolutional layers

trw.layers.div_shape(shape, div=2)

Divide the shape by a constant

Parameters
  • shape – the shape to divide (a sequence of dimension sizes)

  • div – a divisor

Returns

a list with each dimension of shape divided by div
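
A minimal sketch of the expected behavior, assuming each dimension is integer-divided by div (the handling of non-divisible sizes is an assumption, not confirmed by the source):

def div_shape(shape, div=2):
    # divide every dimension by `div` (integer division is assumed here)
    return [s // div for s in shape]

print(div_shape([16, 32, 32]))          # -> [8, 16, 16]
print(div_shape([16, 32, 32], div=4))   # -> [4, 8, 8]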

class trw.layers.Flatten

Bases: torch.nn.Module

Flatten a tensor

For example, a tensor of shape [N, Z, Y, X] will be reshaped to [N, Z * Y * X]

forward(self, x)
Parameters

x – a tensor

Returns: the flattened tensor
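
For illustration, a hypothetical usage inside a small classifier (the layer sizes are made up):

import torch
import torch.nn as nn
import trw.layers

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),    # [N, 1, 28, 28] -> [N, 8, 26, 26]
    trw.layers.Flatten(),              # [N, 8, 26, 26] -> [N, 8 * 26 * 26]
    nn.Linear(8 * 26 * 26, 10),
)
y = model(torch.randn(4, 1, 28, 28))   # y has shape [4, 10]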

trw.layers.flatten(x)

Flatten a tensor

For example, a tensor of shape [N, Z, Y, X] will be reshaped to [N, Z * Y * X]

Parameters

x – a tensor

Returns: the flattened tensor
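
A plausible implementation consistent with the documented behavior (a sketch, not necessarily the library's exact code):

import torch

def flatten(x):
    # keep the batch dimension N, collapse all remaining dimensions
    return x.view(x.shape[0], -1)

x = torch.randn(4, 3, 2, 2)
print(flatten(x).shape)  # torch.Size([4, 12])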

trw.layers.denses(sizes, dropout_probability=None, with_batchnorm=False, batchnorm_momentum=0.1, activation=nn.ReLU, last_layer_is_output=False, with_flatten=True)
Parameters
  • sizes

  • dropout_probability

  • with_batchnorm

  • batchnorm_momentum

  • activation

  • last_layer_is_output – must be set to True if the last dense layer is the network's output; in that case, no batch norm, dropout or activation is added after the final nn.Linear

  • with_flatten – if True, the input will be flattened

Returns:
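
A hypothetical usage, assuming sizes lists the input size followed by the size of each dense layer:

import torch
import trw.layers

# a 784 -> 256 -> 10 multi-layer perceptron; the final nn.Linear is the
# output, so no activation, batch norm or dropout is appended after it
mlp = trw.layers.denses(
    sizes=[784, 256, 10],
    dropout_probability=0.5,
    last_layer_is_output=True,
)
y = mlp(torch.randn(32, 784))  # expected shape: [32, 10]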

trw.layers.convs_2d(channels, convolution_kernels=(5, 5), strides=(1, 1), pooling_size=(2, 2), convolution_repeats=None, activation=nn.ReLU, with_flatten=False, dropout_probability=None, with_batchnorm=False, with_lrn=False, lrn_size=2, batchnorm_momentum=0.1, padding='same')
Parameters
  • channels

  • convolution_kernels

  • strides

  • pooling_size

  • convolution_repeats

  • activation

  • with_flatten

  • dropout_probability

  • with_batchnorm

  • batchnorm_momentum

  • with_lrn

  • lrn_size

  • padding (str) – if 'same', the convolutions are zero-padded so that the output spatial shape matches the input shape

Returns:
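
A hypothetical usage, assuming channels is the number of input channels followed by the output channels of each convolutional block:

import torch
import trw.layers

cnn = trw.layers.convs_2d(channels=[1, 16, 32])  # two 5x5 convolutional blocks
x = torch.randn(10, 1, 28, 28)
y = cnn(x)  # with 'same' padding and 2x2 pooling, expected shape: [10, 32, 7, 7]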

trw.layers.convs_3d(channels, convolution_kernels=(5, 5, 5), strides=(1, 1, 1), pooling_size=(2, 2, 2), convolution_repeats=None, activation=nn.ReLU, with_flatten=False, dropout_probability=None, with_batchnorm=False, with_lrn=False, lrn_size=2, batchnorm_momentum=0.1, padding='same')
Parameters
  • channels

  • convolution_kernels

  • strides

  • pooling_size

  • convolution_repeats

  • activation

  • with_flatten

  • dropout_probability

  • with_batchnorm

  • with_lrn

  • lrn_size

  • batchnorm_momentum

  • padding (str) – if 'same', the convolutions are zero-padded so that the output spatial shape matches the input shape

Returns:
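
The 3D variant follows the same conventions; a hypothetical usage on volumes:

import torch
import trw.layers

cnn_3d = trw.layers.convs_3d(channels=[1, 8, 16])
volume = torch.randn(4, 1, 32, 32, 32)
features = cnn_3d(volume)  # expected shape: [4, 16, 8, 8, 8] with 'same' padding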

class trw.layers.ShiftScale(mean, standard_deviation)

Bases: torch.nn.Module

Normalize a tensor with a mean and standard deviation

The output tensor will be (x - mean) / standard_deviation

This layer simplifies preprocessing for the trw.simple_layers package

forward(self, x)
Parameters

x – a tensor

Returns: the normalized tensor
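
A plausible implementation consistent with the documented (x - mean) / standard_deviation behavior (a sketch, not necessarily the library's exact code):

import torch
import torch.nn as nn

class ShiftScale(nn.Module):
    def __init__(self, mean, standard_deviation):
        super().__init__()
        self.mean = mean
        self.standard_deviation = standard_deviation

    def forward(self, x):
        # subtract the mean, then divide by the standard deviation
        return (x - self.mean) / self.standard_deviation

layer = ShiftScale(mean=0.5, standard_deviation=0.25)
print(layer(torch.tensor([0.5, 1.0])))  # tensor([0., 2.])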

class trw.layers.UNet_2d(in_channels=1, n_classes=2, depth=5, wf=6, padding=True, batch_norm=False, up_mode='upconv')

Bases: torch.nn.Module

Implementation of the U-Net architecture for 2D image segmentation (Ronneberger et al., 2015): a convolutional encoder/decoder with skip connections.

Parameters
  • in_channels (int) – number of input channels

  • n_classes (int) – number of output channels

  • depth (int) – depth of the network

  • wf (int) – the number of filters in the first layer is 2 ** wf

  • padding (bool) – if True, apply padding such that the output spatial shape is the same as the input. This may introduce artifacts

  • batch_norm (bool) – use BatchNorm after layers with an activation function

  • up_mode (str) – one of 'upconv' or 'upsample'. 'upconv' uses transposed convolutions for learned up-sampling, 'upsample' uses bilinear up-sampling

forward(self, x)
Parameters

x – a tensor of shape [N, in_channels, height, width]

Returns: the output tensor with n_classes channels
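
A hypothetical usage for binary segmentation (the output shape assumes padding=True preserves the spatial dimensions):

import torch
import trw.layers

unet = trw.layers.UNet_2d(in_channels=1, n_classes=2, depth=3, padding=True)
x = torch.randn(2, 1, 64, 64)
logits = unet(x)  # expected shape: [2, 2, 64, 64]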