trw.layers

Package Contents

Classes

OpsConversion

Helper to create standard N-d operations

LayerConfig

Generic configuration of the layers

NormType

Representation of the normalization layer

BlockConvNormActivation

Base class for all neural network modules.

BlockDeconvNormActivation

Base class for all neural network modules.

BlockUpDeconvSkipConv

Base class for all neural network modules.

BlockPool

Base class for all neural network modules.

BlockRes

Original Residual block design

BlockConv

Base class for all neural network modules.

BlockSqueezeExcite

Squeeze-and-excitation block

ConvBlockType

Base class for protocol classes. Protocol classes are defined as:

BlockMerge

Merge multiple layers (e.g., concatenate, sum...)

Flatten

Flatten a tensor

ConvsBase

Base class for all neural network modules.

ModuleWithIntermediate

Represent a module with intermediate results

ShiftScale

Normalize a tensor with a mean and standard deviation

SubTensor

Select a region of a tensor (without copy), excluding the first component (N)

ConvsTransposeBase

Helper class to create a sequence of transposed convolutions

UNetBase

Configurable UNet-like architecture

FullyConvolutional

Construct a Fully Convolutional Neural network from a base model. This provides pixel-level interpolation

AutoencoderConvolutional

Convolutional autoencoder

AutoencoderConvolutionalVariational

Variational convolutional autoencoder implementation

AutoencoderConvolutionalVariationalConditional

Conditional Variational convolutional auto-encoder implementation

Gan

Generic GAN implementation. Supports conditional GANs.

GanDataPool

EncoderDecoderResnet

Base class for all neural network modules.

DeepSupervision

Apply a deep supervision layer to help gradients flow to the top-level layers.

BackboneDecoder

U-net-like model with a backbone used as the encoder.

EfficientNet

Generic EfficientNet that takes in the width and depth scale factors and scales accordingly.

MBConvN

MBConv with an expansion factor of N, plus squeeze-and-excitation

PreActResNet

Pre-activation Resnet model

BlockNonLocal

Non-local block implementation of [1]

Functions

default_layer_config(dimensionality: Optional[int] = None, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'}) → LayerConfig

Default layer configuration

div_shape(shape: Union[Sequence[int], int], div: int = 2) → Union[Sequence[int], int]

Divide the shape by a constant

denses(sizes: Sequence[int], dropout_probability: float = None, activation: Any = nn.ReLU, normalization_type: Optional[trw.layers.layer_config.NormType] = NormType.BatchNorm, last_layer_is_output: bool = False, with_flatten: bool = True, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None)) → torch.nn.Module

param sizes

the size of the linear layers. The format is [linear1_input, linear1_output, ..., linearN_output]

convs_2d(input_channels: int, channels: Sequence[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: trw.basic_typing.Activation = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict[str, Any] = {}, pool_kwargs: Dict[str, Any] = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

param input_channels

the number of input channels

convs_3d(input_channels: int, channels: List[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: trw.basic_typing.PoolingSizes = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: trw.basic_typing.Activation = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict[str, Any] = {}, pool_kwargs: Dict[str, Any] = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

param input_channels

the number of input channels

crop_or_pad_fun(x: torch.Tensor, shape: Sequence[int], padding_default_value=0) → torch.Tensor

Crop or pad a tensor to the specified shape (N and C excluded)

linear_embedding(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int) → torch.nn.Module

Attributes

PreActResNet18

PreActResNet34

UNetAttention

class trw.layers.OpsConversion(upsample_mode: typing_extensions.Literal['nearest', 'linear'] = 'linear')

Helper to create standard N-d operations

set_dim(self, dim: int)
class trw.layers.LayerConfig(ops: trw.layers.ops_conversion.OpsConversion, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'})

Generic configuration of the layers

set_dim(self, dimensionality: int)
trw.layers.default_layer_config(dimensionality: Optional[int] = None, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'}) → LayerConfig

Default layer configuration

Parameters
  • dimensionality – the number of dimensions of the input (without the N and C components)

  • norm_type – the type of normalization

  • norm_kwargs – additional normalization parameters

  • activation – the activation

  • activation_kwargs – additional activation parameters

  • dropout_kwargs – additional dropout parameters

  • conv_kwargs – additional parameters for the convolutional layer

  • deconv_kwargs – additional arguments for the transposed convolutional layer

  • pool_type – the type of pooling

  • pool_kwargs – additional parameters for the pooling layers

  • dropout_type – the type of dropout
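
A minimal usage sketch: build a 2D configuration that swaps the default batch normalization for instance normalization (any keyword not passed keeps the defaults listed above).

import torch.nn as nn
import trw

# 2D layers, instance normalization, LeakyReLU activation
config = trw.layers.default_layer_config(
    dimensionality=2,
    norm_type=trw.layers.NormType.InstanceNorm,
    activation=nn.LeakyReLU,
)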

class trw.layers.NormType

Bases: enum.Enum

Representation of the normalization layer

BatchNorm = BatchNorm
InstanceNorm = InstanceNorm
GroupNorm = GroupNorm
SyncBatchNorm = SyncBatchNorm
LocalResponseNorm = LocalResponseNorm
class trw.layers.BlockConvNormActivation(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None, groups: int = 1, bias: Optional[bool] = None)

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Variables

training (bool) – Boolean represents whether this module is in training or evaluation mode.

forward(self, x: torch.Tensor) → torch.Tensor
class trw.layers.BlockDeconvNormActivation(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None)

Bases: torch.nn.Module

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward(self, x: torch.Tensor) → torch.Tensor
class trw.layers.BlockUpDeconvSkipConv(config: trw.layers.layer_config.LayerConfig, skip_channels: int, input_channels: int, output_channels: int, *, nb_repeats: int = 1, kernel_size: Optional[trw.basic_typing.KernelSize] = None, deconv_kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, deconv_block=BlockDeconvNormActivation, stride: Optional[trw.basic_typing.Stride] = None, merge_layer_fn=BlockMerge)

Bases: torch.nn.Module

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward(self, skip: torch.Tensor, previous: torch.Tensor) → torch.Tensor
class trw.layers.BlockPool(config: trw.layers.layer_config.LayerConfig, kernel_size: Optional[trw.basic_typing.KernelSize] = 2)

Bases: torch.nn.Module

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward(self, x: torch.Tensor) → torch.Tensor
class trw.layers.BlockRes(config: trw.layers.layer_config.LayerConfig, input_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, padding_mode: Optional[str] = None, base_block: ConvBlockType = BlockConvNormActivation)

Bases: torch.nn.Module

Original Residual block design

References

[1] “Deep Residual Learning for Image Recognition”, https://arxiv.org/abs/1512.03385

forward(self, x: trw.basic_typing.TorchTensorNCX) → trw.basic_typing.TorchTensorNCX
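
A minimal usage sketch (the residual connection requires the block to preserve the input shape):

import torch
import trw

config = trw.layers.default_layer_config(dimensionality=2)
res = trw.layers.BlockRes(config, input_channels=16)
y = res(torch.zeros([1, 16, 8, 8]))  # same shape as the input
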
class trw.layers.BlockConv(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None, groups: int = 1, bias: Optional[bool] = None)

Bases: torch.nn.Module

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward(self, x: torch.Tensor) → torch.Tensor
class trw.layers.BlockSqueezeExcite(config: trw.layers.layer_config.LayerConfig, input_channels: int, r: int = 24)

Bases: torch.nn.Module

Squeeze-and-excitation block

References

[1] “Squeeze-and-Excitation Networks”, https://arxiv.org/pdf/1709.01507.pdf

forward(self, x)
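
A minimal usage sketch on a 2D feature map; the reduction ratio r=8 is an arbitrary choice for illustration:

import torch
import trw

config = trw.layers.default_layer_config(dimensionality=2)
se = trw.layers.BlockSqueezeExcite(config, input_channels=32, r=8)
y = se(torch.zeros([2, 32, 16, 16]))  # channel-wise re-weighted features, same shape
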
class trw.layers.ConvBlockType

Bases: typing_extensions.Protocol

Base class for protocol classes. Protocol classes are defined as:

class Proto(Protocol):
    def meth(self) -> int:
        ...

Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example:

class C:
    def meth(self) -> int:
        return 0

def func(x: Proto) -> int:
    return x.meth()

func(C())  # Passes static type check

See PEP 544 for details. Protocol classes decorated with @typing_extensions.runtime act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures.

Protocol classes can be generic, they are defined as:

class GenProto(Protocol[T]):
    def meth(self) -> T:
        ...
__call__(self, config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None) → torch.nn.Module
class trw.layers.BlockMerge(config: trw.layers.layer_config.LayerConfig, layer_channels: Sequence[int], mode: typing_extensions.Literal['concatenation', 'sum'] = 'concatenation')

Bases: torch.nn.Module

Merge multiple layers (e.g., concatenate, sum…)

get_output_channels(self)
forward(self, layers: Sequence[torch.Tensor])
trw.layers.div_shape(shape: Union[Sequence[int], int], div: int = 2) → Union[Sequence[int], int]

Divide the shape by a constant

Parameters
  • shape – the shape

  • div – a divisor

Returns

a list
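
An illustrative doctest-style sketch, assuming each component is divided with integer division:

>>> import trw
>>> trw.layers.div_shape([16, 32, 32], div=2)
[8, 16, 16]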

class trw.layers.Flatten

Bases: torch.nn.Module

Flatten a tensor

For example, a tensor of shape [N, Z, Y, X] will be reshaped to [N, Z * Y * X]

forward(self, x: torch.Tensor) → torch.Tensor
Parameters

x – a tensor

Returns: the flattened tensor
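
A minimal usage sketch:

import torch
import trw

flatten = trw.layers.Flatten()
x = torch.zeros([5, 3, 4, 4])
y = flatten(x)  # shape [5, 3 * 4 * 4] = [5, 48]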

trw.layers.denses(sizes: Sequence[int], dropout_probability: float = None, activation: Any = nn.ReLU, normalization_type: Optional[trw.layers.layer_config.NormType] = NormType.BatchNorm, last_layer_is_output: bool = False, with_flatten: bool = True, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None)) → torch.nn.Module
Parameters
  • sizes – the size of the linear layers. The format is [linear1_input, linear1_output, …, linearN_output]

  • dropout_probability – the probability of the dropout layer. If None, no dropout layer is added.

  • activation – the activation to be used

  • normalization_type – the normalization to be used between dense layers. If None, no normalization is added

  • last_layer_is_output – set to True if the last dense layer is the network output; in that case no batch norm, dropout, or activation is added after the final nn.Linear

  • with_flatten – if True, the input will be flattened

  • config – defines the available operations

Returns

a nn.Module
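
A minimal usage sketch: a small classifier head mapping 784 inputs to 10 outputs, with the final layer treated as the network output (so no norm, dropout, or activation is appended to it).

import trw

mlp = trw.layers.denses([784, 128, 10], dropout_probability=0.5, last_layer_is_output=True)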

class trw.layers.ConvsBase(dimensionality: int, input_channels: int, *, channels: Sequence[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2, convolution_repeats: Union[int, Sequence[int], trw.basic_typing.IntListList] = 1, activation: Optional[trw.basic_typing.Activation] = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict = {}, pool_kwargs: Dict = {}, activation_kwargs: Dict = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, ModuleWithIntermediate

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward_simple(self, x: torch.Tensor) → torch.Tensor
forward_with_intermediate(self, x: torch.Tensor, **kwargs) → List[torch.Tensor]
forward(self, x)
class trw.layers.ModuleWithIntermediate

Represent a module with intermediate results

abstract forward_with_intermediate(self, x: torch.Tensor, **kwargs) → Sequence[torch.Tensor]
trw.layers.convs_2d(input_channels: int, channels: Sequence[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: trw.basic_typing.Activation = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict[str, Any] = {}, pool_kwargs: Dict[str, Any] = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))
Parameters
  • input_channels – the number of input channels

  • channels – the number of channels

  • convolution_kernels – for each convolution group, the kernel of the convolution

  • strides – for each convolution group, the stride of the convolution

  • pooling_size – the pooling size to be inserted after each convolution group

  • convolution_repeats – the number of repeats of a convolution

  • activation – the activation function

  • with_flatten – if True, the last output will be flattened

  • dropout_probability – if None, no dropout. Otherwise, the probability of dropout after each convolution

  • padding – ‘same’ will add padding so that the convolution output has the same size as the input

  • last_layer_is_output – if True, the last convolution will NOT have activation, dropout, batch norm, LRN

  • norm_type – the normalization layer (e.g., BatchNorm)

  • norm_kwargs – additional arguments for normalization

  • pool_kwargs – additional arguments for pooling

  • conv_block_fn – the convolutional block used as the base block

  • config – defines the allowed operations
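
A minimal usage sketch: three 2D convolution groups followed by flattening, usable as a small classifier backbone.

import torch
import trw

model = trw.layers.convs_2d(input_channels=1, channels=[16, 32, 64], with_flatten=True)
features = model(torch.zeros([2, 1, 32, 32]))  # flattened per-sample features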

trw.layers.convs_3d(input_channels: int, channels: List[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: trw.basic_typing.PoolingSizes = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: trw.basic_typing.Activation = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict[str, Any] = {}, pool_kwargs: Dict[str, Any] = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))
Parameters
  • input_channels – the number of input channels

  • channels – the number of channels

  • convolution_kernels – for each convolution group, the kernel of the convolution

  • strides – for each convolution group, the stride of the convolution

  • pooling_size – the pooling size to be inserted after each convolution group

  • convolution_repeats – the number of repeats of a convolution

  • activation – the activation function

  • with_flatten – if True, the last output will be flattened

  • dropout_probability – if None, no dropout. Otherwise, the probability of dropout after each convolution

  • padding – ‘same’ will add padding so that the convolution output has the same size as the input

  • last_layer_is_output – if True, the last convolution will NOT have activation, dropout, batch norm, LRN

  • norm_type – the normalization layer (e.g., BatchNorm)

  • norm_kwargs – additional arguments for normalization

  • pool_kwargs – additional arguments for pooling

  • conv_block_fn – the convolutional block used as the base block

  • config – defines the allowed operations
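
The 3D variant follows the same pattern; a minimal sketch:

import torch
import trw

model = trw.layers.convs_3d(input_channels=1, channels=[8, 16])
features = model(torch.zeros([2, 1, 32, 32, 32]))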

class trw.layers.ShiftScale(mean: Union[float, torch.Tensor], standard_deviation: Union[float, torch.Tensor], output_dtype: torch.dtype = torch.float32)

Bases: torch.nn.Module

Normalize a tensor with a mean and standard deviation

The output tensor will be (x - mean) / standard_deviation

This layer simplifies the preprocessing for the trw.simple_layers package

forward(self, x: torch.Tensor) → torch.Tensor
Parameters

x – a tensor

Returns: the normalized tensor
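
A minimal usage sketch: normalize inputs with dataset statistics before the first layer.

import torch
import trw

shift_scale = trw.layers.ShiftScale(mean=0.5, standard_deviation=0.25)
x = torch.rand([4, 1, 28, 28])
x_normalized = shift_scale(x)  # (x - 0.5) / 0.25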

trw.layers.crop_or_pad_fun(x: torch.Tensor, shape: Sequence[int], padding_default_value=0) → torch.Tensor

Crop or pad a tensor to the specified shape (N and C excluded)

Parameters
  • x – the tensor to crop or pad

  • shape – the shape of x to be returned. N and C channels must not be specified

  • padding_default_value – the padding value to be used

Returns

torch.Tensor
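
A minimal usage sketch: center-crop a [N, C, 32, 32] tensor to spatial shape [24, 24]; a larger target shape would instead pad with padding_default_value.

import torch
import trw

x = torch.zeros([2, 3, 32, 32])
y = trw.layers.crop_or_pad_fun(x, [24, 24])  # shape [2, 3, 24, 24]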

class trw.layers.SubTensor(min_indices: Sequence[int], max_indices_exclusive: Sequence[int])

Bases: torch.nn.Module

Select a region of a tensor (without copy), excluding the first component (N)

forward(self, x: torch.Tensor) → torch.Tensor
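
An illustrative sketch, assuming min_indices and max_indices_exclusive index the C and spatial axes (everything except N):

import torch
import trw

# keep channels 0..2 and the top-left 16x16 spatial region
sub = trw.layers.SubTensor([0, 0, 0], [3, 16, 16])
x = torch.zeros([2, 4, 32, 32])
y = sub(x)  # shape [2, 3, 16, 16]
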
class trw.layers.ConvsTransposeBase(dimensionality: int, input_channels: int, channels: Sequence[int], *, convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 2, paddings: Optional[trw.basic_typing.Paddings] = None, activation: Any = nn.ReLU, activation_kwargs: Dict = {}, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.convs.NormType] = None, norm_kwargs: Dict = {}, last_layer_is_output: bool = False, squash_function: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, deconv_block_fn: trw.layers.blocks.ConvTransposeBlockType = BlockDeconvNormActivation, config: trw.layers.convs.LayerConfig = default_layer_config(dimensionality=None), target_shape: Optional[Sequence[int]] = None)

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Helper class to create a sequence of transposed convolutions

This can be used to map an embedding back to image space.

forward_with_intermediate(self, x)
forward_simple(self, x)
forward(self, x)
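
A minimal usage sketch, assuming the default stride of 2 doubles the spatial size at each level: map a 64-channel 4x4 embedding back to a 1-channel 32x32 image.

import torch
import trw

decoder = trw.layers.ConvsTransposeBase(dimensionality=2, input_channels=64, channels=[32, 16, 1])
image = decoder(torch.zeros([2, 64, 4, 4]))  # shape [2, 1, 32, 32]
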
class trw.layers.UNetBase(dim: int, input_channels: int, channels: Sequence[int], output_channels: int, down_block_fn: DownType = Down, up_block_fn: UpType = UpResize, init_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, middle_block_fn: MiddleType = partial(LatentConv, block=partial(BlockConvNormActivation, kernel_size=5)), output_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, init_block_channels: Optional[int] = None, latent_channels: Optional[int] = None, kernel_size: Optional[int] = 3, strides: Union[int, Sequence[int]] = 2, activation: Optional[Any] = nn.PReLU, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None), add_last_downsampling_to_intermediates: bool = False)

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Configurable UNet-like architecture

_build(self, config, init_block_fn, down_block_fn, up_block_fn, middle_block_fn, output_block_fn, strides)
forward_with_intermediate(self, x: torch.Tensor, latent: Optional[torch.Tensor] = None, **kwargs) → Sequence[torch.Tensor]
forward(self, x: torch.Tensor, latent: Optional[torch.Tensor] = None) → torch.Tensor
Parameters
  • x – the input image

  • latent – a latent variable appended by the middle block
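
A minimal usage sketch: a small 2D UNet with three resolution levels (the spatial size should be divisible by 2 ** len(channels)).

import torch
import trw

unet = trw.layers.UNetBase(dim=2, input_channels=3, channels=[16, 32, 64], output_channels=2)
o = unet(torch.zeros([1, 3, 64, 64]))  # shape [1, 2, 64, 64]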

class trw.layers.FullyConvolutional(dimensionality: int, input_channels: int, base_model: trw.layers.convs.ModuleWithIntermediate, deconv_filters: Sequence[int], convolution_kernels: Union[int, Sequence[int]], strides: Union[int, Sequence[int]], activation=nn.ReLU, nb_classes: Optional[int] = None, concat_mode: str = 'add', conv_filters: Optional[Sequence[int]] = None, norm_type: trw.layers.layer_config.NormType = NormType.BatchNorm, norm_kwargs: Dict = {}, activation_kwargs: Dict = {}, deconv_block_fn: trw.layers.blocks.ConvTransposeBlockType = BlockDeconvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module

Construct a Fully Convolutional Neural network from a base model. This provides pixel-level interpolation

Example of a 2D network taking 1 input channel with 3 convolutions (16, 32, 64) and 3 deconvolutions (32, 16, 8):

>>> import torch
>>> import trw
>>> convs = trw.layers.ConvsBase(dimensionality=2, input_channels=1, channels=[16, 32, 64])
>>> fcnn = trw.layers.FullyConvolutional(dimensionality=2, base_model=convs, deconv_filters=[64, 32, 16, 8], convolution_kernels=7, strides=[2] * 3, nb_classes=2)
>>> i = torch.zeros([5, 1, 32, 32], dtype=torch.float32)
>>> o = fcnn(i)

The following intermediate data will be created (concat_mode='add'):

input = [None, 1, 32, 32]
conv_1 = [None, 16, 16, 16]
conv_2 = [None, 32, 8, 8]
conv_3 = [None, 64, 4, 4]
deconv_1 = [None, 32, 8, 8]
deconv_2 = [None, 16, 16, 16]
deconv_3 = [None, 8, 32, 32]
classifier = [None, 2, 32, 32]

forward(self, x: torch.Tensor) → torch.Tensor
forward_with_intermediate(self, x: torch.Tensor) → Tuple[torch.Tensor, Sequence[torch.Tensor]]
class trw.layers.AutoencoderConvolutional(dimensionality: int, input_channels: int, encoder_channels: Sequence[int], decoder_channels: Sequence[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, encoder_strides: Union[trw.basic_typing.ConvStrides] = 1, decoder_strides: Union[trw.basic_typing.ConvStrides] = 2, pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: Optional[trw.basic_typing.Activation] = nn.ReLU, dropout_probability: Optional[float] = None, norm_type: trw.layers.layer_config.NormType = NormType.BatchNorm, norm_kwargs: Dict = {}, activation_kwargs: Dict = {}, last_layer_is_output: bool = False, force_decoded_size_same_as_input: bool = True, squash_function: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Convolutional autoencoder

Examples

Create an encoder taking 1 channel with [4, 8, 16] filters and a decoder taking as input 16 channels of 4x4 with [8, 4, 1] filters:

>>> model = AutoencoderConvolutional(2, 1, [4, 8, 16], [8, 4, 1])

forward_simple(self, x: torch.Tensor) → torch.Tensor
forward_with_intermediate(self, x: torch.Tensor, **kwargs) → Tuple[torch.Tensor, torch.Tensor]
forward(self, x: torch.Tensor) → torch.Tensor
class trw.layers.AutoencoderConvolutionalVariational(input_shape: Union[torch.Size, List[int], Tuple[int, Ellipsis]], encoder: torch.nn.Module, decoder: torch.nn.Module, z_size: int, input_type: torch.dtype = torch.float32)

Bases: torch.nn.Module

Variational convolutional autoencoder implementation

A good reference:

https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/

encode(self, x)
forward(self, x)
static reparameterize(training, z_mu, z_logvar)

Use the reparameterization trick: we need to generate a random normal sample without interrupting gradient propagation.

We only sample during training.
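
An illustrative sketch of the trick (not necessarily the library's exact code): sample z = mu + eps * sigma, so gradients flow through z_mu and z_logvar while eps carries the randomness.

import torch

def reparameterize_sketch(training: bool, z_mu: torch.Tensor, z_logvar: torch.Tensor) -> torch.Tensor:
    if not training:
        # at evaluation time, use the distribution mean
        return z_mu
    std = torch.exp(0.5 * z_logvar)
    eps = torch.randn_like(std)
    return z_mu + eps * std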

static loss_function(recon_x, x, mu, logvar, recon_loss_name='BCE', kullback_leibler_weight=0.2)

Loss function generally used for a variational auto-encoder

compute:

reconstruction_loss + kullback_leibler_weight * KL(N(mu, exp(logvar)) || N(0, 1))

Parameters
  • recon_x – the reconstructed x

  • x – the input value

  • mu – the mu encoding of x

  • logvar – the logvar encoding of x

  • recon_loss_name – the name of the reconstruction loss. Must be one of BCE (binary cross-entropy) or MSE (mean squared error) or L1

  • kullback_leibler_weight – the weight factor applied on the Kullback–Leibler divergence. This is to balance the importance of the reconstruction loss and the Kullback–Leibler divergence

Returns

a 1D tensor, representing a loss value for each x

sample(self, nb_samples)

Randomly sample from the latent space to generate random samples

Parameters

nb_samples – the number of samples to generate

Notes

the image may need to be cropped or padded to match the learned image shape

class trw.layers.AutoencoderConvolutionalVariationalConditional(input_shape: Union[torch.Size, List[int], Tuple[int, Ellipsis]], encoder: torch.nn.Module, decoder: torch.nn.Module, z_size: int, y_size: int, input_type=torch.float32)

Bases: torch.nn.Module

Conditional Variational convolutional auto-encoder implementation

Most of the implementation is shared with the regular variational convolutional auto-encoder.

The main difference is that the auto-encoder is conditioned on a variable y: the model learns a latent representation given y. In this implementation, the encoder does not use y; only the decoder is aware of it. This is done by concatenating y to the latent variable computed by the encoder.

encode(self, x)
decode(self, mu, logvar, y, sample_parameters=None)
sample_given_y(self, y)
forward(self, x, y)
class trw.layers.Gan(discriminator, generator, latent_size, optimizer_discriminator_fn, optimizer_generator_fn, real_image_from_batch_fn, train_split_name='train', loss_from_outputs_fn=process_outputs_and_extract_loss, image_pool=None)

Bases: torch.nn.Module

Generic GAN implementation. Supports conditional GANs.

Examples

  • generator conditioned by concatenating a one-hot attribute to the latent, or conditioned by another image (e.g., using UNet)

  • discriminator conditioned by concatenating a one-hot image sized to the image, or a one-hot concatenated to an intermediate layer

  • simple GAN (i.e., no observation)

Notes

Here the module will have its own optimizer. The trw.train.Trainer should have optimizers_fn set to None.

_generate_latent(self, nb_samples)
static _merge_generator_discriminator_outputs(generator_outputs, discriminator_real_outputs, discriminator_fake_outputs)
forward(self, batch)
class trw.layers.GanDataPool(pool_size, replacement_probability=0.5, insertion_probability=0.1)
get_data(self, batch, images_fake)
class trw.layers.EncoderDecoderResnet(dimensionality: int, input_channels: int, output_channels: int, encoding_channels: Sequence[int], decoding_channels: Sequence[int], *, nb_residual_blocks: int = 9, convolution_kernel: int = 3, encoding_strides: trw.basic_typing.ConvStrides = 2, decoding_strides: trw.basic_typing.ConvStrides = 2, activation: Optional[trw.basic_typing.Activation] = None, encoding_block: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, decoding_block: trw.layers.blocks.ConvTransposeBlockType = BlockDeconvNormActivation, init_block=partial(BlockConvNormActivation, kernel_size=7), middle_block: Any = BlockRes, out_block=partial(BlockConvNormActivation, kernel_size=7), config: trw.layers.layer_config.LayerConfig = default_layer_config(conv_kwargs={'padding': 'same', 'bias': False, 'padding_mode': 'reflect'}, deconv_kwargs={'padding': 'same', 'bias': False}, norm_type=NormType.BatchNorm, activation=nn.ReLU))

Bases: torch.nn.Module

Base class for all neural network modules. See the full torch.nn.Module docstring reproduced under BlockConvNormActivation above.

forward(self, x: trw.basic_typing.TorchTensorNCX) → trw.basic_typing.TorchTensorNCX
forward_with_intermediate(self, x: trw.basic_typing.TorchTensorNCX) → List[trw.basic_typing.TorchTensorNCX]
class trw.layers.DeepSupervision(backbone: trw.layers.convs.ModuleWithIntermediate, input_target_shape: trw.basic_typing.ShapeCX, output_creator: OutputCreator = OutputSegmentation, output_block: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, select_outputs_fn: Callable[[Sequence[trw.basic_typing.TorchTensorNCX]], Sequence[trw.basic_typing.TorchTensorNCX]] = select_third_to_last_skip_before_last, resize_mode: typing_extensions.Literal['nearest', 'linear'] = 'linear', weighting_fn: Optional[Callable[[Sequence[trw.basic_typing.TorchTensorNCX]], Sequence[float]]] = adaptative_weighting, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None), return_intermediate: bool = False)

Bases: torch.nn.Module

Apply a deep supervision layer to help gradients flow to the top-level layers.

This is mostly used for segmentation tasks.

Example

>>> import trw
>>> backbone = trw.layers.UNetBase(dim=2, input_channels=3, channels=[2, 4, 8], output_channels=2)
>>> deep_supervision = DeepSupervision(backbone, [3, 8, 16])
>>> i = torch.zeros([1, 3, 8, 16], dtype=torch.float32)
>>> t = torch.zeros([1, 1, 8, 16], dtype=torch.long)
>>> outputs = deep_supervision(i, t)
forward(self, x: torch.Tensor, target: torch.Tensor, latent: Optional[torch.Tensor] = None) → Union[List[trw.train.outputs_trw.Output], Tuple[List[trw.train.outputs_trw.Output], List[torch.Tensor]]]
class trw.layers.BackboneDecoder(decoding_channels: Sequence[int], output_channels: int, backbone: trw.layers.convs.ModuleWithIntermediate, backbone_transverse_connections: Sequence[int], backbone_input_shape: trw.basic_typing.ShapeNCX, *, up_block_fn: trw.layers.unet_base.UpType = BlockUpResizeDeconvSkipConv, middle_block_fn: trw.layers.unet_base.MiddleType = partial(LatentConv, block=partial(BlockConvNormActivation, kernel_size=5)), output_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, latent_channels: Optional[int] = None, kernel_size: Optional[int] = 3, strides: Union[int, Sequence[int]] = 2, activation: Optional[Any] = None, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

U-net-like model with a backbone used as the encoder.

Examples

>>> import trw
>>> encoder = trw.layers.convs_3d(1, channels=[64, 128, 256])
>>> segmenter = trw.layers.BackboneDecoder([256, 128, 64], 3, encoder, [0, 1, 2], [1, 1, 64, 64, 64])
forward_with_intermediate(self, x: torch.Tensor, latent: Optional[torch.Tensor] = None, **kwargs) → List[torch.Tensor]
forward(self, x: torch.Tensor, latent: Optional[torch.Tensor] = None) → torch.Tensor
Parameters
  • x – the input image

  • latent – a latent variable appended by the middle block

class trw.layers.EfficientNet(dimensionality: int, input_channels: int, output_channels: int, *, w_factor: float = 1, d_factor: float = 1, activation: Optional[trw.basic_typing.ModuleCreator] = Swish, base_widths=((32, 16), (16, 24), (24, 40), (40, 80), (80, 112), (112, 192), (192, 320), (320, 1280)), base_depths=(1, 2, 2, 3, 3, 4, 1), kernel_sizes=(3, 3, 5, 3, 5, 5, 3), strides=(1, 2, 2, 2, 1, 2, 1), config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Generic EfficientNet that takes in the width and depth scale factors and scales accordingly.

With default settings, it operates on 224x224 images.

References

[1] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks https://arxiv.org/abs/1905.11946

forward_with_intermediate(self, x: torch.Tensor, **kwargs) → Sequence[torch.Tensor]
feature_extractor(self, x: torch.Tensor) → torch.Tensor
forward(self, x: torch.Tensor) → torch.Tensor
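
A minimal usage sketch: an EfficientNet-B0-like configuration (width and depth factors of 1) on 224x224 RGB input.

import torch
import trw

model = trw.layers.EfficientNet(dimensionality=2, input_channels=3, output_channels=1000)
logits = model(torch.zeros([1, 3, 224, 224]))
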
class trw.layers.MBConvN(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, expansion_factor: int, kernel_size: Optional[trw.basic_typing.KernelSize] = 3, stride: Optional[trw.basic_typing.Stride] = None, r: int = 24, p: float = 0)

Bases: torch.nn.Module

MBConv with an expansion factor of N, plus squeeze-and-excitation

References

[1] “Searching for MobileNetV3”, https://arxiv.org/pdf/1905.02244.pdf

forward(self, x)
class trw.layers.PreActResNet(dimensionality: int, input_channels: int, output_channels: Optional[int], *, block=BlockResPreAct, num_blocks: Sequence[int] = (2, 2, 2, 2), strides: Sequence[trw.basic_typing.Stride] = (1, 2, 2, 2), channels: Sequence[int] = (64, 128, 256, 512), init_block_fn=partial(BlockConvNormActivation, kernel_size=3, stride=1, bias=False), output_block_fn=BlockPoolClassifier, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None, pool_type=PoolType.AvgPool))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Pre-activation Resnet model

Examples

>>> pre_act_resnet18 = PreActResNet(2, 3, 10)
>>> c = pre_act_resnet18(torch.zeros(10, 3, 32, 32))

References

[1] https://arxiv.org/pdf/1603.05027.pdf

Notes

The default pooling kernel has been adapted to fit the CIFAR10 image size rather than ImageNet (kernel size=4 instead of 7)

_make_layer(self, config, block, planes, num_blocks, stride)
forward_with_intermediate(self, x: torch.Tensor, **kwargs) → List[torch.Tensor]
forward(self, x)
trw.layers.PreActResNet18
trw.layers.PreActResNet34
trw.layers.UNetAttention
class trw.layers.BlockNonLocal(config: trw.layers.layer_config.LayerConfig, input_channels: int, intermediate_channels: int, f_mapping_fn: Callable[[trw.layers.layer_config.LayerConfig, int, int], torch.nn.Module] = identity, g_mapping_fn: Callable[[trw.layers.layer_config.LayerConfig, int, int], torch.nn.Module] = identity, w_mapping_fn: Callable[[trw.layers.layer_config.LayerConfig, int, int], torch.nn.Module] = linear_embedding, normalize_output_fn: torch.nn.Module = nn.Softmax(dim=-1))

Bases: torch.nn.Module

Non-local block implementation of [1]

Defaults to the dot product of the features at each location, with a softmax layer normalizing the attention mask.

[1] https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Non-Local_Neural_Networks_CVPR_2018_paper.pdf

Supports n-d input data.

forward(self, x: trw.basic_typing.TorchTensorNCX, return_non_local_map: bool = False)
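
A minimal usage sketch on a 2D feature map:

import torch
import trw

config = trw.layers.default_layer_config(dimensionality=2)
block = trw.layers.BlockNonLocal(config, input_channels=32, intermediate_channels=16)
x = torch.zeros([2, 32, 16, 16])
y = block(x)  # same shape as the input
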
trw.layers.linear_embedding(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int) torch.nn.Module