trw.layers.encoder_decoder_resnet¶
Module Contents¶
Classes¶
EncoderDecoderResnet: Base class for all neural network modules.
- class trw.layers.encoder_decoder_resnet.EncoderDecoderResnet(
      dimensionality: int,
      input_channels: int,
      output_channels: int,
      encoding_channels: Sequence[int],
      decoding_channels: Sequence[int],
      *,
      nb_residual_blocks: int = 9,
      convolution_kernel: int = 3,
      encoding_strides: trw.basic_typing.ConvStrides = 2,
      decoding_strides: trw.basic_typing.ConvStrides = 2,
      activation: Optional[trw.basic_typing.Activation] = None,
      encoding_block: trw.layers.blocks.ConvBlockType = BlockConvNormActivation,
      decoding_block: trw.layers.blocks.ConvTransposeBlockType = BlockDeconvNormActivation,
      init_block=partial(BlockConvNormActivation, kernel_size=7),
      middle_block: Any = BlockRes,
      out_block=partial(BlockConvNormActivation, kernel_size=7),
      config: trw.layers.layer_config.LayerConfig = default_layer_config(conv_kwargs={'padding': 'same', 'bias': False, 'padding_mode': 'reflect'}, deconv_kwargs={'padding': 'same', 'bias': False}, norm_type=NormType.BatchNorm, activation=nn.ReLU))¶
Bases: torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
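As a hedged illustration, the example module above can be instantiated and run on a dummy batch; the 28x28 input size is arbitrary (each valid 5x5 convolution trims 4 pixels per spatial dimension):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)   # 1 -> 20 channels, 5x5 kernel, no padding
        self.conv2 = nn.Conv2d(20, 20, 5)  # 20 -> 20 channels, 5x5 kernel, no padding

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

model = Model()
x = torch.randn(1, 1, 28, 28)  # NCHW dummy input
y = model(x)                   # 28 -> 24 -> 20 spatially after two valid 5x5 convs
print(tuple(y.shape))          # -> (1, 20, 20, 20)
```

Because both submodules are assigned as attributes, `model.parameters()` yields the weights of both convolutions, and `model.to(device)` moves them together.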
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

- Variables
  training (bool) – Boolean representing whether this module is in training or evaluation mode.
- forward(self, x: trw.basic_typing.TorchTensorNCX) → trw.basic_typing.TorchTensorNCX¶
- forward_with_intermediate(self, x: trw.basic_typing.TorchTensorNCX) → List[trw.basic_typing.TorchTensorNCX]¶