trw.layers.autoencoder_convolutional

Module Contents

Classes

AutoencoderConvolutional

Convolutional autoencoder

class trw.layers.autoencoder_convolutional.AutoencoderConvolutional(
    dimensionality: int,
    input_channels: int,
    encoder_channels: Sequence[int],
    decoder_channels: Sequence[int],
    convolution_kernels: trw.basic_typing.ConvKernels = 5,
    encoder_strides: Union[trw.basic_typing.ConvStrides] = 1,
    decoder_strides: Union[trw.basic_typing.ConvStrides] = 2,
    pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2,
    convolution_repeats: Union[int, Sequence[int]] = 1,
    activation: Optional[trw.basic_typing.Activation] = nn.ReLU,
    dropout_probability: Optional[float] = None,
    norm_type: trw.layers.layer_config.NormType = NormType.BatchNorm,
    norm_kwargs: Dict = {},
    activation_kwargs: Dict = {},
    last_layer_is_output: bool = False,
    force_decoded_size_same_as_input: bool = True,
    squash_function: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
    config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Convolutional autoencoder

Examples

Create an encoder taking 1 channel with [4, 8, 16] filters and a decoder taking as input 16 channels of 4x4 with [8, 4, 1] filters:

>>> model = AutoencoderConvolutional(2, 1, [4, 8, 16], [8, 4, 1])
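
A slightly fuller sketch of constructing and applying the model (the batch size and the 32x32 spatial size below are illustrative assumptions, not part of the API):

>>> import torch
>>> model = AutoencoderConvolutional(2, 1, [4, 8, 16], [8, 4, 1])
>>> x = torch.randn(10, 1, 32, 32)  # batch of 10 single-channel 32x32 images
>>> reconstruction = model(x)       # forward() returns the decoded tensor
>>> # with force_decoded_size_same_as_input=True (the default), the decoded
>>> # output is expected to match the spatial size of the input
>>> assert reconstruction.shape == x.shape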

forward_simple(self, x: torch.Tensor) -> torch.Tensor
forward_with_intermediate(self, x: torch.Tensor, **kwargs) -> Tuple[torch.Tensor, torch.Tensor]
forward(self, x: torch.Tensor) -> torch.Tensor
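
A minimal reconstruction-loss sketch using forward_with_intermediate, reusing model and x from the example above; judging from the declared return type, the two returned tensors are assumed to be the encoder output (the latent feature maps) and the decoded reconstruction:

>>> import torch.nn.functional as F
>>> encoded, decoded = model.forward_with_intermediate(x)  # assumed (latent, reconstruction)
>>> loss = F.mse_loss(decoded, x)  # standard autoencoder reconstruction loss
>>> loss.backward()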