trw.layers.denses

Module Contents

Functions

denses(sizes: Sequence[int], dropout_probability: Optional[float] = None, activation: Any = nn.ReLU, normalization_type: Optional[trw.layers.layer_config.NormType] = NormType.BatchNorm, last_layer_is_output: bool = False, with_flatten: bool = True, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None)) → torch.nn.Module


trw.layers.denses.denses(sizes: Sequence[int], dropout_probability: Optional[float] = None, activation: Any = nn.ReLU, normalization_type: Optional[trw.layers.layer_config.NormType] = NormType.BatchNorm, last_layer_is_output: bool = False, with_flatten: bool = True, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None)) → torch.nn.Module
Parameters
  • sizes – the sizes of the linear layers. The format is [linear1_input, linear1_output, …, linearN_output]

  • dropout_probability – the probability of the dropout layers. If None, no dropout layers are added.

  • activation – the activation function to be used

  • normalization_type – the normalization to be used between dense layers. If None, no normalization is added.

  • last_layer_is_output – set this to True if the last dense layer is an output layer. In that case, no normalization, dropout or activation is added after the final nn.Linear.

  • with_flatten – if True, the input will be flattened

  • config – defines the available operations

Returns

an nn.Module
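As an illustration of what this function is documented to assemble, the sketch below builds the equivalent stack by hand with plain PyTorch for sizes=[784, 256, 10]: a Flatten (with_flatten=True), Linear blocks separated by batch normalization and ReLU, and a bare final Linear (last_layer_is_output=True). This is a hedged approximation of the documented behaviour, not the trw implementation itself; the exact ordering of dropout/normalization inside trw may differ.

```python
import torch
import torch.nn as nn

# Hand-built equivalent of denses([784, 256, 10], last_layer_is_output=True):
# Flatten, then Linear -> BatchNorm -> ReLU per hidden block, and a bare
# Linear as the output layer (no norm, dropout or activation after it).
model = nn.Sequential(
    nn.Flatten(),            # with_flatten=True
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),     # normalization_type=NormType.BatchNorm
    nn.ReLU(),               # activation=nn.ReLU
    nn.Linear(256, 10),      # last_layer_is_output=True: no trailing ops
)

x = torch.randn(32, 1, 28, 28)   # e.g. a batch of MNIST-like images
y = model(x)
print(tuple(y.shape))            # (32, 10)
```

With dropout_probability set, an nn.Dropout would additionally be inserted in each hidden block; with last_layer_is_output=False, the final Linear would also be followed by normalization and activation.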