trw.layers.layer_config

Module Contents

Classes

NormType

Representation of the normalization layer

PoolType

Representation of the pooling layer

DropoutType

Representation of the dropout types

LayerConfig

Generic configuration of the layers_legacy layers

Functions

create_dropout_fn(ops: trw.layers.ops_conversion.OpsConversion, dropout: Optional[DropoutType]) → Optional[trw.basic_typing.ModuleCreator]

Create the dropout function from the ops and dropout type

create_pool_fn(ops: trw.layers.ops_conversion.OpsConversion, pool: Optional[PoolType]) → Optional[trw.basic_typing.ModuleCreator]

Create the pooling function from the ops and pool type

create_norm_fn(ops: trw.layers.ops_conversion.OpsConversion, norm: Optional[NormType]) → Optional[trw.basic_typing.ModuleCreator]

Create the norm function from the ops and norm type

default_layer_config(dimensionality: Optional[int] = None, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'}) → LayerConfig

Default layer configuration

class trw.layers.layer_config.NormType

Bases: enum.Enum

Representation of the normalization layer

BatchNorm = BatchNorm
InstanceNorm = InstanceNorm
GroupNorm = GroupNorm
SyncBatchNorm = SyncBatchNorm
LocalResponseNorm = LocalResponseNorm
class trw.layers.layer_config.PoolType

Bases: enum.Enum

Representation of the pooling layer

MaxPool = MaxPool
AvgPool = AvgPool
FractionalMaxPool = FractionalMaxPool
AdaptiveMaxPool = AdaptiveMaxPool
AdaptiveAvgPool = AdaptiveAvgPool
class trw.layers.layer_config.DropoutType

Bases: enum.Enum

Representation of the dropout types

Dropout1d = Dropout1d
Dropout = Dropout
AlphaDropout = AlphaDropout
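The three enums above select which concrete layer a network builder will instantiate for a given dimensionality. As an illustrative sketch (not trw's actual code), an enum member can be mapped to a dimension-specific constructor name, the way `OpsConversion` resolves 1D/2D/3D variants; the `_NORM_OPS` table and `norm_name` helper below are hypothetical:

```python
from enum import Enum

class NormType(Enum):
    """Mirror of trw's NormType (illustrative only)."""
    BatchNorm = "BatchNorm"
    InstanceNorm = "InstanceNorm"
    GroupNorm = "GroupNorm"

# Hypothetical mapping from enum member to a constructor name, indexed
# by spatial dimensionality (1D/2D/3D), analogous to what an
# OpsConversion object might provide.
_NORM_OPS = {
    NormType.BatchNorm: {1: "BatchNorm1d", 2: "BatchNorm2d", 3: "BatchNorm3d"},
    NormType.InstanceNorm: {1: "InstanceNorm1d", 2: "InstanceNorm2d", 3: "InstanceNorm3d"},
}

def norm_name(norm: NormType, dim: int) -> str:
    # Resolve the enum member and dimensionality to a constructor name.
    return _NORM_OPS[norm][dim]
```

For example, `norm_name(NormType.BatchNorm, 2)` resolves to `"BatchNorm2d"`.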
trw.layers.layer_config.create_dropout_fn(ops: trw.layers.ops_conversion.OpsConversion, dropout: Optional[DropoutType]) → Optional[trw.basic_typing.ModuleCreator]

Create the dropout function from the ops and dropout type

Parameters
  • ops – the operations to be used

  • dropout – the dropout type to create

Returns

a dropout layer

trw.layers.layer_config.create_pool_fn(ops: trw.layers.ops_conversion.OpsConversion, pool: Optional[PoolType]) → Optional[trw.basic_typing.ModuleCreator]

Create the pooling function from the ops and pool type

Parameters
  • ops – the operations to be used

  • pool – the pool type to create

Returns

a pooling layer

trw.layers.layer_config.create_norm_fn(ops: trw.layers.ops_conversion.OpsConversion, norm: Optional[NormType]) → Optional[trw.basic_typing.ModuleCreator]

Create the norm function from the ops and norm type

Parameters
  • ops – the operations to be used

  • norm – the norm type to create

Returns

a normalization layer
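The three `create_*_fn` helpers share one pattern: when the requested type is `None` they return `None`, otherwise they resolve the enum through the ops conversion into a module creator. A minimal stdlib-only sketch of that pattern, under the assumption that this is how the dispatch works (`FakeOps` and the string return values are illustrative stand-ins, not trw's API):

```python
from enum import Enum
from functools import partial
from typing import Callable, Optional

class NormType(Enum):
    BatchNorm = "BatchNorm"
    InstanceNorm = "InstanceNorm"

class FakeOps:
    """Illustrative stand-in for trw's OpsConversion: resolves an enum
    member to a dimension-specific constructor. The real class would
    resolve to torch.nn modules; here we produce strings instead."""
    def __init__(self, dim: int):
        self.dim = dim

    def norm_constructor(self, norm: NormType) -> Callable[..., str]:
        # e.g. BatchNorm with dim=2 -> callable producing 'BatchNorm2d(...)'
        return partial("{}{}d(features={})".format, norm.value, self.dim)

def create_norm_fn(ops: FakeOps, norm: Optional[NormType]) -> Optional[Callable[..., str]]:
    if norm is None:
        return None  # no normalization requested -> no creator
    return ops.norm_constructor(norm)
```

Usage: `create_norm_fn(FakeOps(2), NormType.BatchNorm)(16)` yields `"BatchNorm2d(features=16)"`, while passing `norm=None` yields `None`.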

class trw.layers.layer_config.LayerConfig(ops: trw.layers.ops_conversion.OpsConversion, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'})

Generic configuration of the layers_legacy layers

set_dim(self, dimensionality: int)
trw.layers.layer_config.default_layer_config(dimensionality: Optional[int] = None, norm_type: Optional[NormType] = NormType.BatchNorm, norm_kwargs: Dict = {}, pool_type: Optional[PoolType] = PoolType.MaxPool, pool_kwargs: Dict = {}, activation: Optional[Any] = nn.ReLU, activation_kwargs: Dict = {}, dropout_type: Optional[DropoutType] = DropoutType.Dropout1d, dropout_kwargs: Dict = {}, conv_kwargs: Dict = {'padding': 'same'}, deconv_kwargs: Dict = {'padding': 'same'}) → LayerConfig

Default layer configuration

Parameters
  • dimensionality – the number of dimensions of the input (without the N and C components)

  • norm_type – the type of normalization

  • norm_kwargs – additional normalization parameters

  • pool_type – the type of pooling

  • pool_kwargs – additional parameters for the pooling layers

  • activation – the activation

  • activation_kwargs – additional activation parameters

  • dropout_type – the type of dropout

  • dropout_kwargs – additional dropout parameters

  • conv_kwargs – additional parameters for the convolutional layer

  • deconv_kwargs – additional arguments for the transposed convolutional layer
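Putting the pieces together, default_layer_config builds a LayerConfig whose dimensionality may be left unset at construction time and bound later with set_dim. A toy sketch of that deferred-binding pattern (LayerConfigSketch and default_layer_config_sketch are hypothetical names standing in for trw's classes, not its implementation):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class LayerConfigSketch:
    """Toy analogue of trw's LayerConfig: stores layer choices and their
    kwargs; the dimensionality can be bound after construction."""
    dimensionality: Optional[int] = None
    norm_kwargs: Dict[str, Any] = field(default_factory=dict)
    conv_kwargs: Dict[str, Any] = field(default_factory=lambda: {"padding": "same"})

    def set_dim(self, dimensionality: int) -> None:
        # In trw this would also rebuild the ops conversion (selecting the
        # 1D/2D/3D module variants); here we only record the value.
        self.dimensionality = dimensionality

def default_layer_config_sketch(dimensionality: Optional[int] = None,
                                norm_kwargs: Optional[Dict[str, Any]] = None) -> LayerConfigSketch:
    # Build the config with defaults; bind the dimensionality only if given.
    config = LayerConfigSketch(norm_kwargs=norm_kwargs or {})
    if dimensionality is not None:
        config.set_dim(dimensionality)
    return config
```

Deferring the dimensionality lets a single configuration be reused across 1D, 2D, and 3D networks, with the builder calling set_dim once the input shape is known.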