trw.layers.convs_2d
Module Contents
Functions
- trw.layers.convs_2d.convs_2d(input_channels: int, channels: Sequence[int], convolution_kernels: trw.basic_typing.ConvKernels = 5, strides: trw.basic_typing.ConvStrides = 1, pooling_size: Optional[trw.basic_typing.PoolingSizes] = 2, convolution_repeats: Union[int, Sequence[int]] = 1, activation: trw.basic_typing.Activation = nn.ReLU, padding: trw.basic_typing.Paddings = 'same', with_flatten: bool = False, dropout_probability: Optional[float] = None, norm_type: Optional[trw.layers.layer_config.NormType] = None, norm_kwargs: Dict[str, Any] = {}, pool_kwargs: Dict[str, Any] = {}, last_layer_is_output: bool = False, conv_block_fn: trw.layers.blocks.ConvBlockType = BlockConvNormActivation, config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))
- Parameters
input_channels – the number of input channels
channels – the number of output channels for each convolution group
convolution_kernels – for each convolution group, the kernel of the convolution
strides – for each convolution group, the stride of the convolution
pooling_size – the pooling size to be inserted after each convolution group
convolution_repeats – the number of repeated convolutions within each group
activation – the activation function
with_flatten – if True, the last output will be flattened
dropout_probability – if None, no dropout. Else the probability of dropout after each convolution
padding – ‘same’ will add padding so that the convolution output has the same size as the input
last_layer_is_output – if True, the last convolution will NOT have activation, dropout, batch norm or LRN
norm_type – the normalization layer (e.g., BatchNorm)
norm_kwargs – additional arguments for normalization
pool_kwargs – additional arguments for pooling
conv_block_fn – the convolutional block used to build each convolution group
config – defines the allowed operations
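Several arguments (convolution_kernels, strides, convolution_repeats) accept either a scalar, applied to every convolution group defined by channels, or a per-group sequence. A minimal, dependency-free sketch of that broadcasting rule (broadcast_per_group is a hypothetical helper for illustration, not part of the trw API):

```python
from typing import List, Sequence, Union

def broadcast_per_group(value: Union[int, Sequence[int]], n_groups: int) -> List[int]:
    """Expand a scalar setting to one value per convolution group,
    or validate that a sequence already has one value per group."""
    if isinstance(value, int):
        return [value] * n_groups
    values = list(value)
    assert len(values) == n_groups, 'expected one value per convolution group'
    return values

# channels=[16, 32, 64] defines 3 convolution groups
kernels = broadcast_per_group(5, 3)          # scalar kernel -> [5, 5, 5]
strides = broadcast_per_group([1, 2, 2], 3)  # already per-group -> [1, 2, 2]
```

For example, with channels=[16, 32, 64] and convolution_kernels=5, each of the three groups uses a 5x5 kernel, while strides=[1, 2, 2] assigns a different stride to each group.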