trw.layers.blocks¶
Module Contents¶
Classes¶

BlockPool
    Base class for all neural network modules.

BlockConv
    Base class for all neural network modules.

BlockConvNormActivation
    Base class for all neural network modules.

BlockDeconvNormActivation
    Base class for all neural network modules.

BlockUpsampleNnConvNormActivation
    The standard approach of producing images with deconvolution has issues that lead to checkerboard artifacts; this block uses nearest neighbor upsampling + convolution instead.

BlockMerge
    Merge multiple layers (e.g., concatenate, sum...)

BlockUpDeconvSkipConv
    Base class for all neural network modules.

ConvTransposeBlockType
    Base class for protocol classes.

ConvBlockType
    Base class for protocol classes.

BlockSqueezeExcite
    Squeeze-and-excitation block

BlockRes
    Original Residual block design

BlockResPreAct
    Pre-activation residual block

BlockPoolClassifier
    Base class for all neural network modules.
Functions¶

_posprocess_padding
    Note: conv_kwargs will be modified in-place. Make a copy beforehand!
- class trw.layers.blocks.BlockPool(config: trw.layers.layer_config.LayerConfig, kernel_size: Optional[trw.basic_typing.KernelSize] = 2)¶
Bases:
torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will be converted too when you call to(), etc.
- Variables
    training (bool) – Boolean representing whether this module is in training or evaluation mode.
- forward(self, x: torch.Tensor) → torch.Tensor¶
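For orientation, a minimal usage sketch. The default_layer_config factory is an assumption here (it may be named or parameterized differently in your version of trw); the config decides which concrete operators (2D vs. 3D) the block instantiates:

    import torch
    from trw.layers.layer_config import default_layer_config  # assumed factory
    from trw.layers.blocks import BlockPool

    # 2D config: the block resolves to 2D pooling operators
    config = default_layer_config(dimensionality=2)
    pool = BlockPool(config, kernel_size=2)

    x = torch.randn(4, 16, 32, 32)  # N, C, H, W
    y = pool(x)                     # spatial size halved: (4, 16, 16, 16)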
- trw.layers.blocks._posprocess_padding(config: trw.layers.layer_config.LayerConfig, conv_kwargs: Dict, ops: List[torch.nn.Module]) → None¶
Note
conv_kwargs will be modified in-place. Make a copy beforehand!
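Because the helper mutates conv_kwargs in place (and presumably appends padding modules to ops), callers that want to keep their original dictionary should hand over a copy. An illustrative sketch, with config and ops as in the signature above and purely illustrative argument values:

    conv_kwargs = {'kernel_size': 3, 'stride': 1}
    ops = []  # receives any modules the helper adds
    # pass a shallow copy so the original dictionary survives the edits
    _posprocess_padding(config, dict(conv_kwargs), ops)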
- class trw.layers.blocks.BlockConv(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None, groups: int = 1, bias: Optional[bool] = None)¶
Bases:
torch.nn.Module
Base class for all neural network modules; see BlockPool above for the full inherited torch.nn.Module documentation.
- forward(self, x: torch.Tensor) → torch.Tensor¶
- class trw.layers.blocks.BlockConvNormActivation(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None, groups: int = 1, bias: Optional[bool] = None)¶
Bases:
torch.nn.Module
Base class for all neural network modules; see BlockPool above for the full inherited torch.nn.Module documentation.
- forward(self, x: torch.Tensor) → torch.Tensor¶
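A hedged usage sketch, again assuming the default_layer_config factory; the block chains convolution, normalization, and activation as configured:

    import torch
    from trw.layers.layer_config import default_layer_config  # assumed factory
    from trw.layers.blocks import BlockConvNormActivation

    config = default_layer_config(dimensionality=2)
    block = BlockConvNormActivation(
        config, input_channels=3, output_channels=16,
        kernel_size=3, stride=1)

    x = torch.randn(2, 3, 64, 64)
    y = block(x)  # (2, 16, 64, 64), assuming 'same'-style default padding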
- class trw.layers.blocks.BlockDeconvNormActivation(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None)¶
Bases:
torch.nn.Module
Base class for all neural network modules; see BlockPool above for the full inherited torch.nn.Module documentation.
- forward(self, x: torch.Tensor) → torch.Tensor¶
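In plain PyTorch terms, the deconvolution stage of this block behaves like a strided transposed convolution that enlarges the spatial dimensions. A 2D sketch of the general idea, not trw's exact composition:

    import torch
    import torch.nn as nn

    # kernel_size=2, stride=2 doubles H and W exactly
    deconv = nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
    x = torch.randn(1, 16, 32, 32)
    y = deconv(x)  # (1, 8, 64, 64)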
- class trw.layers.blocks.BlockUpsampleNnConvNormActivation(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None)¶
Bases:
torch.nn.Module
The standard approach of producing images with deconvolution, despite its successes, has some conceptually simple issues that lead to checkerboard artifacts in produced images.
This is an alternative block using nearest neighbor upsampling + convolution.
- forward(self, x: torch.Tensor) → torch.Tensor¶
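The alternative the block implements, sketched in plain PyTorch (2D case): because every output pixel receives the same number of kernel contributions, the uneven-overlap pattern that causes checkerboard artifacts never arises:

    import torch
    import torch.nn as nn

    # nearest neighbor upsampling followed by a regular convolution
    upsample_conv = nn.Sequential(
        nn.Upsample(scale_factor=2, mode='nearest'),
        nn.Conv2d(16, 8, kernel_size=3, padding=1),
    )

    x = torch.randn(1, 16, 32, 32)
    y = upsample_conv(x)  # (1, 8, 64, 64)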
- class trw.layers.blocks.BlockMerge(config: trw.layers.layer_config.LayerConfig, layer_channels: Sequence[int], mode: typing_extensions.Literal['concatenation', 'sum'] = 'concatenation')¶
Bases:
torch.nn.Module
Merge multiple layers (e.g., concatenate, sum…)
- get_output_channels(self)¶
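What the two merge modes amount to, in plain PyTorch: with 'concatenation' the output channel count is the sum of the per-layer channel counts (presumably what get_output_channels reports in that mode), while 'sum' requires identically shaped inputs and preserves the channel count:

    import torch

    a = torch.randn(1, 8, 16, 16)
    b = torch.randn(1, 8, 16, 16)

    merged_cat = torch.cat([a, b], dim=1)  # (1, 16, 16, 16)
    merged_sum = a + b                     # (1, 8, 16, 16)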
- class trw.layers.blocks.BlockUpDeconvSkipConv(config: trw.layers.layer_config.LayerConfig, skip_channels: int, input_channels: int, output_channels: int, *, nb_repeats: int = 1, kernel_size: Optional[trw.basic_typing.KernelSize] = None, deconv_kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, deconv_block=BlockDeconvNormActivation, stride: Optional[trw.basic_typing.Stride] = None, merge_layer_fn=BlockMerge)¶
Bases:
torch.nn.Module
Base class for all neural network modules; see BlockPool above for the full inherited torch.nn.Module documentation.
- forward(self, skip: torch.Tensor, previous: torch.Tensor) → torch.Tensor¶
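A hedged decoder-level sketch (U-Net style), assuming the default_layer_config factory and that stride controls the upsampling factor of the deconvolution:

    import torch
    from trw.layers.layer_config import default_layer_config  # assumed factory
    from trw.layers.blocks import BlockUpDeconvSkipConv

    config = default_layer_config(dimensionality=2)
    up = BlockUpDeconvSkipConv(
        config,
        skip_channels=32,    # encoder feature map at this resolution
        input_channels=64,   # decoder feature map one level below
        output_channels=32,
        stride=2)            # upsample the decoder features by 2

    skip = torch.randn(2, 32, 64, 64)
    previous = torch.randn(2, 64, 32, 32)
    y = up(skip, previous)  # expected: (2, 32, 64, 64)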
- class trw.layers.blocks.ConvTransposeBlockType¶
Bases:
typing_extensions.Protocol
Base class for protocol classes. Protocol classes are defined as:

    class Proto(Protocol):
        def meth(self) -> int:
            ...

Such classes are primarily used with static type checkers that recognize structural subtyping (static duck-typing), for example:

    class C:
        def meth(self) -> int:
            return 0

    def func(x: Proto) -> int:
        return x.meth()

    func(C())  # Passes static type check

See PEP 544 for details. Protocol classes decorated with @typing_extensions.runtime act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures.
Protocol classes can also be generic:

    class GenProto(Protocol[T]):
        def meth(self) -> T:
            ...
- __call__(self, config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, output_padding: Optional[Union[int, Sequence[int]]] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None) → torch.nn.Module¶
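Because the protocol describes a call signature rather than a base class, any compatible factory can be passed where a ConvTransposeBlockType is expected; both upsampling blocks above qualify. A sketch:

    import torch.nn as nn
    from trw.layers.blocks import (
        BlockDeconvNormActivation,
        BlockUpsampleNnConvNormActivation,
        ConvTransposeBlockType,
    )

    def make_upsampler(block: ConvTransposeBlockType, config,
                       channels: int) -> nn.Module:
        # the protocol constrains only the constructor signature, so the
        # caller can pick either upsampling strategy
        return block(config, channels, channels // 2, stride=2)

    # both built-in factories satisfy the protocol (config as elsewhere):
    # make_upsampler(BlockDeconvNormActivation, config, 64)
    # make_upsampler(BlockUpsampleNnConvNormActivation, config, 64)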
- class trw.layers.blocks.ConvBlockType¶
Bases:
typing_extensions.Protocol
Base class for protocol classes; see ConvTransposeBlockType above for the full Protocol documentation.
- __call__(self, config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, stride: Optional[trw.basic_typing.Stride] = None, padding_mode: Optional[str] = None) → torch.nn.Module¶
- class trw.layers.blocks.BlockSqueezeExcite(config: trw.layers.layer_config.LayerConfig, input_channels: int, r: int = 24)¶
Bases:
torch.nn.Module
Squeeze-and-excitation block
References
[1] “Squeeze-and-Excitation Networks”, https://arxiv.org/pdf/1709.01507.pdf
- forward(self, x)¶
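A minimal 2D rendering of the technique from [1] (not trw's exact implementation): global average pooling squeezes each channel to a scalar, a bottleneck MLP with reduction ratio r produces per-channel gates, and the input is rescaled channel-wise:

    import torch
    import torch.nn as nn

    class SqueezeExcite2d(nn.Module):
        def __init__(self, channels: int, r: int = 24):
            super().__init__()
            self.fc1 = nn.Linear(channels, channels // r)
            self.fc2 = nn.Linear(channels // r, channels)

        def forward(self, x):
            n, c = x.shape[:2]
            s = x.mean(dim=(2, 3))          # squeeze: (N, C)
            s = torch.relu(self.fc1(s))     # bottleneck
            s = torch.sigmoid(self.fc2(s))  # excitation gates: (N, C)
            return x * s.view(n, c, 1, 1)   # channel-wise rescaling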
- class trw.layers.blocks.BlockRes(config: trw.layers.layer_config.LayerConfig, input_channels: int, *, kernel_size: Optional[trw.basic_typing.KernelSize] = None, padding: Optional[trw.basic_typing.Padding] = None, padding_mode: Optional[str] = None, base_block: ConvBlockType = BlockConvNormActivation)¶
Bases:
torch.nn.Module
Original Residual block design
References
[1] “Deep Residual Learning for Image Recognition”, https://arxiv.org/abs/1512.03385
- forward(self, x: trw.basic_typing.TorchTensorNCX) → trw.basic_typing.TorchTensorNCX¶
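The design from [1], sketched in 2D PyTorch (the residual branch layout is illustrative; trw composes it from base_block): two convolution stages on the branch, an identity shortcut, and the activation applied after the addition as in the paper:

    import torch
    import torch.nn as nn

    class ResBlock2d(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.branch = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            # identity shortcut requires matching input/output channels
            return torch.relu(x + self.branch(x))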
- class trw.layers.blocks.BlockResPreAct(config: trw.layers.layer_config.LayerConfig, input_channels: int, planes: int, stride: Optional[trw.basic_typing.Stride] = None, kernel_size: Optional[trw.basic_typing.KernelSize] = 3)¶
Bases:
torch.nn.Module
Pre-activation residual block
- forward(self, x)¶
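The pre-activation variant ("Identity Mappings in Deep Residual Networks", https://arxiv.org/abs/1603.05027) moves normalization and activation before each convolution and keeps the shortcut clean. A 2D sketch of the ordering, with stride and channel-projection details omitted:

    import torch.nn as nn

    class PreActBlock2d(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.branch = nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            )

        def forward(self, x):
            return x + self.branch(x)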
- class trw.layers.blocks.BlockPoolClassifier(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, pooling_kernel=4)¶
Bases:
torch.nn.Module
Base class for all neural network modules; see BlockPool above for the full inherited torch.nn.Module documentation.
- forward(self, x: torch.Tensor) → torch.Tensor¶
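The general pool-then-classify pattern this block packages, in plain PyTorch (trw's version is driven by config and pooling_kernel and may differ in detail):

    import torch
    import torch.nn as nn

    head = nn.Sequential(
        nn.AvgPool2d(kernel_size=4),  # analogous to pooling_kernel=4
        nn.Flatten(),
        nn.Linear(64 * 4 * 4, 10),    # 16x16 maps pooled down to 4x4
    )

    x = torch.randn(8, 64, 16, 16)  # backbone feature maps
    logits = head(x)                # (8, 10)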