trw.layers.efficient_net

Module Contents

Classes

DropSample

Drops each sample in x with probability p during training

MBConvN

MBConv with an expansion factor of N, plus squeeze-and-excitation

EfficientNet

Generic EfficientNet that takes in the width and depth scale factors and scales accordingly.

Functions

create_stage(config, input_channels, output_channels, num_layers, layer_type=MBConv6, kernel_size=3, stride=1, r=24, p=0)

Creates a torch.nn.Sequential consisting of num_layers layers of type layer_type

scale_width(w, w_factor)

Scales a width w by the scale factor w_factor

Attributes

MBConv1

MBConv6

EfficientNetB0

EfficientNetB1

EfficientNetB2

EfficientNetB3

EfficientNetB5

EfficientNetB6

EfficientNetB7

class trw.layers.efficient_net.DropSample(p: float = 0)

Bases: torch.nn.Module

Drops each sample in x with probability p during training

forward(self, x: trw.basic_typing.TorchTensorNCX) → trw.basic_typing.TorchTensorNCX
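
A minimal usage sketch (not from the library docs); it assumes the layer zeroes dropped samples and keeps the input shape, which follows from the summary above but is not spelled out by the API:

    import torch
    from trw.layers.efficient_net import DropSample

    # Drop each sample of the batch with 20% probability; only active in train mode.
    drop = DropSample(p=0.2)
    drop.train()

    x = torch.randn(8, 16, 32, 32)  # NCX layout: 8 samples, 16 channels, 32x32
    y = drop(x)                     # same shape; some samples zeroed at random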
class trw.layers.efficient_net.MBConvN(config: trw.layers.layer_config.LayerConfig, input_channels: int, output_channels: int, *, expansion_factor: int, kernel_size: Optional[trw.basic_typing.KernelSize] = 3, stride: Optional[trw.basic_typing.Stride] = None, r: int = 24, p: float = 0)

Bases: torch.nn.Module

MBConv with an expansion factor of N, plus squeeze-and-excitation

References

[1] “Searching for MobileNetV3”, https://arxiv.org/pdf/1905.02244.pdf

forward(self, x)
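
A hedged construction sketch; the 2D setting, the channel counts, and the reading of r as the squeeze-and-excitation reduction and p as the sample drop probability are assumptions inferred from the summary, not confirmed by the signature:

    import torch
    from trw.layers.layer_config import default_layer_config
    from trw.layers.efficient_net import MBConvN

    config = default_layer_config(dimensionality=2)

    # MBConv block: expand 16 channels by a factor of 6 internally, project to 24.
    block = MBConvN(
        config,
        input_channels=16,
        output_channels=24,
        expansion_factor=6,
        kernel_size=3,
        stride=1,
        r=24,   # assumed: squeeze-and-excitation reduction
        p=0.1,  # assumed: DropSample probability
    )

    y = block(torch.randn(2, 16, 32, 32))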
trw.layers.efficient_net.MBConv1
trw.layers.efficient_net.MBConv6
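
MBConv1 and MBConv6 are plausibly MBConvN with the expansion factor pre-bound to 1 and 6, the two variants used by MobileNet/EfficientNet; a sketch of the assumed definitions (this mirrors, not quotes, the library source):

    from functools import partial

    from trw.layers.efficient_net import MBConvN

    # Assumed definitions of the two module attributes.
    MBConv1 = partial(MBConvN, expansion_factor=1)
    MBConv6 = partial(MBConvN, expansion_factor=6)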
trw.layers.efficient_net.create_stage(config, input_channels, output_channels, num_layers, layer_type=MBConv6, kernel_size=3, stride=1, r=24, p=0)

Creates a torch.nn.Sequential consisting of num_layers layers of type layer_type
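
A usage sketch based on the signature above; the channel counts, stride, and layer count are illustrative assumptions:

    from trw.layers.layer_config import default_layer_config
    from trw.layers.efficient_net import MBConv6, create_stage

    config = default_layer_config(dimensionality=2)

    # A stage of 3 MBConv6 blocks going from 24 to 40 channels, downsampling once.
    stage = create_stage(
        config,
        input_channels=24,
        output_channels=40,
        num_layers=3,
        layer_type=MBConv6,
        kernel_size=5,
        stride=2,
    )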

trw.layers.efficient_net.scale_width(w, w_factor)

Scales a width w by the scale factor w_factor
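
A trivial call sketch; note that reference EfficientNet implementations round scaled widths (e.g. to a multiple of 8), and the docstring does not say which policy trw uses, so the exact return value is not guaranteed:

    from trw.layers.efficient_net import scale_width

    # Nominal scaling: 32 * 1.2 = 38.4, subject to the library's rounding policy.
    w = scale_width(32, w_factor=1.2)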

class trw.layers.efficient_net.EfficientNet(dimensionality: int, input_channels: int, output_channels: int, *, w_factor: float = 1, d_factor: float = 1, activation: Optional[trw.basic_typing.ModuleCreator] = Swish, base_widths=((32, 16), (16, 24), (24, 40), (40, 80), (80, 112), (112, 192), (192, 320), (320, 1280)), base_depths=(1, 2, 2, 3, 3, 4, 1), kernel_sizes=(3, 3, 5, 3, 5, 5, 3), strides=(1, 2, 2, 2, 1, 2, 1), config: trw.layers.layer_config.LayerConfig = default_layer_config(dimensionality=None))

Bases: torch.nn.Module, trw.layers.convs.ModuleWithIntermediate

Generic EfficientNet that takes in the width and depth scale factors and scales accordingly.

With default settings, it operates on 224x224 images.

References

[1] “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, https://arxiv.org/abs/1905.11946

forward_with_intermediate(self, x: torch.Tensor, **kwargs) → Sequence[torch.Tensor]
feature_extractor(self, x: torch.Tensor) → torch.Tensor
forward(self, x: torch.Tensor) → torch.Tensor
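
A minimal end-to-end sketch for a 2D classification setup; with w_factor and d_factor left at their defaults of 1 this corresponds to a B0-sized model on 224x224 inputs:

    import torch
    from trw.layers.efficient_net import EfficientNet

    model = EfficientNet(
        dimensionality=2,
        input_channels=3,
        output_channels=1000,
    )

    x = torch.randn(1, 3, 224, 224)
    logits = model(x)

    # Per-stage feature maps, e.g. for a feature pyramid.
    intermediates = model.forward_with_intermediate(x)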
trw.layers.efficient_net.EfficientNetB0
trw.layers.efficient_net.EfficientNetB1
trw.layers.efficient_net.EfficientNetB2
trw.layers.efficient_net.EfficientNetB3
trw.layers.efficient_net.EfficientNetB5
trw.layers.efficient_net.EfficientNetB6
trw.layers.efficient_net.EfficientNetB7
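
These attributes are presumably EfficientNet constructors with the per-variant (w_factor, d_factor) pairs from the EfficientNet paper pre-bound; assuming they are callable that way, usage would look like:

    from trw.layers.efficient_net import EfficientNetB0

    # The task-specific arguments are still supplied by the caller (assumed).
    model = EfficientNetB0(
        dimensionality=2,
        input_channels=3,
        output_channels=1000,
    )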