trw.transforms

This module is dedicated to data augmentations. In particular, we strive to provide both a numpy and a pytorch implementation for each augmentation so that it can also be performed on the GPU.

Transforms are designed to work for n-dimensional data.
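
For example, a transform is applied directly to a batch (a dictionary mapping feature names to tensors). A minimal sketch, assuming an images feature of shape [N, C, H, W] (names, shapes and values are illustrative):

```
import numpy as np
import trw.transforms

# a batch is a dictionary of feature name -> tensor; samples are stored on axis 0
batch = {'images': np.random.rand(4, 1, 32, 32).astype(np.float32)}

# flip axis 3 (width) of each sample with probability 0.5
transform = trw.transforms.TransformRandomFlip(axis=3)
batch = transform(batch)
```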

Submodules

Package Contents

Classes

SpatialInfo

Represent the geometric space of an n-dimensional (2D or 3D) volume.

Transform

Abstraction of a batch transform

TransformBatchWithCriteria

Helper to apply a given transform function on features that satisfy a criterion

TransformRandomCropPad

Add padding to a numpy array of samples and randomly crop back to the original size

TransformRandomFlip

Randomly flip the axis of selected features

TransformRandomCutout

Randomly cut out (occlude) a region of the selected features

TransformResize

Resize a tensor to a fixed size

TransformNormalizeIntensity

Normalize a tensor image with mean and standard deviation.

TransformCompose

Sequentially apply a list of transformations

TransformAffine

Transform an image using a random affine (2D or 3D) transformation.

TransformCast

Cast tensors to a specified type.

TransformRandomCropResize

Randomly crop a tensor and resize to its original shape.

TransformResizeModuloCropPad

Resize tensors by padding or cropping so that their shape is a multiple of multiple_of.

TransformResample

Resample a tensor with spatial information (e.g., a 3D volume with origin and spacing)

TransformOneOf

Randomly select a transform among a set of transforms and apply it

TransformRandomDeformation

Transform an image using a random deformation field.

TransformSqueeze

Squeeze a dimension of a tensor (i.e., remove one dimension of size 1 at a specified axis)

TransformUnsqueeze

Unsqueeze a dimension of a tensor.

TransformMoveToDevice

Move a tensor to a specified device.

Functions

transform_batch_random_crop(array: trw.basic_typing.TensorNCX, crop_shape: Sequence[Union[int, None]], offsets: Sequence[Sequence[int]] = None, return_offsets: bool = False) → Union[trw.basic_typing.TensorNCX, Tuple[trw.basic_typing.TensorNCX, Sequence[Sequence[int]]]]

Randomly crop a numpy array of samples given a target size. This works for an arbitrary number of dimensions

batch_crop(images: trw.basic_typing.TensorNCX, min_index: Sequence[int], max_index_exclusive: Sequence[int]) → trw.basic_typing.TensorNCX

Crop an image

batch_pad_numpy(array: trw.basic_typing.NumpyTensorNCX, padding: trw.basic_typing.ShapeCX, mode: str = 'edge', constant_value: trw.basic_typing.Numeric = 0)

Add padding on a numpy array of samples. This works for an arbitrary number of dimensions

batch_pad_torch(array: trw.basic_typing.TorchTensorNCX, padding: trw.basic_typing.ShapeCX, mode: str = 'edge', constant_value: trw.basic_typing.Numeric = 0)

Add padding on a torch array of samples. This works for an arbitrary number of dimensions

flip(array: trw.basic_typing.Tensor, axis: int) → trw.basic_typing.Tensor

Flip an axis of an array

copy(array: trw.basic_typing.Tensor) → trw.basic_typing.Tensor

Copy an array

cutout(image: trw.basic_typing.TensorNCX, cutout_size: Union[trw.basic_typing.ShapeCX, Callable[[], trw.basic_typing.ShapeCX]], cutout_value_fn: CutOutType) → None

Remove a part of the image randomly

cutout_random_ui8_torch(image: torch.Tensor, min_value: int = 0, max_value: int = 255) → None

Replace the image content with a random value in the range [min_value, max_value]

cutout_value_fn_constant(image: trw.basic_typing.Tensor, value: trw.basic_typing.Numeric) → None

Replace the whole image content with a constant value

cutout_random_size(min_size: Sequence[int], max_size: Sequence[int]) → List[int]

Return a random size within the specified bounds.

resize(array: trw.basic_typing.TensorNCX, size: trw.basic_typing.ShapeX, mode: typing_extensions.Literal[nearest, linear] = 'linear') → trw.basic_typing.TensorNCX

Resize the array

stack(sequence, axis=0)

Stack a sequence of arrays along a given axis

normalize(array: trw.basic_typing.TensorNCX, mean: Sequence[float], std: Sequence[float]) → trw.basic_typing.TensorNCX

Normalize a tensor image with mean and standard deviation.

renormalize(data, desired_mean, desired_std, current_mean=None, current_std=None)

Transform the data so that it has the desired mean and standard deviation, element-wise

resample_3d(volume: trw.basic_typing.TensorX, np_volume_spacing: trw.basic_typing.Length, np_volume_origin: trw.basic_typing.Length, min_bb_mm: trw.basic_typing.Length, max_bb_mm: trw.basic_typing.Length, resampled_spacing: trw.basic_typing.Length, interpolation_mode: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', align_corners=False) → trw.basic_typing.TensorX

resample_spatial_info(geometry_moving: trw.transforms.spatial_info.SpatialInfo, moving_volume: trw.basic_typing.TorchTensorNCX, geometry_fixed: trw.transforms.spatial_info.SpatialInfo, tfm: torch.Tensor, interpolation: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', align_corners: bool = False) → trw.basic_typing.TorchTensorNCX

Apply an affine transformation to a given (moving) volume into a given geometry (fixed)

affine_grid_fixed_to_moving(geometry_moving: trw.transforms.spatial_info.SpatialInfo, geometry_fixed: trw.transforms.spatial_info.SpatialInfo, tfm: torch.Tensor, align_corners: bool = False) → torch.Tensor

Calculate a grid that maps a fixed geometry to a transformed moving geometry.

deform_image_random(moving_volumes: List[trw.basic_typing.TorchTensorNCX], control_points: Union[int, Sequence[int]], max_displacement: Optional[Union[float, Sequence[float]]] = None, geometry: Optional[trw.transforms.spatial_info.SpatialInfo] = None, interpolation: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', gaussian_filter_sigma: Optional[float] = None, align_corners: bool = False) → List[trw.basic_typing.TorchTensorNCX]

Non linearly deform an image based on a grid of control points.

random_grid_using_control_points(shape: trw.basic_typing.ShapeNX, control_points: Union[int, Sequence[int]], max_displacement: Optional[Union[float, Sequence[float]]] = None, geometry_moving: Optional[trw.transforms.spatial_info.SpatialInfo] = None, tfm: Optional[torch.Tensor] = None, geometry_fixed: Optional[trw.transforms.spatial_info.SpatialInfo] = None, gaussian_filter_sigma: Optional[float] = None, align_corners: bool = False) → torch.Tensor

Generate random deformation grid (one for each sample)

affine_transformation_translation(t: Sequence[float]) → torch.Tensor

Defines an affine translation for 2D or 3D data

affine_transformation_rotation2d(angle_radian: float) → torch.Tensor

Defines a 2D rotation transform

affine_transformation_scale(s: Sequence[float]) → torch.Tensor

Defines an affine scaling transformation (2D or 3D)

affine_transform(images: trw.basic_typing.TorchTensorNCX, affine_matrices: torch.Tensor, interpolation: str = 'bilinear', padding_mode: str = 'border', align_corners: bool = None) → trw.basic_typing.TorchTensorNCX

Transform a series of images with a series of affine transformations

to_voxel_space_transform(matrix: torch.Tensor, image_shape: trw.basic_typing.ShapeCX) → torch.Tensor

Express the affine transformation in image space coordinates in the range (-1, 1)

apply_homogeneous_affine_transform(transform: torch.Tensor, position: torch.Tensor)

Apply an homogeneous affine transform (4x4 for 3D or 3x3 for 2D) to a position

apply_homogeneous_affine_transform_zyx(transform: torch.Tensor, position_zyx: torch.Tensor)

Apply an homogeneous affine transform (4x4 for 3D or 3x3 for 2D) to a position

criteria_feature_name(batch: trw.basic_typing.Batch, feature_names: Sequence[str]) → Sequence[str]

Return list of feature names which belong to a given set of names

criteria_is_array_4_or_above(batch: trw.basic_typing.Batch) → Sequence[str]

Return list of feature names which are numpy or torch arrays with dim >= 4, typically all n-d images, n >= 2

criteria_is_array_n_or_above(batch: trw.basic_typing.Batch, dim: int) → Sequence[str]

Return list of feature names which are numpy or torch arrays with dim >= dim

criteria_is_tensor(batch: trw.basic_typing.Batch) → Sequence[str]

Return list of feature names which are torch.Tensor

random_fixed_geometry_within_geometries(geometries: Dict[str, trw.transforms.spatial_info.SpatialInfo], fixed_geometry_shape: trw.basic_typing.ShapeX, fixed_geometry_spacing: trw.basic_typing.Length, geometry_selector: Callable[[Sequence[trw.transforms.spatial_info.SpatialInfo]], trw.transforms.spatial_info.SpatialInfo] = find_largest_geometry)

Randomly place a fixed geometry within the largest available geometry.

find_largest_geometry(geometries: Sequence[trw.transforms.spatial_info.SpatialInfo]) → trw.transforms.spatial_info.SpatialInfo

trw.transforms.transform_batch_random_crop(array: trw.basic_typing.TensorNCX, crop_shape: Sequence[Union[int, None]], offsets: Sequence[Sequence[int]] = None, return_offsets: bool = False) → Union[trw.basic_typing.TensorNCX, Tuple[trw.basic_typing.TensorNCX, Sequence[Sequence[int]]]]

Randomly crop a numpy array of samples given a target size. This works for an arbitrary number of dimensions

Parameters
  • array – a numpy or Torch array. Samples are stored in the first dimension

  • crop_shape – a sequence of size len(array.shape)-1 indicating the shape of the crop. If an element of the shape is None, the whole dimension is kept

  • offsets – if None, offsets will be randomly created to crop with crop_shape, else an array indicating the crop position for each sample

  • return_offsets – if True, returns a tuple (cropped array, offsets)

Returns

a cropped array and optionally the crop positions
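
A short sketch of the expected usage (shapes are illustrative):

```
import numpy as np
from trw.transforms import transform_batch_random_crop

images = np.zeros((10, 3, 64, 64), dtype=np.float32)  # [N, C, H, W]

# crop_shape has len(array.shape) - 1 elements; None keeps the full dimension
cropped, offsets = transform_batch_random_crop(
    images, crop_shape=(None, 32, 32), return_offsets=True)
assert cropped.shape == (10, 3, 32, 32)
```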

trw.transforms.batch_crop(images: trw.basic_typing.TensorNCX, min_index: Sequence[int], max_index_exclusive: Sequence[int]) → trw.basic_typing.TensorNCX

Crop an image

Parameters
  • images – images with shape [N * …]

  • min_index – a sequence of size len(array.shape)-1 indicating cropping start

  • max_index_exclusive – a sequence of size len(array.shape)-1 indicating cropping end (excluded)

Returns

the cropped images

trw.transforms.batch_pad_numpy(array: trw.basic_typing.NumpyTensorNCX, padding: trw.basic_typing.ShapeCX, mode: str = 'edge', constant_value: trw.basic_typing.Numeric = 0)

Add padding on a numpy array of samples. This works for an arbitrary number of dimensions

Parameters
  • array – a numpy array. Samples are stored in the first dimension

  • padding – a sequence of size len(array.shape)-1 indicating the width of the padding to be added at the beginning and at the end of each dimension (except for dimension 0)

  • mode – numpy.pad mode

  • constant_value – constant value used if mode == 'constant'

Returns

a padded array

trw.transforms.batch_pad_torch(array: trw.basic_typing.TorchTensorNCX, padding: trw.basic_typing.ShapeCX, mode: str = 'edge', constant_value: trw.basic_typing.Numeric = 0)

Add padding on a torch array of samples. This works for an arbitrary number of dimensions

This function mimics the API of batch_pad_numpy so they can be easily interchanged.

Parameters
  • array – a Torch array. Samples are stored in the first dimension

  • padding – a sequence of size len(array.shape)-1 indicating the width of the padding to be added at the beginning and at the end of each dimension (except for dimension 0)

  • mode – numpy.pad mode. Currently supported are ('constant', 'edge', 'symmetric')

  • constant_value – constant value used if mode == 'constant'

Returns

a padded array
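
Both padding functions share the same API; a minimal sketch (shapes and values are illustrative):

```
import numpy as np
import torch
from trw.transforms import batch_pad_numpy, batch_pad_torch

images = np.zeros((4, 1, 28, 28), dtype=np.float32)

# padding has len(array.shape) - 1 elements (C, H, W); each dimension is
# padded by this amount at the beginning and at the end
padded = batch_pad_numpy(images, padding=(0, 2, 2), mode='constant', constant_value=0)
assert padded.shape == (4, 1, 32, 32)

# the torch version mirrors the numpy API
padded_torch = batch_pad_torch(torch.zeros(4, 1, 28, 28), padding=(0, 2, 2), mode='edge')
```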

trw.transforms.flip(array: trw.basic_typing.Tensor, axis: int) → trw.basic_typing.Tensor

Flip an axis of an array

Parameters
  • array – a numpy.ndarray or torch.Tensor n-dimensional array

  • axis – the axis to flip

Returns

an array with specified axis flipped

trw.transforms.copy(array: trw.basic_typing.Tensor) → trw.basic_typing.Tensor

Copy an array

Parameters

array – a numpy.ndarray or torch.Tensor n-dimensional array

Returns

a copy of the array

trw.transforms.cutout(image: trw.basic_typing.TensorNCX, cutout_size: Union[trw.basic_typing.ShapeCX, Callable[[], trw.basic_typing.ShapeCX]], cutout_value_fn: CutOutType) → None

Remove a part of the image randomly

Parameters
  • image – a numpy.ndarray or torch.Tensor n-dimensional array. Samples are stored on axis 0

  • cutout_size – the size of the region to be occluded, or a callable taking no argument and returning a tuple representing the shape of the region to be occluded (without the N component)

  • cutout_value_fn – the function used to fill the occluded region. It must take the image as argument and modify it directly

Returns

None
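
A minimal sketch combining cutout with the constant fill function (shapes and values are illustrative):

```
import functools
import numpy as np
from trw.transforms import cutout, cutout_value_fn_constant

images = np.ones((2, 3, 64, 64), dtype=np.float32)  # [N, C, H, W]

# occlude a random 16x16 region of each sample; cutout_size is given without
# the N component; the modification is done in place (the function returns None)
cutout(images,
       cutout_size=(3, 16, 16),
       cutout_value_fn=functools.partial(cutout_value_fn_constant, value=0))
```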

trw.transforms.cutout_random_ui8_torch(image: torch.Tensor, min_value: int = 0, max_value: int = 255) → None

Replace the image content with a random value in the range [min_value, max_value]

trw.transforms.cutout_value_fn_constant(image: trw.basic_typing.Tensor, value: trw.basic_typing.Numeric) → None

Replace the whole image content with a constant value

trw.transforms.cutout_random_size(min_size: Sequence[int], max_size: Sequence[int]) → List[int]

Return a random size within the specified bounds.

Parameters
  • min_size – a sequence representing the min size to be generated

  • max_size – a sequence representing the max size (inclusive) to be generated

Returns

a tuple representing the size

trw.transforms.resize(array: trw.basic_typing.TensorNCX, size: trw.basic_typing.ShapeX, mode: typing_extensions.Literal[nearest, linear] = 'linear') → trw.basic_typing.TensorNCX

Resize the array

Parameters
  • array – a N-dimensional tensor, representing 1D to 3D data (3 to 5 dimensional data with dim 0 for the samples and dim 1 for filters)

  • size – a (N-2) list to which the array will be upsampled or downsampled

  • mode – string among (‘nearest’, ‘linear’) specifying the resampling method

Returns

a resized N-dimensional tensor
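
For instance, upsampling a 3D volume (shapes are illustrative):

```
import torch
from trw.transforms import resize

volume = torch.rand(2, 1, 16, 16, 16)  # [N, C, D, H, W]

# size is an (N-2) sequence: one entry per spatial dimension
resized = resize(volume, size=(32, 32, 32), mode='linear')
assert resized.shape == (2, 1, 32, 32, 32)
```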

trw.transforms.stack(sequence, axis=0)

Stack a sequence of arrays along a given axis

Parameters
  • sequence – a sequence of numpy.ndarray or torch.Tensor n-dimensional arrays

  • axis – the axis along which to stack

Returns

the stacked array

trw.transforms.normalize(array: trw.basic_typing.TensorNCX, mean: Sequence[float], std: Sequence[float]) → trw.basic_typing.TensorNCX

Normalize a tensor image with mean and standard deviation.

Given mean: (M1,…,Mn) and std: (S1,..,Sn) for n channels, this transform will normalize each channel of the input torch.Tensor, input[channel] = (input[channel] - mean[channel]) / std[channel]

Parameters
  • array – the torch array to normalize. Expected layout is (sample, filter, d0, … dN)

  • mean – a N-dimensional sequence

  • std – a N-dimensional sequence

Returns

A normalized tensor such that the mean is 0 and std is 1

trw.transforms.renormalize(data, desired_mean, desired_std, current_mean=None, current_std=None)

Transform the data so that it has desired mean and standard deviation element wise

Parameters
  • data – a torch or numpy array

  • desired_mean – the mean to transform data to

  • desired_std – the std to transform data to

  • current_mean – if the mean is known, do not recalculate it (e.g., training mean to be used in the validation split)

  • current_std – if the std is known, do not recalculate it (e.g., training std to be used in the validation split)

Returns

a data with mean desired_mean and std desired_std
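
A short sketch contrasting the two functions (the statistics are illustrative):

```
import torch
from trw.transforms import normalize, renormalize

images = torch.rand(8, 3, 32, 32)  # layout (sample, filter, d0, ..., dN)

# per-channel normalization: one mean / std entry per channel
normalized = normalize(images, mean=(0.48, 0.45, 0.40), std=(0.22, 0.22, 0.22))

# element-wise renormalization; here the statistics are recalculated from the data
rescaled = renormalize(images, desired_mean=0.0, desired_std=1.0)
```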

trw.transforms.resample_3d(volume: trw.basic_typing.TensorX, np_volume_spacing: trw.basic_typing.Length, np_volume_origin: trw.basic_typing.Length, min_bb_mm: trw.basic_typing.Length, max_bb_mm: trw.basic_typing.Length, resampled_spacing: trw.basic_typing.Length, interpolation_mode: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', align_corners=False) → trw.basic_typing.TensorX
trw.transforms.resample_spatial_info(geometry_moving: trw.transforms.spatial_info.SpatialInfo, moving_volume: trw.basic_typing.TorchTensorNCX, geometry_fixed: trw.transforms.spatial_info.SpatialInfo, tfm: torch.Tensor, interpolation: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', align_corners: bool = False) → trw.basic_typing.TorchTensorNCX

Apply an affine transformation to a given (moving) volume into a given geometry (fixed)

Parameters
  • geometry_moving – Defines the geometric space of the moving volume

  • moving_volume – the moving volume (2D or 3D)

  • geometry_fixed – define the geometric space to be resampled

  • tfm – a (dim + 1) x (dim + 1) affine transformation matrix that moves the moving volume

  • interpolation – how to interpolate the moving volume

  • padding_mode – defines how to handle missing (moving) data

  • align_corners – specifies how to align the voxel grids

Returns

a volume with geometric space geometry_fixed. The content is the moving_volume moved by tfm

Notes

the gradient will be propagated through the transform
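
A minimal sketch, assuming millimeter units and the ZYX ordering described for SpatialInfo below (shapes and spacings are illustrative):

```
import torch
from trw.transforms import SpatialInfo, resample_spatial_info

moving = torch.rand(1, 1, 32, 32, 32)  # [N, C, D, H, W]
geometry_moving = SpatialInfo(shape=(32, 32, 32), spacing=(1.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0))
geometry_fixed = SpatialInfo(shape=(64, 64, 64), spacing=(0.5, 0.5, 0.5), origin=(0.0, 0.0, 0.0))

# identity transform: simply resample the moving volume onto the fixed geometry
resampled = resample_spatial_info(
    geometry_moving=geometry_moving,
    moving_volume=moving,
    geometry_fixed=geometry_fixed,
    tfm=torch.eye(4),
    interpolation='linear')
```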

trw.transforms.affine_grid_fixed_to_moving(geometry_moving: trw.transforms.spatial_info.SpatialInfo, geometry_fixed: trw.transforms.spatial_info.SpatialInfo, tfm: torch.Tensor, align_corners: bool = False) → torch.Tensor

Calculate a grid that maps a fixed geometry to a transformed moving geometry.

This can be used to resample a volume to a different geometry / transformation.

Parameters
  • geometry_moving – the moving geometry. This geometry will have an affine transformation tfm applied (e.g., translation, scaling)

  • geometry_fixed – the fixed geometry

  • tfm – a linear transformation that will move moving_volume

  • align_corners – should be False

Returns

a N x D x C x H x W x dim grid

trw.transforms.deform_image_random(moving_volumes: List[trw.basic_typing.TorchTensorNCX], control_points: Union[int, Sequence[int]], max_displacement: Optional[Union[float, Sequence[float]]] = None, geometry: Optional[trw.transforms.spatial_info.SpatialInfo] = None, interpolation: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', gaussian_filter_sigma: Optional[float] = None, align_corners: bool = False) → List[trw.basic_typing.TorchTensorNCX]

Non linearly deform an image based on a grid of control points.

The grid of control points is first uniformly mapped to span the whole image, then the control point position will be randomized using max_displacement. To avoid artifacts at the image boundary, a control point is added with 0 max displacement all around the image.

The gradient can be back-propagated through this transform.

Notes

The deformation field’s max_displacement will not rotate according to geometry_fixed but instead is axis aligned.

Parameters
  • moving_volumes – a list of moving volumes. All volumes will be deformed using the same deformation field

  • control_points – the control points spread on the image at regularly spaced intervals with random max_displacement magnitude

  • max_displacement – specify the maximum displacement of a control point. Range [-1..1]. If None, use the moving volume shape and the number of control points to calculate an appropriately small deformation field

  • geometry – defines the geometry of an image. In particular to handle non-isotropic spacing

  • interpolation – the interpolation of the image with displacement field

  • padding_mode – how to handle data outside the volume geometry

  • align_corners – should be False. The (0, 0) is the center of a voxel

  • gaussian_filter_sigma – if not None, smooth the deformation field using a gaussian filter. The smoothing is done in the control point space

Returns

the deformed images
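
A minimal sketch (the control point count and displacement are illustrative):

```
import torch
from trw.transforms import deform_image_random

moving = torch.rand(2, 1, 64, 64)  # [N, C, H, W]

# all volumes in the list share the same random deformation field
deformed, = deform_image_random(
    [moving],
    control_points=6,
    max_displacement=0.1,
    gaussian_filter_sigma=1.5)
```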

trw.transforms.random_grid_using_control_points(shape: trw.basic_typing.ShapeNX, control_points: Union[int, Sequence[int]], max_displacement: Optional[Union[float, Sequence[float]]] = None, geometry_moving: Optional[trw.transforms.spatial_info.SpatialInfo] = None, tfm: Optional[torch.Tensor] = None, geometry_fixed: Optional[trw.transforms.spatial_info.SpatialInfo] = None, gaussian_filter_sigma: Optional[float] = None, align_corners: bool = False) → torch.Tensor

Generate random deformation grid (one for each sample) based on the number of control points and maximum displacement of the control points.

This is done by decomposing the affine (grid) and deformable components.

The gradient can be back-propagated through this transform.

Notes

The deformation field’s max_displacement will not rotate according to geometry_fixed but will be axis aligned.

Parameters
  • control_points – the control points spread on the image at regularly spaced intervals with random max_displacement magnitude

  • max_displacement – specify the maximum displacement of a control point. Range [-1..1]

  • geometry_moving – geometry of the moving object, in particular to handle non-isotropic spacing. If None, defaults to a geometry with spacing 1 and origin 0

  • align_corners – should be False. The (0, 0) is the center of a voxel

  • shape – the shape of the moving geometry. Must match the geometry_moving if specified

  • geometry_fixed – geometry output (dictate the final geometry). If None, use the same as the geometry_moving

  • tfm – the transformation to be applied to the geometry_moving

  • gaussian_filter_sigma – if not None, smooth the deformation field using a gaussian filter. The smoothing is done in the control point space

Returns

N * X * dim displacement field

trw.transforms.affine_transformation_translation(t: Sequence[float]) → torch.Tensor

Defines an affine translation for 2D or 3D data

For a 3D transformation, returns a 4x4 matrix:

    | 1 0 0 X |
M = | 0 1 0 Y |
    | 0 0 1 Z |
    | 0 0 0 1 |
Parameters

t – a (X, Y, Z) or (X, Y) tuple

Returns

a transformation matrix

trw.transforms.affine_transformation_rotation2d(angle_radian: float) → torch.Tensor

Defines a 2D rotation transform

Parameters

angle_radian – the rotation angle in radians

Returns

a 3x3 transformation matrix

trw.transforms.affine_transformation_scale(s: Sequence[float]) → torch.Tensor

Defines an affine scaling transformation (2D or 3D)

For a 3D transformation, returns 4x4 matrix:

    | Sx 0  0  0 |
M = | 0  Sy 0  0 |
    | 0  0  Sz 0 |
    | 0  0  0  1 |
Parameters

s – a (Sx, Sy, Sz) or (Sx, Sy) tuple

Returns

a transformation matrix

trw.transforms.affine_transform(images: trw.basic_typing.TorchTensorNCX, affine_matrices: torch.Tensor, interpolation: str = 'bilinear', padding_mode: str = 'border', align_corners: bool = None) → trw.basic_typing.TorchTensorNCX

Transform a series of images with a series of affine transformations

Parameters
  • images – 3D or 2D images with shape [N, C, D, H, W] or [N, C, H, W] respectively

  • affine_matrices – a list of size N of 3x4 or 2x3 matrices (see trw.transforms.to_voxel_space_transform)

  • interpolation – the interpolation method. Can be nearest or bilinear

  • padding_mode – the padding to be used for resampled voxels outside the image. Can be 'zeros' | 'border' | 'reflection'

  • align_corners – Geometrically, we consider the pixels of the input as squares rather than points.

Returns

the transformed images

trw.transforms.to_voxel_space_transform(matrix: torch.Tensor, image_shape: trw.basic_typing.ShapeCX) → torch.Tensor

Express the affine transformation in image space coordinates in the range (-1, 1)

Parameters
  • matrix – a transformation matrix for 2D or 3D transformation

  • image_shape – the transformation matrix will be mapped to the image space coordinate system (i.e., the matrix is expressed in “voxels”). Should be a [C, D, H, W] or [C, H, W] shape (no N component)

Returns

a 2x3 or 3x4 transform

See:

this is often used with trw.transforms.affine_transform or torch.nn.functional.affine_grid
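
A sketch tying these functions together: compose homogeneous matrices, map them to the (-1, 1) image space and resample. The batching of the matrices (one per sample) is an assumption of this example:

```
import math
import torch
from trw.transforms import (affine_transform, affine_transformation_rotation2d,
                            affine_transformation_translation, to_voxel_space_transform)

images = torch.rand(4, 1, 32, 32)  # [N, C, H, W]

# rotate then translate (homogeneous 3x3 matrices for 2D data)
matrix = affine_transformation_translation([2.0, 0.0]).mm(
    affine_transformation_rotation2d(math.pi / 8))

# image_shape is given without the N component
voxel_tfm = to_voxel_space_transform(matrix, images.shape[1:])

# one matrix per sample
transformed = affine_transform(images, voxel_tfm.unsqueeze(0).repeat(len(images), 1, 1))
```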

trw.transforms.apply_homogeneous_affine_transform(transform: torch.Tensor, position: torch.Tensor)

Apply an homogeneous affine transform (4x4 for 3D or 3x3 for 2D) to a position

Parameters
  • transform – an homogeneous affine transformation

  • position – XY(Z) position

Returns

a transformed position XY(Z)

trw.transforms.apply_homogeneous_affine_transform_zyx(transform: torch.Tensor, position_zyx: torch.Tensor)

Apply an homogeneous affine transform (4x4 for 3D or 3x3 for 2D) to a position

Parameters
  • transform – an homogeneous affine transformation

  • position_zyx – (Z)YX position

Returns

a transformed position (Z)YX

class trw.transforms.SpatialInfo(shape: trw.basic_typing.ShapeX, patient_scale_transform: Optional[torch.Tensor] = None, origin: Optional[trw.basic_typing.Length] = None, spacing: Optional[trw.basic_typing.Length] = None)

Represent the geometric space of an n-dimensional (2D or 3D) volume.

Concepts: patient scale transform

We often need to work with data in a given geometric space. This can be achieved by mapping the voxel indices of a tensor to the given geometric space, i.e., by applying a linear transform to each voxel location.

This patient transform can be decomposed as multiple linear transforms such as translation, rotation, zoom and shearing. SpatialInfo will encode its geometric space as PST = Translation * (RotationZ *) RotationY * RotationX * Spacing

The matrix is a homogeneous transformation matrix:

```
      | RXx RYx RZx Tx |
PST = | RXy RYy RZy Ty |
      | RXz RYz RZz Tz |
      |  0   0   0  1  |

```

with (RX, RY, RZ) the basis of the geometric space. The spacing is defined as (||RX||_2, ||RY||_2, ||RZ||_2).

Notes

  • functions require keyword (named) arguments (enforced with *), since the xyz / zyx ordering is cumbersome and probably both conventions will need to be supported in the future

Millimeters are used as the (arbitrary) unit for all attributes.

set_patient_scale_transform(self, patient_scale_transform: torch.Tensor) → None
property spacing(self) → numpy.ndarray

Calculate the spacing of the PST. Return the components in ZYX order.

property origin(self) → numpy.ndarray

Return the origin expressed in world space (expressed as ZYX order).

property center(self) → numpy.ndarray

Return the center in world space (expressed as ZYX order).

index_to_position(self, *, index_zyx: torch.Tensor) → torch.Tensor

Map an index to world space

Parameters

index_zyx – coordinate in index space

Returns

position in world space (Z)YX

position_to_index(self, *, position_zyx: torch.Tensor) → torch.Tensor

Map world space coordinate to an index

Parameters

position_zyx – position in world space

Returns

coordinate in index space (Z)YX

sub_geometry(self, *, start_index_zyx: torch.Tensor, end_index_zyx_inclusive: torch.Tensor)

Create a sub-geometry from min and max indices

Parameters
  • start_index_zyx – starting index

  • end_index_zyx_inclusive – ending index (inclusive)

Returns

a new SpatialInfo representing this sub-geometry
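
A minimal sketch of the coordinate mapping (shape, spacing and origin are illustrative):

```
import torch
from trw.transforms import SpatialInfo

geometry = SpatialInfo(shape=(32, 64, 64), spacing=(2.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0))

# coordinates use the ZYX convention and must be passed as keyword arguments
position = geometry.index_to_position(index_zyx=torch.tensor([0.0, 0.0, 0.0]))
index = geometry.position_to_index(position_zyx=position)
```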

class trw.transforms.Transform

Abstraction of a batch transform

abstract __call__(self, batch: trw.basic_typing.Batch) → trw.basic_typing.Batch
class trw.transforms.TransformBatchWithCriteria(criteria_fn: CriteriaFn, transform_fn: Callable[[Sequence[str], trw.basic_typing.Batch], trw.basic_typing.Batch])

Bases: Transform

Helper to apply a given transform function on features that satisfy a criterion

__call__(self, batch: trw.basic_typing.Batch) → trw.basic_typing.Batch
trw.transforms.criteria_feature_name(batch: trw.basic_typing.Batch, feature_names: Sequence[str]) → Sequence[str]

Return list of feature names which belong to a given set of names

trw.transforms.criteria_is_array_4_or_above(batch: trw.basic_typing.Batch) → Sequence[str]

Return list of feature names which are numpy or torch arrays with dim >= 4, typically all n-d images, n >= 2

trw.transforms.criteria_is_array_n_or_above(batch: trw.basic_typing.Batch, dim: int) → Sequence[str]

Return list of feature names which are numpy or torch arrays with dim >= dim

trw.transforms.criteria_is_tensor(batch: trw.basic_typing.Batch) → Sequence[str]

Return list of feature names which are torch.Tensor

class trw.transforms.TransformRandomCropPad(padding: Optional[trw.basic_typing.ShapeCX], criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, mode: typing_extensions.Literal[constant, edge, symmetric] = 'constant', constant_value: trw.basic_typing.Numeric = 0, shape: Optional[trw.basic_typing.ShapeCX] = None)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Add padding to a numpy array of samples and randomly crop back to the original size

Parameters
  • padding – a sequence of size len(array.shape)-1 indicating the width of the padding to be added at the beginning and at the end of each dimension (except for dimension 0). If None, no padding added

  • criteria_fn – function applied on each feature. If satisfied, the feature will be transformed, if not the original feature is returned

  • mode – numpy.pad mode. Currently supported are ('constant', 'edge', 'symmetric')

  • shape – the size of the cropped image. If None, same size as input image

Returns

a randomly cropped batch

class trw.transforms.TransformRandomFlip(axis: int, flip_probability: float = 0.5, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Randomly flip the axis of selected features

class trw.transforms.TransformRandomCutout(cutout_size: Union[trw.basic_typing.ShapeCX, Callable[[], trw.basic_typing.ShapeCX]], criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, probability: float = 1.0, cutout_value_fn: Callable[[trw.basic_typing.TensorNCX], None] = functools.partial(cutout_function.cutout_value_fn_constant, value=0))

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Randomly cut out (occlude) a region of the selected features

class trw.transforms.TransformResize(size: trw.basic_typing.ShapeX, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, mode='linear')

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Resize a tensor to a fixed size

class trw.transforms.TransformNormalizeIntensity(mean: Sequence[numbers.Number], std: Sequence[numbers.Number], criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Normalize a tensor image with mean and standard deviation.

Given mean: (M1,…,Mn) and std: (S1,..,Sn) for n channels, this transform will normalize each channel of the input torch.Tensor, input[channel] = (input[channel] - mean[channel]) / std[channel]

Parameters
  • array – the torch array to normalize. Expected layout is (sample, filter, d0, … dN)

  • mean – a N-dimensional sequence

  • std – a N-dimensional sequence

  • criteria_fn – function applied on each feature. If satisfied, the feature will be transformed, if not the original feature is returned

Returns

A normalized batch such that the mean is 0 and std is 1 for the selected features

class trw.transforms.TransformCompose(transforms: Sequence[trw.transforms.transforms.Transform])

Bases: trw.transforms.transforms.Transform

Sequentially apply a list of transformations

__call__(self, batch: trw.basic_typing.Batch) → trw.basic_typing.Batch
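
A typical augmentation pipeline built with TransformCompose (the transforms and shapes are illustrative):

```
import numpy as np
import trw.transforms

transform = trw.transforms.TransformCompose([
    trw.transforms.TransformRandomCropPad(padding=(0, 4, 4)),  # pad then random crop
    trw.transforms.TransformRandomFlip(axis=3),                # random flip of axis 3
])

batch = {'images': np.zeros((8, 3, 32, 32), dtype=np.float32)}
batch = transform(batch)
```
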
class trw.transforms.TransformAffine(translation_min_max: Sequence[numbers.Number], scaling_min_max: Sequence[numbers.Number], rotation_radian_min_max: Sequence[numbers.Number], isotropic: bool = True, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, padding_mode: str = 'zeros')

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Transform an image using a random affine (2D or 3D) transformation.

Only 2D or 3D transformations are supported.

Notes

the scaling and rotational components of the transformation are performed relative to the image.

_transform(self, features_names, batch)
class trw.transforms.TransformCast(feature_names: Sequence[str], cast_type: str)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Cast tensors to a specified type.

Only numpy.ndarray and torch.Tensor types will be cast

class trw.transforms.TransformRandomCropResize(crop_size: trw.basic_typing.ShapeX, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, resize_mode: typing_extensions.Literal[nearest, linear, none] = 'linear')

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Randomly crop a tensor and resize to its original shape.

Parameters
  • crop_size – a sequence of size len(array.shape)-2 indicating the size of the crop, excluding the N and C components

  • criteria_fn – function applied on each feature. If satisfied, the feature will be transformed, if not the original feature is returned

  • resize_mode – string among (‘nearest’, ‘linear’, ‘none’) specifying the resampling method

Returns

a transformed batch

class trw.transforms.TransformResizeModuloCropPad(multiple_of: Union[int, trw.basic_typing.ShapeX], criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, mode: typing_extensions.Literal[crop, pad] = 'crop', padding_mode: typing_extensions.Literal[edge, constant, symmetric] = 'constant', padding_constant_value: int = 0)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Resize tensors by padding or cropping so that their shape is a multiple of multiple_of.

This can be particularly helpful in encoder-decoder architectures with skip connections, which can impose constraints on the input shape (e.g., the input must be a multiple of 32 pixels).

Parameters
  • multiple_of – a sequence of size len(array.shape)-2 such that shape % multiple_of == 0. To achieve this, the tensors will be padded or cropped.

  • criteria_fn – function applied on each feature. If satisfied, the feature will be transformed, if not the original feature is returned

  • padding_mode – numpy.pad mode. Currently supported are ('constant', 'edge', 'symmetric')

  • mode – one of crop, pad. If pad, the selected tensors will be padded to achieve the size tensor.shape % multiple_of == 0. If crop, the selected tensors will be cropped instead with a randomly selected cropping position.

Returns

dictionary with the selected tensors cropped or padded to the appropriate size
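
For instance, to satisfy a U-Net style shape constraint (shapes are illustrative):

```
import numpy as np
import trw.transforms

# make the spatial shape a multiple of 32, e.g., for an encoder-decoder network
transform = trw.transforms.TransformResizeModuloCropPad(multiple_of=32, mode='pad')

batch = {'images': np.zeros((2, 1, 50, 70), dtype=np.float32)}
batch = transform(batch)  # spatial shape padded from (50, 70) up to (64, 96)
```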

class trw.transforms.TransformResample(resampling_geometry: Union[trw.transforms.spatial_info.SpatialInfo, Callable[[Dict[str, trw.transforms.spatial_info.SpatialInfo]], trw.transforms.spatial_info.SpatialInfo]], get_spatial_info_from_batch_name: get_spatial_info_type, criteria_fn: trw.transforms.transforms.CriteriaFn = transforms.criteria_is_array_4_or_above, interpolation_mode: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros')

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Resample a tensor with spatial information (e.g., a 3D volume with origin and spacing)

trw.transforms.random_fixed_geometry_within_geometries(geometries: Dict[str, trw.transforms.spatial_info.SpatialInfo], fixed_geometry_shape: trw.basic_typing.ShapeX, fixed_geometry_spacing: trw.basic_typing.Length, geometry_selector: Callable[[Sequence[trw.transforms.spatial_info.SpatialInfo]], trw.transforms.spatial_info.SpatialInfo] = find_largest_geometry)

Randomly place a fixed geometry within the largest available geometry.

Parameters
  • geometries – a dictionary of available geometries

  • fixed_geometry_shape – the shape of the returned geometry

  • fixed_geometry_spacing – the spacing of the geometry

  • geometry_selector – select a geometry for the random geometry calculation

Returns

a geometry

trw.transforms.find_largest_geometry(geometries: Sequence[trw.transforms.spatial_info.SpatialInfo]) → trw.transforms.spatial_info.SpatialInfo
class trw.transforms.TransformOneOf(transforms: List[Optional[trw.transforms.transforms.Transform]])

Bases: trw.transforms.transforms.Transform

Randomly select a transform among a set of transforms and apply it

__call__(self, batch: trw.basic_typing.Batch) → trw.basic_typing.Batch
class trw.transforms.TransformRandomDeformation(control_points: Union[int, Sequence[int]] = 6, max_displacement: Optional[Union[float, Sequence[float]]] = 0.5, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None, interpolation: typing_extensions.Literal[linear, nearest] = 'linear', padding_mode: typing_extensions.Literal[zeros, border, reflection] = 'zeros', gaussian_filter_sigma: Optional[float] = 1.5, align_corners: bool = False)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Transform an image using a random deformation field.

Only 2D or 3D transformations are supported.

The gradient can be back-propagated through this transform.

_transform(self, features_names, batch)
class trw.transforms.TransformSqueeze(axis: int, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = criteria_is_array_4_or_above)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Squeeze a dimension of a tensor (i.e., remove one dimension of size 1 at a specified axis)

Only numpy.ndarray and torch.Tensor types will be transformed

class trw.transforms.TransformUnsqueeze(axis: int, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = criteria_is_array_4_or_above)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Unsqueeze a dimension of a tensor.

Only numpy.ndarray and torch.Tensor types will be transformed

class trw.transforms.TransformMoveToDevice(device: torch.device, non_blocking: bool = False, criteria_fn: Optional[trw.transforms.transforms.CriteriaFn] = None)

Bases: trw.transforms.transforms.TransformBatchWithCriteria

Move a tensor to a specified device.

Transferring data from CPU to GPU can take significant time. This transfer time can be masked by moving the data to the GPU as part of the data preprocessing on a single-GPU system.

Note

This requires starting torch using torch.multiprocessing.set_start_method('spawn')

Only torch.Tensor types will be considered
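
A minimal sketch, assuming a CUDA device is available:

```
import torch
import trw.transforms

if __name__ == '__main__':
    # required so CUDA tensors can be created in worker processes
    torch.multiprocessing.set_start_method('spawn')

    transform = trw.transforms.TransformMoveToDevice(
        device=torch.device('cuda:0'), non_blocking=True)
    batch = {'images': torch.zeros(4, 1, 32, 32)}
    batch = transform(batch)
```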