d = pd.read_csv('../data/radiopaedia_cases.csv')
dls = ImageDataLoaders3D.from_df(d, 
                                 item_tfms = ResizeCrop3D(crop_by = (0., 0.1, 0.1), resize_to = (20, 150, 150), perc_crop = True),
                                 bs = 2, 
                                 val_bs = 2)

Helper functions

Some functions from fastai.layers are needed to construct learners (see next notebook), and for this some slight modifications had to be made: the `in_channels` function had to be modified to also accept 3D models, which have 5D weights, and the `num_features_model` function was adapted to pass a size tuple of length 3 instead of 2. The other functions were not changed but were copied to avoid conflicts when loading directly from fastai.
`cnn_learner_3d` is essentially the same function as fastai's `cnn_learner`; it just adds a new callback.

create_body[source]

create_body(arch, n_in=3, pretrained=True, cut=None)

Cut off the body of a typically pretrained `arch` as determined by `cut`

fastai's `create_body` can adapt the number of input channels, but only for 2D convolutions. With slight changes to the code of the two fastai functions, it can be adapted to work with 3D convolutions.

body_3d = create_body(resnet50_3d, pretrained=False, n_in=2)
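A minimal sketch of the channel-adaptation idea, not faimed3d's actual code: like fastai does for 2D convs, the pretrained first-layer weights can be averaged over the original input channels and repeated for the new channel count. The function name `adapt_first_conv_3d` is assumed for illustration.

```python
import torch
import torch.nn as nn

def adapt_first_conv_3d(conv: nn.Conv3d, n_in: int) -> nn.Conv3d:
    """Sketch: return a Conv3d taking `n_in` channels, reusing `conv`'s weights."""
    new_conv = nn.Conv3d(n_in, conv.out_channels, conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    with torch.no_grad():
        # 3D conv weights are 5D: (out, in, depth, height, width)
        w = conv.weight.sum(dim=1, keepdim=True).repeat(1, n_in, 1, 1, 1) / n_in
        new_conv.weight.copy_(w)
    return new_conv

conv = nn.Conv3d(3, 64, kernel_size=7, stride=2, padding=3)
new_conv = adapt_first_conv_3d(conv, 2)
print(new_conv.weight.shape)  # torch.Size([64, 2, 7, 7, 7])
```

Averaging (rather than truncating) the pretrained weights keeps the output activations on roughly the same scale regardless of `n_in`.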

in_channels[source]

in_channels(m)

Return the shape of the first weight layer in `m`.

`in_channels` from fastai only returns a result if `weight.ndim == 4`, but the weights of 3D convolutional layers have 5 dimensions, so the function has to be adapted.

test_eq(in_channels(body_3d), 2)
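The adaptation can be sketched as follows (a standalone version without fastai's `flatten_model`; the name `in_channels_3d` is assumed): simply accept both 4D and 5D weight tensors.

```python
import torch.nn as nn

def in_channels_3d(m: nn.Module) -> int:
    """Sketch: input channels of the first conv-like layer,
    accepting 4D (Conv2d) as well as 5D (Conv3d) weights."""
    for layer in m.modules():
        w = getattr(layer, 'weight', None)
        if w is not None and w.ndim in (4, 5):  # 2D or 3D conv weights
            return w.shape[1]
    raise Exception('No weight layer')

net = nn.Sequential(nn.Conv3d(2, 16, kernel_size=3), nn.ReLU())
print(in_channels_3d(net))  # 2
```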

`num_features_model` is unchanged, but needs to be defined here to correctly call the adapted `in_channels` function.

num_features_model[source]

num_features_model(m)

Return the number of output features for `m`.

model_sizes[source]

model_sizes(m, size=(64, 64))

Pass a dummy input through the model `m` to get the various sizes of activations.

dummy_eval[source]

dummy_eval(m, size=(64, 64))

Evaluate `m` on a dummy input of a certain `size`
dummy_eval(body_3d).size()
model_sizes(body_3d)
[torch.Size([1, 128, 9, 21, 21]),
 torch.Size([1, 256, 9, 21, 21]),
 torch.Size([1, 512, 5, 11, 11]),
 torch.Size([1, 1024, 3, 6, 6]),
 torch.Size([1, 2048, 2, 3, 3])]
test_eq(num_features_model(body_3d), 2048)
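The key change to these helpers is the dummy input: for 3D models it needs a length-3 spatial size (depth, height, width) instead of fastai's 2-tuple. A hedged standalone sketch (the name `dummy_eval_3d` and the channel-inference shortcut are assumptions):

```python
import torch
import torch.nn as nn

def dummy_eval_3d(m: nn.Module, size=(16, 64, 64)):
    """Sketch: evaluate `m` on a dummy 3D input of spatial `size`
    (depth, height, width)."""
    ch = next(m.parameters()).shape[1]  # infer input channels from the first weight
    x = torch.zeros(1, ch, *size)
    return m.eval()(x)

net = nn.Sequential(nn.Conv3d(2, 8, kernel_size=3, stride=2, padding=1))
print(dummy_eval_3d(net, size=(8, 32, 32)).shape)  # torch.Size([1, 8, 4, 16, 16])
```

`model_sizes` then just records the activation shapes of such a dummy pass at each stage, as shown in the output above.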

`create_cnn_model` is unchanged, but needs to be redefined to correctly call `num_features_model`, which in turn calls the changed `in_channels` function.

class Concat[source]

Concat(ni, ndim, dim=1) :: Module

Same as `nn.Module`, but no need for subclasses to call `super().__init__`

fastai performs adaptive concat pooling as the first step in the new head, which is adapted here to 3D.

class AdaptiveConcatPool3d[source]

AdaptiveConcatPool3d(size=None) :: Module

Layer that concats `AdaptiveAvgPool3d` and `AdaptiveMaxPool3d`
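A minimal standalone sketch of what this layer does (not faimed3d's exact implementation): pool to a target size with both max and average pooling and concatenate along the channel dimension, which doubles the number of channels.

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool3d(nn.Module):
    """Sketch: concat `AdaptiveAvgPool3d` and `AdaptiveMaxPool3d` outputs."""
    def __init__(self, size=None):
        super().__init__()
        self.size = size or 1
        self.ap = nn.AdaptiveAvgPool3d(self.size)
        self.mp = nn.AdaptiveMaxPool3d(self.size)

    def forward(self, x):
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

pool = AdaptiveConcatPool3d()
print(pool(torch.randn(2, 512, 4, 7, 7)).shape)  # torch.Size([2, 1024, 1, 1, 1])
```

Because the channels double, the head built on top must expect `2 * nf` input features when concat pooling is used.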

create_head[source]

create_head(nf, n_out, lin_ftrs=None, ps=0.5, concat_pool=True, bn_final=False, lin_first=False, y_range=None)

Model head that takes `nf` features, runs through `lin_ftrs`, and out `n_out` classes.

`create_head` is the same as the fastai function, but uses 3D pooling.
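Roughly, a 3D head looks like the following sketch (layer choices and the name `create_head_3d_sketch` are assumptions, simplified from fastai's configurable version): concat pooling doubles `nf`, then BatchNorm/Dropout/Linear blocks map down to `n_out` classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class _ConcatPool3d(nn.Module):
    """Helper for this sketch: concat adaptive max and average 3D pooling."""
    def forward(self, x):
        return torch.cat([F.adaptive_max_pool3d(x, 1),
                          F.adaptive_avg_pool3d(x, 1)], dim=1)

def create_head_3d_sketch(nf, n_out, ps=0.5):
    """Sketch of a 3D model head with concat pooling (input features: 2 * nf)."""
    return nn.Sequential(
        _ConcatPool3d(), nn.Flatten(),
        nn.BatchNorm1d(2 * nf), nn.Dropout(ps / 2), nn.Linear(2 * nf, 512),
        nn.ReLU(inplace=True),
        nn.BatchNorm1d(512), nn.Dropout(ps), nn.Linear(512, n_out))

head = create_head_3d_sketch(2048, 2)
print(head(torch.randn(2, 2048, 2, 3, 3)).shape)  # torch.Size([2, 2])
```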

create_cnn_model_3d[source]

create_cnn_model_3d(arch, n_out, cut=None, pretrained=True, n_in=3, init=kaiming_normal_, custom_head=None, concat_pool=True, **kwargs)

Create custom convnet architecture using `arch`, `n_in` and `n_out`. Identical to fastai func

create_cnn_model_3d is similar to create_cnn_model.

model = create_cnn_model_3d(resnet50_3d, 2, 1, pretrained = False)
model(torch.randn(2, 3, 3, 10, 10)).size()
torch.Size([2, 2])