# BasicBlock3d with a custom norm layer, here BatchNorm3d without learnable affine parameters
BasicBlock3d(4, 64, norm_layer=partial(nn.BatchNorm3d, affine=False))
# Bottleneck3d with default settings
Bottleneck3d(4, 64)
Identity layer
Medical images, especially 3D images, are large, so the batch size is limited when training on normal consumer hardware. This can lead to problems with the normalization layers, as the performance of batch normalization degrades for batch sizes below 32 and drops sharply below 8. This is discussed here, here and here. Replacing the normalization layer with an identity layer might be a quick solution that avoids altering the whole architecture.
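As a rough sketch, such an identity layer only needs to accept the usual norm-layer constructor arguments and return its input unchanged. The signature below is an assumption; the actual IdentityLayer defined in this library may look different.

import torch.nn as nn

class IdentityLayer(nn.Module):
    "Drop-in replacement for a norm layer: ignores all constructor arguments, returns its input unchanged."
    def __init__(self, *args, **kwargs):
        super().__init__()

    def forward(self, x):
        return x

It can then be passed wherever a norm layer is expected, e.g. BasicBlock3d(4, 64, norm_layer=IdentityLayer).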
# ResNet-18-style ResNet3D with LeakyReLU activations, dropout and a final softmax, applied to a dummy batch
ResNet3D(BasicBlock3d, [2, 2, 2, 2], final_softmax=True, act_layer=nn.LeakyReLU, ps=0.75)(torch.randn(10, 3, 8, 64, 64)).size()
model = resnet18_3d()
input = torch.rand(2, 3, 15, 80, 80)  # batch of 2, 3 channels, 15 slices of 80 x 80 px
output = model(input)
print(output.size())
model = resnet101_3d()
input = torch.rand(2, 3, 15, 64, 64)  # same layout as above, but with smaller 64 x 64 slices
output = model(input)
print(output.size())
# build a 3D ResNet-34 backbone (here with IdentityLayer) and print the sizes of the returned feature maps
m = build_backbone(resnet34_3d, 8, IdentityLayer, 5)
xb = m(torch.randn(10, 5, 10, 50, 50))
for x in xb: print(x.size())