Sparse voxel-based 3D convolutional neural networks (CNNs) are widely used for various 3D vision tasks.
Sparse voxel-based 3D CNNs convert the 3D input into sparse non-empty voxels and perform 3D convolutions
only on those voxels. We propose a simple yet effective padding scheme, interpolation-aware padding, which
pads a few empty voxels adjacent to the non-empty ones and involves them in the 3D CNN computation, so that
all eight neighboring voxels exist when point-wise features are computed via trilinear interpolation. For
fine-grained 3D vision tasks where point-wise features are essential, such as semantic segmentation and 3D
detection, our network achieves higher prediction accuracy than existing networks that use nearest-neighbor
interpolation, or normalized trilinear interpolation with the zero-padding or octree-padding scheme. Through
extensive comparisons on various 3D segmentation and detection tasks, we demonstrate the superiority of 3D
sparse CNNs equipped with our padding scheme in conjunction with feature interpolation.
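To make the padding rule concrete, the following is a minimal sketch (not the paper's implementation; the
function name, data layout, and voxel-center convention are assumptions) of how one could identify the empty
voxels to activate so that every query point has all eight trilinear neighbors available:

\begin{verbatim}
import numpy as np

def interpolation_aware_padding(active_voxels, points, voxel_size):
    """Hypothetical helper: return the empty voxels to pad so that every
    point's eight trilinear-interpolation neighbors are active.

    active_voxels : iterable of integer (i, j, k) non-empty voxel indices
    points        : (N, 3) float array of query point coordinates
    voxel_size    : edge length of a voxel (centers assumed at (idx+0.5)*size)
    """
    active = set(map(tuple, active_voxels))
    padded = set()
    for p in np.asarray(points, dtype=np.float64):
        # Lower corner of the 2x2x2 cell of voxel centers surrounding p.
        base = np.floor(p / voxel_size - 0.5).astype(int)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    nb = (base[0] + dx, base[1] + dy, base[2] + dz)
                    if nb not in active:
                        padded.add(nb)  # empty neighbor: pad with a zero feature
    return padded

# Example: one active voxel and a point near its corner; the other seven
# voxels of the interpolation cell are reported as padding candidates.
print(sorted(interpolation_aware_padding([(0, 0, 0)],
                                         np.array([[0.9, 0.9, 0.9]]),
                                         voxel_size=1.0)))
\end{verbatim}

The padded voxels would then be appended to the sparse tensor's active set before convolution, so trilinear
interpolation at the query points never falls back to zero-padding or nearest-neighbor lookups.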