Deep Neural Networks (DNNs) with shape priors: volume preservation, star-shape and convex shape representation by DNNs
Speaker: Xue-Cheng Tai, Department of Mathematics, Hong Kong Baptist University.
Convex shapes and star-shapes are common in daily life, and it is important to design proper techniques to represent them. Deep Convolutional Neural Networks (DCNNs) are widely used to segment objects from images and videos, but it remains difficult to guarantee that the objects output by a DCNN are convex or star-shaped. In this work, we propose a technique that can be easily integrated into commonly used DCNNs for image segmentation and guarantees that the outputs are convex shapes or star-shapes. We can also incorporate volume preservation into the networks. The idea extends to other shape representation applications, including 3D surface and shape reconstruction. The method is flexible: it can handle multiple objects and even allows some of the objects to be non-convex or non-star-shaped. Our method is based on the dual representation of the sigmoid activation function in DCNNs.
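As an illustrative aside (not code from the talk): one standard dual representation of the sigmoid is that sigmoid(x) maximizes the linear term ux plus the binary entropy of u over u in (0, 1). The short numerical check below sketches this fact; all names and the grid-search setup are our own illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_energy(u, x):
    # ux + binary entropy H(u); the maximizer over u in (0, 1) is sigmoid(x)
    eps = 1e-12  # guard against log(0) at the grid endpoints
    return u * x - u * np.log(u + eps) - (1.0 - u) * np.log(1.0 - u + eps)

x = 1.5
us = np.linspace(1e-4, 1.0 - 1e-4, 100001)  # fine grid over (0, 1)
u_star = us[np.argmax(dual_energy(us, x))]   # grid maximizer of the dual energy
print(u_star, sigmoid(x))                    # the two values agree closely
```

Because the entropy term is strictly concave, the maximizer is unique, which is what makes this dual viewpoint convenient for imposing constraints on the relaxed indicator u.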
In the dual space, the shape priors can be guaranteed by a simple quadratic constraint on a binary representation of the shapes. Moreover, our method can also integrate spatial regularization and other shape priors using a soft thresholding dynamics (STD) method. The regularization makes the boundary curves of the segmented objects simultaneously smooth and convex/star-shaped. We design a very stable active set projection algorithm to solve our model numerically.
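To give a flavor of a soft-thresholding-dynamics-style update (a hypothetical sketch under our own assumptions, not the exact STD scheme of the talk): smooth the current relaxed indicator, then soft-threshold it with a sigmoid. The kernel, parameters, and update rule below are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def std_step(u, score, eps=0.1, lam=1.0):
    """One illustrative soft-thresholding-dynamics-style update:
    smooth u, then soft-threshold with a sigmoid at temperature eps.
    A 3x3 average stands in for a heat-kernel convolution."""
    pad = np.pad(u, 1, mode='edge')
    smooth = sum(pad[i:i + u.shape[0], j:j + u.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    # lam * (2*smooth - 1) rewards agreement with the smoothed neighborhood
    return sigmoid((score + lam * (2.0 * smooth - 1.0)) / eps)

# toy example: a noisy score field; iterating drives u toward a smooth,
# nearly binary indicator (u * (1 - u) becomes small everywhere)
rng = np.random.default_rng(0)
score = rng.normal(0.5, 0.3, size=(16, 16))
u = sigmoid(score)
for _ in range(20):
    u = std_step(u, score)
print(float(np.max(u * (1.0 - u))))
```

The small temperature eps pushes the sigmoid toward a hard threshold, which is why the iterates end up nearly binary while the smoothing term regularizes the boundary.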
This algorithm forms a new plug-and-play DCNN layer, called CS-STD, whose outputs are guaranteed to be nearly binary segmentations of convex-shaped or star-shaped objects.
As an application example, we apply the convex-shape/star-shape prior layer to retinal image segmentation, taking the popular DeepLabV3+ as the backbone network. Experimental results on several public datasets show that our method is efficient and outperforms classical DCNN segmentation methods.
This talk is based on joint work with Jun Liu, S. Luo and X. Wang.