ABSTRACT: Identifying an artificial neural network is an NP-hard problem in general. In this talk we address conditions for the exact identification of fully connected feedforward neural networks with one or two hidden layers from a number of samples that scales polynomially with the input dimension and the network size. The exact identification is obtained by computing approximate second-order strong or weak differentials of the network and their unique and stable decomposition into nonorthogonal rank-1 terms. The procedure combines several novel matrix optimization algorithms over the space of second-order differentials. As a byproduct we introduce a new whitening procedure for matrices, which allows the stable decomposition of symmetric matrices into nonorthogonal rank-1 terms by reducing the problem to the standard orthonormal decomposition case. We show that this algorithm practically achieves information-theoretic recovery bounds. We illustrate the results with several numerical experiments.
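The whitening idea mentioned above can be sketched numerically. The following is a minimal illustration, not the authors' algorithm: given two symmetric matrices built from the same nonorthogonal rank-1 directions (here M1 = A A^T and M2 = A D A^T, with A and D chosen arbitrarily for the demonstration), whitening by W = M1^{-1/2} turns the columns of W A into an orthonormal system, so the nonorthogonal decomposition of M2 reduces to a standard symmetric eigendecomposition of W M2 W.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Nonorthogonal rank-1 directions: the columns of A (illustrative choice).
A = rng.standard_normal((d, d))
# Distinct positive weights, so the whitened eigenproblem is non-degenerate.
D = np.diag(np.linspace(1.0, 2.0, d))

M1 = A @ A.T          # symmetric matrix with unit weights
M2 = A @ D @ A.T      # symmetric matrix with distinct weights

# Whitening: W = M1^{-1/2}, computed via the eigendecomposition of M1.
# Then (W A)(W A)^T = W M1 W = I, i.e. W A has orthonormal columns.
evals, evecs = np.linalg.eigh(M1)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitened problem: W M2 W = (W A) D (W A)^T is an orthonormal
# eigendecomposition, so eigh recovers W A up to signs (eigenvalues
# are distinct and eigh sorts them in the same ascending order as D).
mu, U = np.linalg.eigh(W @ M2 @ W)

# Undo the whitening to recover the nonorthogonal directions.
A_rec = np.linalg.inv(W) @ U   # equals A column-by-column, up to sign
```

The recovered columns of `A_rec` match those of `A` up to sign, and `A_rec @ diag(mu) @ A_rec.T` reproduces `M2`; this is the sense in which whitening reduces the nonorthogonal case to the orthonormal one.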
M. Fornasier, J. Vybíral and I. Daubechies. Robust and Resource Efficient Identification of Shallow Neural Networks by Fewest Samples, arXiv:1804.01592
M. Fornasier, T. Klock and M. Rauchensteiner. Robust and resource efficient identification of two hidden layer neural networks, arXiv:1907.00485
M. Fornasier, K. Schnass and J. Vybíral. Learning functions of few arbitrary linear parameters in high dimensions, Found. Comput. Math., 12(2):229-262, 2012.