Understanding Neural Networks via Polyhedral Geometry - Christoph Hertrich
Neural networks with rectified linear unit (ReLU) activations are one of the standard models in modern machine learning. Despite their practical importance, fundamental theoretical questions concerning ReLU networks remain open to this day. For instance, what is the precise set of (piecewise linear) functions representable by ReLU networks with a given depth? And which functions can we represent with polynomial-size neural networks? In this talk I will explain how we can use techniques from polyhedral geometry and combinatorial optimization to make progress towards resolving these questions.
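
As a small illustration of the kind of object at play (a sketch of my own, not material from the talk): the maximum of two numbers is a piecewise linear function that a ReLU network with a single hidden layer represents exactly, via the identity max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y). A minimal NumPy check of this identity:

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One-hidden-layer ReLU network computing max(x, y) exactly.
# (Illustrative example only, not code from the talk.)
W1 = np.array([[1.0, -1.0],   # hidden unit 1: x - y
               [0.0,  1.0],   # hidden unit 2: y
               [0.0, -1.0]])  # hidden unit 3: -y
w2 = np.array([1.0, 1.0, -1.0])

def net(x, y):
    return w2 @ relu(W1 @ np.array([x, y]))

assert net(3.0, 5.0) == 5.0
assert net(-2.0, -7.0) == -2.0

How the required depth grows when one takes the maximum of more than two numbers is exactly the sort of representability question the talk addresses.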
Based on joint works with Amitabh Basu, Marco Di Summa, Christian Haase, Georg Loho, Leon Sering, and Martin Skutella.

