During the past decade, the increased availability of computational resources and data, together with advances in machine learning technologies, in particular deep learning methods, has led to tremendous performance improvements in applications such as computer vision, speech recognition, and natural language processing. However, the theoretical understanding of these new technologies has not advanced at the same pace. Research in the Communication Theory Group aims to close this gap by developing and improving the theoretical foundations of modern machine learning methods. Recent concrete research topics include deep neural networks, matrix completion, super-resolution of signals, subspace clustering, and sparse signal recovery. Of particular interest are fundamental performance limits, which characterize the performance of the best possible algorithm for the task at hand. Knowing these limits is crucial as machine learning technologies are increasingly deployed in safety-critical applications such as healthcare and finance.