Principles of Neural Network Architecture Design - Invertibility and Domain Knowledge
Publication date
2019-10-30
Authors
Supervisor
Reviewers
Abstract
Neural network architectures allow a tremendous variety of design choices. In this work, we study two principles underlying these architectures: first, the design and application of invertible neural networks (INNs); second, the incorporation of domain knowledge into neural network architectures. After introducing the mathematical foundations of deep learning, we address the invertibility of standard feedforward neural networks from a mathematical perspective. These results motivate our proposed invertible residual networks (i-ResNets). This architecture class is then studied in two scenarios: First, we propose ways to use i-ResNets as a normalizing flow and demonstrate their applicability to high-dimensional generative modeling. Second, we study the excessive invariance of common deep image classifiers and discuss its consequences for adversarial robustness. We conclude with a study of convolutional neural networks for tumor classification based on imaging mass spectrometry (IMS) data. For this application, we propose an adapted architecture guided by our knowledge of the domain of IMS data and show its superior performance on two challenging tumor classification datasets.
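To make the core idea behind i-ResNets concrete, the sketch below illustrates the general principle of an invertible residual block (as introduced by Behrmann et al.), not the dissertation's actual implementation: a residual map y = x + g(x) becomes invertible when g is constrained to be contractive (Lipschitz constant below 1), and the inverse can then be recovered by a fixed-point iteration. All sizes and the contraction factor `c` are illustrative assumptions.

```python
# Minimal sketch of an invertible residual block: y = x + g(x) with Lip(g) < 1,
# inverted via the Banach fixed-point iteration x <- y - g(x).
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d))
c = 0.9                              # target Lipschitz bound (< 1), illustrative
W *= c / np.linalg.norm(W, 2)        # crude spectral normalization of the weight


def g(x):
    """Contractive residual branch: Lip(g) <= c < 1 (ReLU is 1-Lipschitz)."""
    return np.maximum(W @ x, 0.0)


def forward(x):
    return x + g(x)


def inverse(y, n_iter=50):
    """Recover x from y = x + g(x) by iterating x <- y - g(x)."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x


x = rng.standard_normal(d)
y = forward(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))     # near zero: the block is numerically invertible
```

Because g is a contraction, the iteration in `inverse` converges geometrically at rate c; this is the property that allows residual networks with spectrally constrained weights to be used as normalizing flows.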
Keywords
Deep Learning; Invertible Neural Networks; Adversarial Examples; Imaging Mass Spectrometry; Normalizing Flows
Institution
Department
Document type
Dissertation
Secondary publication
No
Language
English
Files
Name
00108536-1.pdf
Size
6.75 MB
Format
Adobe PDF
Checksum (MD5)
be63b15605d8220a08e21ef49fdb57d2