Duration: 1 semester | Frequency: every winter semester | Credit points: 4
Degree programme, subject area, and semester:
- Master Psychologie 2016 (elective), elective area, any semester
- Master Biophysik 2023 (elective), elective area, any semester
- Master Medieninformatik 2020 (elective), elective area, any semester
- Master Medizinische Ingenieurwissenschaft 2020 (elective), elective area, any semester
- Master Entrepreneurship in digitalen Technologien 2020 (elective), elective area, any semester
- Master Informatik 2019 (elective), elective area, any semester
Courses:
- CS4295-Ü: Deep Learning (exercise, 2 SWS)
- CS4295-V: Deep Learning (lecture, 2 SWS)

Workload:
- 75 hours of self-study
- 45 hours of classroom attendance
Course content:
- Foundations and Deep Learning Basics (Learning Paradigms, Classification and Regression, Underfitting and Overfitting)
- Shallow Neural Networks (Basic Neuron Model, Multilayer Perceptrons, Backpropagation, Computational Graphs, Universal Approximation Theorem, No-Free-Lunch Theorems, Inductive Biases)
- Optimization (Stochastic Gradient Descent, Momentum Variants, Adaptive Optimizers; see the training sketch after this list)
- Convolutional Neural Networks (1D Convolution, 2D Convolution, 3D Convolution, ReLUs and Variants, Downsampling and Upsampling Techniques, Transposed Convolution)
- Regularization (Early Stopping, L1 and L2 Regularization, Label Smoothing, Dropout Strategies, Batch Normalization)
- Very Deep Networks (Highway Networks, Residual Blocks, ResNet Variants, DenseNets)
- Dimensionality Reduction (PCA, t-SNE, UMAP, Autoencoder)
- Generative Neural Networks (Variational Autoencoder, Generative Adversarial Networks, Diffusion Models)
- Graph Neural Networks (Graph Convolutional Networks, Graph Attention Networks)
- Fooling Deep Neural Networks (Adversarial Attacks, White Box and Black Box Attacks, One-Pixel Attacks)
- Physics-Aware Deep Learning (Physical Knowledge as Inductive Bias, PINN, PhyDNet, Neural ODE, FINN)
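As a pointer to what the first content blocks cover in practice (a multilayer perceptron, backpropagation over a computational graph, and gradient-based optimization), a minimal PyTorch sketch follows; the toy data, network size, and hyperparameters are assumed for illustration only.

import torch
import torch.nn as nn

# Toy regression data (assumed for illustration): y = sin(x) plus noise.
x = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# A shallow network: one hidden layer with ReLU activation.
model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass builds the computational graph
    loss.backward()              # backpropagation via auto-differentiation
    optimizer.step()             # gradient step with torch.optim.SGD (momentum variant)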
Qualification objectives/competencies:
- Students gain a fundamental understanding of deep learning basics such as backpropagation, computational graphs, and auto-differentiation
- Students understand the implications of inductive biases
- Students gain a comprehensive understanding of the most relevant deep learning approaches
- Students learn to analyze the challenges in deep learning tasks and to identify well-suited approaches for solving them
- Students understand the pros and cons of various deep learning models
- Students know how to analyze models and results, improve model parameters, and interpret model predictions and their relevance
Award of credit points and grading through:
- Written or oral examination, as determined by the lecturer
Module coordinator and lecturers:
- Institute staff
- Prof. Dr. Sebastian Otte
Literature:
- Goodfellow, I., Bengio, Y., & Courville, A. (2016): Deep Learning. MIT Press. ISBN 978-0262035613
- Prince, S. J. D. (2023): Understanding Deep Learning. MIT Press. ISBN 978-0262048644
- Deisenroth, M. P., Faisal, A. A., & Ong, C. S. (2020): Mathematics for Machine Learning. Cambridge University Press. ISBN 978-1108470049
- Bishop, C. M. (2006): Pattern Recognition and Machine Learning. Springer. ISBN 978-0387310732
- Recent publications on related topics
Language:
- Offered in English only
Remarks:
Admission requirements for taking the module:
- None
Admission requirements for participation in module examination(s):
- Successful completion of exercise assignments as specified at the beginning of the semester
Module exam(s):
- CS4295-L1: Deep Learning, exam, 90 min
According to the resolution of the Computer Science examination board of 19.8.2024, students of the Master Informatik programme (study regulations from 2019 onward) may choose this module in area 5 (elective subject).
Last change: 23.9.2024